path | concatenated_notebook
---|---|
SupervisedLearning/Tutorial01_RegressaoLinear_KNN.ipynb | ###Markdown
Supervised Learning We begin our study of machine learning with the type of learning called **Supervised**. > * **Supervised Learning:** A training set of examples with the correct responses (targets) is provided and, based on this training set, the algorithm generalises to respond correctly to all possible inputs. This is also called learning from exemplars. A supervised algorithm is a function that, given a set of labeled examples, builds a *predictor*. The labels assigned to the examples come from a known domain. If this domain is a set of nominal values, we are dealing with a *classification* problem; if it is an infinite, ordered set of values, we are dealing with a *regression* problem. The resulting predictor receives a different name depending on the task: a classifier (for the first kind of label) or a regressor (for the second). A classifier (or regressor) is itself a function that receives an unlabeled example and assigns it a label among the possible values: in a regression problem this label lies in the real interval assumed by the problem, and in a classification task it is one of the defined classes. Formally, following (FACELI et al., 2011): *given a set of observed pairs $D=\{(x_i, f(x_i)), i = 1, ..., n\}$, where $f$ is an unknown function, a predictive (supervised) ML algorithm learns an approximation $f'$ of the unknown function $f$. This approximate function, $f'$, allows us to estimate the value of $f$ for new observations of $x$.* There are two situations for $f$: * **Classification:** $y_i = f(x_i) \in \{c_1,...,c_m\}$, i.e., $f(x_i)$ takes values in a discrete, unordered set; * **Regression:** $y_i = f(x_i) \in R$, i.e., $f(x_i)$ takes values in an infinite, ordered set of values. Linear Regression We will show how regression works through a method called linear regression. This tutorial is based on the following materials: * Linear Regression tutorial: https://github.com/justmarkham/DAT4/blob/master/notebooks/08_linear_regression.ipynb * Slides on Linear Regression: http://pt.slideshare.net/perone/intro-ml-slides20min * Chapter 3 of the book "An Introduction to Statistical Learning", available at: http://www-bcf.usc.edu/~gareth/ISL/ * The book "Inteligência Artificial - Uma Abordagem de Aprendizado de Máquina", available at: https://www.amazon.com.br/dp/8521618808/ref=cm_sw_r_tw_dp_x_MiGdybV5B9TTT For our work we will use the *Advertising* dataset provided by the book *"An Introduction to Statistical Learning"*. It consists of 3 attributes representing the advertising spend (in thousands of dollars) on TV, radio and newspaper for a given product; in addition, the number of sales (in thousands of units) is known for each instance. Let's explore the dataset below:
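First, though, a minimal hedged sketch (on tiny made-up arrays, not the Advertising data) of the difference between a classifier and a regressor in scikit-learn: the classifier predicts a label from a discrete set, while the regressor predicts a real value.
###Code
# Minimal sketch: same inputs, one discrete target (classification) and one continuous target (regression).
# The tiny arrays are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

X_toy = np.array([[1.0], [2.0], [3.0], [4.0]])
y_class = np.array([0, 0, 1, 1])            # labels from a discrete set -> classification
y_reg = np.array([1.2, 1.9, 3.1, 4.2])      # values from an ordered, continuous set -> regression

clf = LogisticRegression().fit(X_toy, y_class)  # learns an approximation f' that outputs a class
reg = LinearRegression().fit(X_toy, y_reg)      # learns an approximation f' that outputs a real number
print(clf.predict([[2.5]]), reg.predict([[2.5]]))
###Output
_____no_output_____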
###Code
# Imports needed for the Linear Regression part
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
%matplotlib inline
###Output
_____no_output_____
###Markdown
The first step is to load the dataset. It is available at: http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv. To load it we use the [**Pandas**](http://pandas.pydata.org/) library. The details of this library are beyond the scope of these tutorials, so we will simply use it without going into the operations performed; basically, we use it to load the data files and to plot data in charts. More information can be found in the library's documentation.
###Code
# Load the dataset and print its first ten rows
data = pd.read_csv("http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv", index_col=0)
print(data.head(10))
###Output
TV radio newspaper sales
1 230.1 37.8 69.2 22.1
2 44.5 39.3 45.1 10.4
3 17.2 45.9 69.3 9.3
4 151.5 41.3 58.5 18.5
5 180.8 10.8 58.4 12.9
6 8.7 48.9 75.0 7.2
7 57.5 32.8 23.5 11.8
8 120.2 19.6 11.6 13.2
9 8.6 2.1 1.0 4.8
10 199.8 2.6 21.2 10.6
###Markdown
The *dataset* has 3 attributes: *TV*, *Radio* and *Newspaper*. Each corresponds to the amount of dollars spent on advertising in that medium for a specific product. The response (*Sales*) is the number of units sold of each product. This *dataset* has 200 instances. To get a better view, let's plot the information in the dataset.
###Code
fig, axs = plt.subplots(1, 3, sharey=True)
data.plot(kind='scatter', x='TV', y='sales', ax=axs[0], figsize=(16, 8))
data.plot(kind='scatter', x='radio', y='sales', ax=axs[1])
data.plot(kind='scatter', x='newspaper', y='sales', ax=axs[2])
###Output
_____no_output_____
###Markdown
Our goal is to analyze the data and draw some conclusions from it. Basically, we want to answer the following questions: *** Based on this data, how should we spend the advertising money in the future? *** In other words: * *Is there a relationship between advertising spend and the number of sales?* * *How strong is that relationship?* * *Which advertising media contribute to sales?* * *What is the effect of each advertising medium on sales?* * *Given a specific advertising spend, is it possible to predict how much will be sold?* To explore these and other questions we will initially use **Simple Linear Regression**. Simple Linear Regression As the name says, simple linear regression is a very (very++) simple method for predicting values **(Y)** from a single variable **(X)**. The model assumes an approximately linear relationship between X and Y. Mathematically, we can write this relationship as $Y \approx \beta_0 + \beta_1X$, where $\approx$ can be read as *approximately*. $\beta_0$ and $\beta_1$ are two unknown constants representing the intercept of the line with the vertical axis ($\beta_0$) and the slope (angular coefficient) of the line ($\beta_1$). The two constants are known as the coefficients or parameters of the model. The purpose of linear regression is to use the known data to estimate these two values and define the approximate model $\hat{y} = \hat{\beta_0} + \hat{\beta_1}x$, where $\hat{y}$ indicates an estimated value of $Y$ given $X = x$. With this equation we can predict, in this case, the sales of a given product based on a specific TV advertising spend. But how can we estimate these values? Estimating the coefficients In practice, $\beta_0$ and $\beta_1$ are unknown, so before making estimates we must determine them from the data we already have. Consider $(x_1,y_1), (x_2,y_2), ..., (x_n, y_n)$, $n$ pairs of instances observed in a dataset, where the first value is an observation of $X$ and the second of $Y$; in the advertising dataset these are the 200 values seen above. The goal when building the linear regression model is to estimate $\beta_0$ and $\beta_1$ such that the resulting linear model represents the available data as well as possible. In other words, we want coefficient values for which the resulting line is as close as possible to the observed data; essentially, we consider many candidate lines and analyze which one best approximates the data. There are several ways to measure this "closeness". One of them is the RSS (*residual sum of squares*), given by $\sum_{i=1}^{N}{(\hat{y_i}-y_i)^2}$, where $\hat{y_i}$ is the estimated value of y and $y_i$ the real value. The figure below shows an example with the estimated values and the residual differences: the red points are the observed data, the blue line is the fitted model and the gray lines are the residual differences between the estimates and the real values. Let's estimate these parameters using *scikit-learn*. Applying the linear regression model The first step is to separate the data (*features*) from the classes (*labels*) that will be used to train our model.
###Code
# Load the training data and the labels
feature_cols = ['TV']
X = data[feature_cols] # Training data
y = data.sales # Labels for the training data
###Output
_____no_output_____
###Markdown
Next, let's instantiate scikit-learn's Linear Regression model and train it with the data.
###Code
lm = LinearRegression() # Instantiate the model
lm.fit(X, y) # Train it on the training data
###Output
/Users/adolfoguimaraes/Desenvolvimento/ludiicos/tensorflow/tensorenv/lib/python3.6/site-packages/scipy/linalg/basic.py:1018: RuntimeWarning: internal gelsd driver lwork query error, required iwork dimension not returned. This is likely the result of LAPACK bug 0038, fixed in LAPACK 3.2.2 (released July 21, 2010). Falling back to 'gelss' driver.
warnings.warn(mesg, RuntimeWarning)
###Markdown
As mentioned before, the model learned values for $\beta_0$ and $\beta_1$ from the dataset. Let's look at the values it found.
###Code
# Print beta_0
print("Valor de Beta_0: " + str(lm.intercept_))
# Print beta_1
print("Valor de Beta_1: " + str(lm.coef_[0]))
###Output
Valor de Beta_0: 7.03259354913
Valor de Beta_1: 0.047536640433
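###Markdown
As an optional cross-check, the same coefficients can be computed in closed form: they are the values that minimize the RSS introduced above. The short sketch below, using the `data` DataFrame already loaded, should reproduce the numbers printed by scikit-learn.
###Code
# Optional sketch: closed-form least-squares estimates of beta_0 and beta_1 for sales ~ TV.
import numpy as np
x = data['TV'].values
y_obs = data['sales'].values
beta_1 = ((x - x.mean()) * (y_obs - y_obs.mean())).sum() / ((x - x.mean()) ** 2).sum()
beta_0 = y_obs.mean() - beta_1 * x.mean()
print('Closed-form Beta_0:', beta_0)
print('Closed-form Beta_1:', beta_1)
###Output
_____no_output_____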
###Markdown
These are the values of $\beta_0$ and $\beta_1$ in the equation of our simple linear regression model, which takes only one attribute into account. With these values it is possible to estimate how much will be sold given a certain TV advertising spend. Moreover, the coefficient $\beta_1$ tells us more about the problem: the value of $0.047536640433$ indicates that each additional unit spent on TV advertising implies an increase of $0.047536640433$ in sales. In other words, each additional $1,000$ spent on TV is associated with an increase of about 47.537 units in sales. Let's use these values to estimate how much will be sold if we spend $50000$ on TV: $y = 7.03259354913 + 0.047536640433 \times 50$
###Code
7.03259354913+0.047536640433*50
###Output
_____no_output_____
###Markdown
In this way, we would predict sales of about 9,409 units. However, our goal is not to do this by hand; the idea is to build the model and use it to estimate values. For that we use the *predict* method. We can estimate for a single input:
###Code
lm.predict([[50]])
###Output
_____no_output_____
###Markdown
Or for several:
###Code
lm.predict([[50], [200], [10]])
###Output
_____no_output_____
###Markdown
To better understand how linear regression works, let's visualize the fitted model on a chart.
###Code
'''
The code below makes predictions for the smallest and largest values of X in the training set. These values
are used to build a line that is plotted over the training data.
'''
X_new = pd.DataFrame({'TV': [data.TV.min(), data.TV.max()]}) # Smallest and largest values of X in the training set
preds = lm.predict(X_new) # Predictions for these values
data.plot(kind='scatter', x='TV', y='sales') # Plot the training data
plt.plot(X_new, preds, c='red', linewidth=2) # Plot the fitted line
###Output
_____no_output_____
###Markdown
The red line represents the linear regression model built from the given data. Evaluating the model To evaluate the model we use a metric called $R^2$ (*R-squared*, or coefficient of determination). (From [Wikipedia](https://pt.wikipedia.org/wiki/R%C2%B2)) *The coefficient of determination, also called R², is a measure of how well a generalized linear statistical model, such as linear regression, fits the observed values. R² ranges from 0 to 1 and indicates, as a percentage, how much of the observed values the model can explain: the higher the R², the more explanatory the model and the better it fits the sample. For example, if a model's R² is 0.8234, it means that 82.34% of the dependent variable can be explained by the regressors present in the model.* To understand the metric better, consider the following chart: *Image source: https://github.com/justmarkham/DAT4/* Note that the function drawn in red fits the data better than the blue and green lines; visually, the red curve clearly describes the distribution of the plotted data best. Let's compute the *R-squared* of the fitted model using the *score* method, which receives the training data as parameters.
###Code
lm.score(X, y)
###Output
_____no_output_____
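###Markdown
As a sanity check, $R^2$ can also be computed by hand as $1 - RSS/TSS$. The sketch below, using the simple model `lm` fitted above, should match the value returned by `score`.
###Code
# Hand-computed R^2 = 1 - RSS/TSS for the simple TV model; it should match lm.score(X, y).
y_hat = lm.predict(X)
rss = ((y - y_hat) ** 2).sum()
tss = ((y - y.mean()) ** 2).sum()
print(1 - rss / tss)
###Output
_____no_output_____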
###Markdown
On its own this value does not tell us much; however, it will be quite useful when we compare this model with others further on. Multiple Linear Regression We can extend the previous model to work with more than one attribute, the so-called *Multiple Linear Regression*. Mathematically we would have: $y \approx \beta_0 + \beta_1 x_1 + ... + \beta_n x_n$ Each $x$ represents an attribute and each attribute has its own coefficient. For our dataset we would have: $y \approx \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$ Let's build the model for this case:
###Code
# Load X and y from the dataset
feature_cols = ['TV','radio','newspaper']
X = data[feature_cols]
y = data.sales
#Instantiate and train the linear regression model
lm = LinearRegression()
lm.fit(X, y)
#Print the coefficients found
print("Valor de Beta_0: ")
print(str(lm.intercept_))
print()
print("Valores de Beta_1, Beta_2, ..., Beta_n: ")
print(list(zip(feature_cols, lm.coef_)))
###Output
Valor de Beta_0:
2.93888936946
Valores de Beta_1, Beta_2, ..., Beta_n:
[('TV', 0.045764645455397601), ('radio', 0.18853001691820448), ('newspaper', -0.0010374930424762578)]
###Markdown
The model built was: $y \approx 2.93888936946 + 0.045764645455397601 \times TV + 0.18853001691820448 \times Radio - 0.0010374930424762578 \times Newspaper$ Just as in the first example, we can use the *predict* method to predict unknown values.
###Code
lm.predict([[100, 25, 25], [200, 10, 10]])
###Output
_____no_output_____
###Markdown
Evaluating the model, we get the following $R^2$ value:
###Code
lm.score(X, y)
###Output
_____no_output_____
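###Markdown
Before interpreting the results, here is a small, hedged sketch comparing the training $R^2$ of this model with that of a model fitted without the *Newspaper* attribute; the meaning of this comparison is discussed below.
###Code
# R^2 with and without the Newspaper attribute (both computed on the training data).
lm_no_news = LinearRegression()
lm_no_news.fit(data[['TV', 'radio']], data.sales)
print('R^2 with Newspaper:   ', lm.score(X, y))
print('R^2 without Newspaper:', lm_no_news.score(data[['TV', 'radio']], data.sales))
###Output
_____no_output_____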
###Markdown
Understanding the results Let's analyze some of the results obtained with the two models built above. The first thing to check is the sign of the coefficients: they are positive for the *TV* and *Radio* attributes and negative for *Newspaper*. This means that advertising spend is positively related to sales for the first two attributes, while for *Newspaper* the spend is negatively associated with sales. Another thing we can notice is that the *R-squared* increased when we increased the number of attributes, which usually happens with this metric. Basically, we can conclude that this last model has a higher *R-squared* than the previous model that considered only TV as an attribute, meaning it provides a better "fit" to the given data. However, *R-squared* is not the best metric to evaluate such models. If we carry out a deeper statistical analysis (which is beyond the scope of this course; details can be found [here](https://github.com/justmarkham/DAT4/blob/master/notebooks/08_linear_regression.ipynb)) we find that the *Newspaper* attribute does not (statistically) influence total sales, so in theory we could discard it. Even so, if we compute the *R-squared* of a model without *Newspaper* and of the model with *Newspaper*, the second will be higher than the first. **This task is left as an exercise ;)** KNN: k-Nearest Neighbors At the beginning of this tutorial we treated the supervised learning problem from two points of view. The first was regression: we showed how to use linear regression to predict values in an interval. The second is the problem of classifying instances into classes. To illustrate this problem we will work with KNN, one of the simplest classification techniques. The basic idea of KNN is that we can classify an unknown instance based on information about its nearest neighbors. To do so, we view the data as points plotted in a Cartesian coordinate system and use the distance between points to identify which ones are closest. To learn a bit more about KNN, watch [this video](https://www.youtube.com/watch?v=UqYde-LULfs). To start, let's analyze the following dataset:
###Code
data = pd.read_csv("http://www.data2learning.com/datasets/basehomemulher.csv", index_col=0)
data
###Output
_____no_output_____
###Markdown
The data contains height and weight information collected from men and women. Plotting this information, we get:
###Code
colors = {0:'red', 1:'blue'}
# Plot the training data
data.plot(kind='scatter', x='peso', y='altura',c=data['classe'].apply(lambda x: colors[x]))
###Output
_____no_output_____
###Markdown
Suppose that, based on this data, we want to classify a new instance, with height 1.70 and weight 50. Plotting this point on the chart (the new instance is represented by the x), we get:
###Code
colors = {0:'red', 1:'blue'}
# Plot the training data
data.plot(kind='scatter', x='peso', y='altura',c=data['classe'].apply(lambda x: colors[x]))
plt.plot([50], [1.70], 'x', c='green')
###Output
_____no_output_____
###Markdown
KNN classifies the new instance based on its nearest neighbors; in this case, the new instance would be classified as a woman. The comparison is made with the $k$ nearest neighbors: for example, if we consider the 3 nearest neighbors and, of these 3, two are women and one is a man, the instance is classified as a woman, since that is the class of the majority of the neighbors. The distance between two points can be computed in several ways; the scikit-learn library lists [a number of distance metrics](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html) that can be used. Let's consider a new point and simulate what the KNN algorithm does.
###Code
colors = {0:'red', 1:'blue'}
# Plot the training data
data.plot(kind='scatter', x='peso', y='altura',c=data['classe'].apply(lambda x: colors[x]))
plt.plot([77], [1.68], 'x', c='green')
###Output
_____no_output_____
###Markdown
We will work with the point **{'altura': 1.68, 'peso': 77}** (height 1.68, weight 77) and compute its distance to all the other points. In this example we use the Euclidean distance: $\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$. To keep things simple, we will use our own implementation of the Euclidean distance.
###Code
import math
# Compute the Euclidean distance between two points
def euclideanDistance(instance1, instance2, length):
distance = 0
for x in range(length):
distance += pow((instance1[x] - instance2[x]), 2)
return math.sqrt(distance)
# For display only: convert the numeric values to string labels
def convert_label(value):
if value == 0.0: return 'Mulher'
else: return 'Homem'
# 0 = mulher (woman), 1 = homem (man)
for index, row in data.iterrows():
print(convert_label(row['classe']), '%0.2f' % euclideanDistance([row['peso'], row['altura']],[77, 1.68], 2))
###Output
Mulher 27.00
Mulher 24.00
Mulher 17.00
Mulher 15.00
Homem 14.00
Homem 25.00
Homem 28.00
Homem 26.00
Homem 10.00
###Markdown
Once we have computed the distance from the new point to every other point in the dataset, we check the $k$ closest points and see which class predominates among them. Considering the 3 nearest neighbors ($k=3$), we have: * Homem (man): 10.0 * Homem: 14.0 * Mulher (woman): 15.0 So the selected instance would be classified as **Homem**. And if we considered $k=5$? * Homem: 10.0 * Homem: 14.0 * Mulher: 15.0 * Mulher: 17.0 * Mulher: 24.0 In this case the instance would be classified as **Mulher**.
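In code, this majority vote could look like the following hedged sketch, which reuses the `euclideanDistance` and `convert_label` functions defined above:
###Code
# Hedged sketch: sort all points by their distance to [77, 1.68], take the k closest and vote.
def knn_vote(k):
    dists = sorted(
        (euclideanDistance([row['peso'], row['altura']], [77, 1.68], 2), convert_label(row['classe']))
        for index, row in data.iterrows()
    )
    labels = [label for _, label in dists[:k]]
    return max(set(labels), key=labels.count)  # majority class among the k nearest neighbours

print('k=3 ->', knn_vote(3))
print('k=5 ->', knn_vote(5))
###Output
_____no_output_____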
###Markdown
We can see that the value of $k$ has a strong influence on how objects are classified. Later on we will use the model's accuracy to determine the best value of $k$. When $k$ is too small, the model is more sensitive to noisy points in the data; when $k$ is too large, the neighborhood may include elements of other classes. It is also worth noting that we usually choose odd values of $k$ to avoid ties. A special case of KNN is $k = 1$. Consider the example in the following image: **Training dataset** With $k = 1$ we can build a classification map, as shown below: **Classification map for KNN (k=1)** > *Image Credits: Data3classes, Map1NN, Map5NN by Agor153. Licensed under CC BY-SA 3.0* A new instance is classified according to the region in which it falls. To wrap up, two points are worth highlighting. The first is that in some cases it is necessary to normalize the values of the training set because of the discrepancy between the attribute scales. For example, we may have height in the range 1.50 to 1.90, weight in the range 60 to 100 and salary in the range 800 to 1500; this difference in scales can make the distance measurements dominated by a single attribute. The other point concerns the advantages and disadvantages of this technique: the main advantage of KNN is that it is a simple model to implement, but it has a certain computational cost when calculating the distances between points, and the quality of the classification can be severely harmed by the presence of noise in the data. Implementing KNN with scikit-learn Let's implement KNN using scikit-learn and perform classification on the Iris dataset.
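Before that, here is a brief, hedged sketch of the normalization point mentioned above: the attributes can be put on a comparable scale before computing distances, for example with scikit-learn's `StandardScaler` applied to the height/weight data.
###Code
# Sketch: standardize 'peso' and 'altura' so that neither attribute dominates the Euclidean distance.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaled = scaler.fit_transform(data[['peso', 'altura']])
print(scaled[:3])  # each column now has (approximately) mean 0 and standard deviation 1
###Output
_____no_output_____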
###Code
# Import the dataset
from sklearn.datasets import load_iris
data_iris = load_iris()
X = data_iris.data
y = data_iris.target
###Output
_____no_output_____
###Markdown
When instantiating the KNN model we must pass the *n_neighbors* parameter, which corresponds to $k$, the number of nearest neighbors to be considered.
###Code
# Import and instantiate the KNN model with k = 1
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
###Output
_____no_output_____
###Markdown
With the model instantiated, let's train it with the training data.
###Code
knn.fit(X, y)
###Output
_____no_output_____
###Markdown
Just as we did with linear regression, we can use the trained model to predict data that has not been analyzed yet.
###Code
# The predict method returns the class assigned to each instance
predict_value = knn.predict([[3, 5, 4, 2],[1,2,3,4]])
print(predict_value)
print(data_iris.target_names[predict_value[0]])
print(data_iris.target_names[predict_value[1]])
###Output
[2 2]
virginica
virginica
###Markdown
Evaluating and choosing the best model To choose the best model we must first evaluate the candidates. A classification model is evaluated with a metric called **accuracy**, which corresponds to the model's hit rate: a model with $90\%$ accuracy predicted the correct class in $90\%$ of the analyzed cases. Note that choosing the best model depends on several factors, which is why we need to test different models on different datasets with different parameters; this will be covered in more depth later in the course. To keep things simple, let's work with two KNN models on the Iris dataset, the first with k=3 and the other with k=10.
###Code
knn_3 = KNeighborsClassifier(n_neighbors=3)
knn_3.fit(X, y)
knn_10 = KNeighborsClassifier(n_neighbors=10)
knn_10.fit(X, y)
accuracy_3 = knn_3.score(X, y)
accuracy_10 = knn_10.score(X, y)
print('Acurácia com k = 3: ', '%0.4f'% accuracy_3)
print('Acurácia com k = 10: ', '%0.4f'% accuracy_10)
###Output
Acurácia com k = 3: 0.9600
Acurácia com k = 10: 0.9800
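###Markdown
The scores above are computed on the same data used to fit the models, so they tend to be optimistic. As a hedged sketch, `cross_val_score` (already imported in the first cell of this notebook) gives a less biased estimate and can help choose $k$:
###Code
# Sketch: 5-fold cross-validated accuracy for a few values of k on the Iris data.
for k in [1, 3, 5, 10, 15]:
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5)
    print('k =', k, '-> mean accuracy: %0.4f' % scores.mean())
###Output
_____no_output_____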
|
examples/example2-2_UHFQA.ipynb | ###Markdown
UHFQA Just like the driver for the HDAWG in the previous example, we now use the `UHFQA` instrument driver (here instantiated through `zhinst.qcodes`).
###Code
import numpy as np
import matplotlib.pyplot as plt
import qcodes as qc
import zhinst.qcodes as ziqc
uhfqa = ziqc.UHFQA("qa1", "dev2266", interface="1gbe", host="10.42.0.226")
print([k for k in uhfqa.submodules.keys()])
print([k for k in uhfqa.parameters.keys()])
###Output
['channels', 'awg', 'stats', 'oscs', 'triggers', 'status', 'dios', 'auxins', 'system', 'sigins', 'sigouts', 'features', 'auxouts', 'qas']
['IDN', 'crosstalk_matrix', 'result_source', 'integration_time', 'averaging_mode', 'clockbase']
###Markdown
AWG core of the UHFQA The UHFQA also features one *AWG Core*.
###Code
print([k for k in uhfqa.awg.parameters.keys()])
###Output
['outputs', 'output1', 'output2', 'gain1', 'gain2']
###Markdown
Readout channels of the UHFQA The UHFQA comes with signal processing streams for up to ten channels in parallel. The settings for the readout are grouped by channel in a list of all ten `channels`. Each item in the `channels` property of the UHFQA is an *Instrument Channel* that represents the signal processing path for one of the ten channels.
###Code
print([k for k in uhfqa.channels[0].parameters.keys()])
###Output
['rotation', 'threshold', 'readout_frequency', 'readout_amplitude', 'phase_shift', 'result', 'enabled']
###Markdown
Each of the channels follows these signal processing steps: 1. Demodulation of the input signal 2. Rotation in the complex plane 3. Thresholding for binary result values The values for the rotation and thresholding stages can be set using the `rotation` and `threshold` parameters of the *channel*. The standard mode for the demodulation of input signals is the *weighted integration* mode. This corresponds to setting the integration weights for the two quadratures of the input signal to oscillate at a given demodulation frequency. When enabling the weighted integration with `ch.enable()`, the integration weights for the two quadratures are set. The demodulation frequency is set to the parameter `readout_frequency`. Enabling weighted integration for the first three channels of the UHFQA and setting their demodulation frequency could look like this:
###Code
freqs = [85.6e6, 101.3e6, 132.8e6]
for ch in uhfqa.channels[:3]:
ch.enable()
ch.readout_frequency(freqs[ch.index])
###Output
_____no_output_____
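###Markdown
The rotation and thresholding stages mentioned above are set in the same way. A hedged sketch for the first channel could look like this (the numerical values are placeholders, not calibrated settings):
###Code
# Hedged sketch: adjust the rotation and threshold stages of the first readout channel.
# The values below are arbitrary placeholders chosen only for illustration.
ch = uhfqa.channels[0]
ch.rotation(30)       # rotation applied in the complex plane
ch.threshold(0.5)     # threshold used to produce binary result values
print(ch.rotation(), ch.threshold())
###Output
_____no_output_____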
###Markdown
The result vector of each channel can be retrieved from the instrument by calling the read-only parameter *result*.
###Code
print(uhfqa.channels[0].result.__doc__)
###Output
Node: qas/0/result/data/0/wave
Description: Returns the result vector of the readout channel.
Type: Numpy array
Properties: Read
Unit: None
Parameter class:
* `name` result
* `label` Result
* `unit` None
* `vals` None
###Markdown
Readout parameters There are readout parameters that are not specific to one single channel but affect all ten readout channels. These are: * the `integration_time`: the time in seconds used for integrating the input signals * the `result_source` lets the user select at which point in the signal processing path the `result` value should be taken * the `averaging_mode` specifies if the hardware averages on the device should be taken in a *sequential* or *cyclic* way * the `crosstalk_matrix` specifies a 10 x 10 matrix that can be calibrated to compensate for crosstalk between the channels These four *parameters* are attributes of the UHFQA instrument driver.
###Code
print(uhfqa.integration_time.__doc__)
print(uhfqa.result_source.__doc__)
print(uhfqa.averaging_mode.__doc__)
print(uhfqa.crosstalk_matrix.__doc__)
###Output
The 10x10 crosstalk suppression matrix that multiplies the 10 signal paths. Can be set only partially.
Parameter class:
* `name` crosstalk_matrix
* `label` Crosstalk Matrix
* `unit`
* `vals` None
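###Markdown
A hedged example of setting these device-wide parameters could look like the sketch below; the values and option names are illustrative assumptions, and the docstrings printed above describe the exact behavior.
###Code
# Hedged sketch: example settings for the device-wide readout parameters.
import numpy as np
uhfqa.integration_time(2e-6)          # integrate the input signals for 2 us (placeholder value)
uhfqa.averaging_mode('sequential')    # assumed option name; the alternative discussed above is 'cyclic'
uhfqa.crosstalk_matrix(np.eye(10))    # start from the identity, i.e. no crosstalk compensation
###Output
_____no_output_____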
###Markdown
Other important readout parameters can be accessed through the *nodetree*, for example the * *result length*: the number of points to acquire* *result averages*: the number of hardware averages
###Code
print(uhfqa.qas[0].result.length.__doc__)
print(uhfqa.qas[0].result.averages.__doc__)
###Output
* `Node`: /DEV2266/QAS/0/RESULT/AVERAGES
* `Description`: log2 of the number of averages to perform, i.e. 0 means no averaging, 1 means 2 values are averaged, etc. Maximum setting is 15 meaning 2^15 values are averaged.
* `Properties`: Read, Write, Setting
* `Type`: Integer (64 bit)
* `Unit`: None
Parameter class:
* `name` averages
* `label` averages
* `unit`
* `vals` None
###Markdown
*Arm* the UHFQA readout The `arm(...)` method of the UHFQA prepares the device for data acquisition. It enables the *Results Acquisition* and resets the acquired points to zero. This should be done before every measurement. The method also includes a shortcut for setting the values *result length* and *result averages*: they can be specified as keyword arguments. If the keyword arguments are not specified, nothing is changed.
###Code
uhfqa.arm(length=1e3, averages=2**5)
###Output
_____no_output_____
###Markdown
UHFQA Just like the driver for the HDAWG in the previous example, we now use the `tk.UHFQA` instrument driver.
###Code
import zhinst.toolkit as tk
uhfqa = tk.UHFQA("qa1", "dev2496", interface="1gbe", host="10.42.0.226")
uhfqa.setup() # set up data server connection
uhfqa.connect_device() # connect device to data server
###Output
Successfully connected to data server at 10.42.0.2268004 api version: 6
Successfully connected to device DEV2496 on interface 1GBE
###Markdown
AWG core of the UHFQA The UHFQA also features one *AWG Core*.
###Code
uhfqa.awg
###Output
_____no_output_____
###Markdown
Readout channels of the UHFQA The UHFQA comes with signal processing streams for up to ten channels in parallel. The settings for the readout are grouped by channel in a list of all ten `channels`. Each item in the `channels` property of the UHFQA is one `ReadoutChannel` object that represents the signal processing path for one of the ten channels.
###Code
uhfqa.channels[0]
###Output
_____no_output_____
###Markdown
Each of the channels follows these signal processing steps: 1. Demodulation of the input signal 2. Rotation in the complex plane 3. Thresholding for binary result values The values for the rotation and thresholding stages can be set using the `rotation` and `threshold` parameters of the *channel*. The standard mode for the demodulation of input signals is the *weighted integration* mode. This corresponds to setting the integration weights for the two quadratures of the input signal to oscillate at a given demodulation frequency. When enabling the weighted integration with `ch.enable()`, the integration weights for the two quadratures are set. The demodulation frequency is set to the parameter `readout_frequency`. Enabling weighted integration for the first three channels of the UHFQA and setting their demodulation frequency could look like this:
###Code
freqs = [85.6e6, 101.3e6, 132.8e6]
for ch in uhfqa.channels[:3]:
ch.enable()
ch.readout_frequency(freqs[ch.index])
###Output
_____no_output_____
###Markdown
The result vector of each channel can be retrieved from the instrument by calling the read-only parameter *result*.
###Code
uhfqa.channels[0].result
uhfqa.channels[0].result()
###Output
_____no_output_____
###Markdown
Readout parameters There are readout parameters that are not specific to one single channel but affect all ten readout channels. These are: * the `integration_time`: the time in seconds used for integrating the input signals * the `result_source` lets the user select at which point in the signal processing path the `result` value should be taken * the `averaging_mode` specifies if the hardware averages on the device should be taken in a *sequential* or *cyclic* way * the `crosstalk_matrix` specifies a 10 x 10 matrix that can be calibrated to compensate for crosstalk between the channels These four *parameters* are attributes of the UHFQA instrument driver.
###Code
uhfqa.integration_time
uhfqa.result_source
uhfqa.averaging_mode
uhfqa.crosstalk_matrix()
###Output
_____no_output_____
###Markdown
Other important readout parameters can be accessed through the *nodetree*, for example the * *result length*: the number of points to acquire* *result averages*: the number of hardware averages
###Code
uhfqa.nodetree.qa.result.length
uhfqa.nodetree.qa.result.averages
###Output
_____no_output_____
###Markdown
*Arm* the UHFQA readout The `arm(...)` method of the UHFQA prepares the device for data acquisition. It enables the *Results Acquisition* and resets the acquired points to zero. This should be done before every measurement. The method also includes a shortcut for setting the values *result length* and *result averages*: they can be specified as keyword arguments. If the keyword arguments are not specified, nothing is changed.
###Code
uhfqa.arm(length=1e3, averages=2**5)
###Output
_____no_output_____ |
examples/03_example/.ipynb_checkpoints/03_example_notebook-checkpoint.ipynb | ###Markdown
2D harmonic oscillator `vs` two 1D harmonic oscillators
###Code
import phlab
from matplotlib import pyplot as plt
ws=phlab.rixs()
###Output
_____no_output_____
###Markdown
Here we will create a few models and compare the results
###Code
model1 = ws.model_single_osc(name = 'first mode')
model2 = ws.model_single_osc(name = 'second mode')
model3 = ws.model_double_osc( name= '2d')
###Output
creating model : /Users/lusigeondzian/github/phlab/examples/03_example/first mode
/Users/lusigeondzian/github/phlab/examples/03_example/first mode/_input/
no input found
creating new input
warning: please check new input
number of models : 1
creating model : /Users/lusigeondzian/github/phlab/examples/03_example/second mode
/Users/lusigeondzian/github/phlab/examples/03_example/second mode/_input/
no input found
creating new input
warning: please check new input
number of models : 2
creating model : /Users/lusigeondzian/github/phlab/examples/03_example/2d
no input found
creating new input
warning: please check new input
number of models : 3
###Markdown
- Key input parameters: Model 1 (first mode):
###Code
model1.input['coupling'] = 0.09
model1.input['omega_ph'] = 0.03
model1.input['gamma_ph'] = 0.001
###Output
_____no_output_____
###Markdown
Model 2 (second mode):
###Code
model2.input['coupling'] = 0.1
model2.input['omega_ph'] = 0.08
model2.input['gamma_ph'] = 0.001
###Output
_____no_output_____
###Markdown
Model 3 (2D model):
###Code
model3.input['coupling0'] = 0.09
model3.input['omega_ph0'] = 0.03
model3.input['coupling1'] = 0.1
model3.input['omega_ph1'] = 0.08
model3.input['nm'] = 15
model3.input['gamma'] = 0.105
model3.input['gamma_ph'] = 0.001
model3.color = 'r'
###Output
_____no_output_____
###Markdown
Run all three models :
###Code
for model in [model1,model2,model3]:
model.run()
plt.figure(figsize = (10,5))
vitem=ws.visual(model_list=[model3],exp=[])
plt.plot(model1.x, (model1.y)/max(model1.y+model2.y),
color = 'skyblue',
linewidth = 2,
label = 'model1',alpha = 1)
plt.plot(model2.x, (model2.y)/max(model1.y+model2.y),
color = 'lightpink',
linewidth = 2,
label = 'model2',alpha = 1)
plt.plot(model1.x, (model1.y+model2.y)/max(model1.y+model2.y),
color = 'b',
linewidth = 2,
label = 'model1 + model2')
plt.xlim([-0.1,0.6])
vitem.show(scale = 0)
###Output
no experiment in visual
|
Assign_SVM_Salary_data.ipynb | ###Markdown
Problem Statement: Prepare a classification model using SVM for the salary data. Data Description: age -- age of a person; workclass -- a grouping of work; education -- education of an individual; maritalstatus -- marital status of an individual; occupation -- occupation of an individual; relationship -- relationship of an individual; race -- race of an individual; sex -- gender of an individual; capitalgain -- profit received from the sale of an investment; capitalloss -- a decrease in the value of a capital asset; hoursperweek -- number of hours worked per week; native -- native country of an individual; Salary -- salary of an individual (the target variable)
###Code
import pandas as pd
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from sklearn import metrics
import warnings
warnings.filterwarnings("ignore")
from sklearn.preprocessing import LabelEncoder
# Importing the train dataset
salary_train = pd.read_csv("G:/data sceince/Assignments/SVM/SalaryData_Train(1).csv")
salary_train.head()
# Importing the test dataset
salary_test = pd.read_csv("G:/data sceince/Assignments/SVM/SalaryData_Test(1).csv")
salary_test.head()
salary_train.info()
salary_test.info()
salary_train.shape
salary_train.tail()
#Converting the Y variable into labels
label_encoder = LabelEncoder()
salary_test['Salary'] = label_encoder.fit_transform(salary_test.Salary)
salary_train['Salary'] = label_encoder.fit_transform(salary_train.Salary)
# converting the categorical columns into dummy variables
salary_train = pd.get_dummies(salary_train)
salary_test = pd.get_dummies(salary_test)
salary_train.dtypes
# assigning the training data to x_train and y_train and test data to x_test and y_test
x_train = salary_train.drop('Salary', axis = 1)
y_train = salary_train['Salary']
x_test = salary_test.drop('Salary', axis = 1)
y_test = salary_test['Salary']
svm = SVC(kernel = 'linear', gamma = 0.1, C = 1)
svm.fit(x_train,y_train)
preds = svm.predict(x_test)
preds
accuracy = metrics.accuracy_score(preds,y_test)
accuracy
###Output
_____no_output_____ |
module4/4_medinadiego_assignment_regression_classification_4.ipynb | ###Markdown
Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Assignment- [ ] Watch Aaron's [video 1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video 2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.- [ ] Do train/validate/test split with the Tanzania Waterpumps data.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your validation accuracy score.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.--- Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guidezeros-replace-missing-values)- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).--- Data Dictionary FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. 
You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational--- Generate a submissionYour code to generate a submission file may look like this:```python estimator is your model or pipeline, which you've fit on X_train X_test is your pandas dataframe or numpy array, with the same number of rows, in the same order, as test_features.csv, and the same number of columns, in the same order, as X_trainy_pred = estimator.predict(X_test) Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the indexsample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission['status_group'] = y_predsubmission.to_csv('your-submission-filename.csv', index=False)```If you're working locally, the csv file is saved in the same directory as your notebook.If you're using Google Colab, you can use this code to download your submission csv file.```pythonfrom google.colab import filesfiles.download('your-submission-filename.csv')```---
###Code
import os, sys
in_colab = 'google.colab' in sys.modules
# If you're in Colab...
if in_colab:
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Install required python packages
!pip install -r requirements.txt
# Change into directory for module
os.chdir('module4')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read the Tanzania Waterpumps data
# train_features.csv : the training set features
# train_labels.csv : the training set labels
# test_features.csv : the test set features
# sample_submission.csv : a sample submission file in the correct format
import pandas as pd
train_features = pd.read_csv('../data/waterpumps/train_features.csv')
train_labels = pd.read_csv('../data/waterpumps/train_labels.csv')
test_features = pd.read_csv('../data/waterpumps/test_features.csv')
sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv')
assert train_features.shape == (59400, 40)
assert train_labels.shape == (59400, 2)
assert test_features.shape == (14358, 40)
assert sample_submission.shape == (14358, 2)
train_features.head()
train_labels.head() # undo later
mode_map = {'functional': 0, 'non functional': 1, 'functional needs repair': 2}
train_labels['status_group'] = train_labels['status_group'].replace(mode_map)
# train_labels.head()
# functional: 0
# non functional: 1
# functional needs repair: 2
from sklearn.model_selection import train_test_split
x_training, x_validation = train_test_split(train_features, random_state=10)
# train_labels.status_group.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Adding y_variable to training and validation
###Code
# splitting train_labels to add them to x_training and x_validation!
training_y_labels, validation_y_labels = train_test_split(train_labels, random_state=10)
x_training = x_training.merge(training_y_labels, on='id')
x_validation = x_validation.merge(validation_y_labels, on='id')
x_training.head()
# both have the same shape as their corresponding sets
print(training_y_labels.shape)
print(validation_y_labels.shape)
print(x_training.shape)
print(x_validation.shape)
###Output
(44550, 41)
(14850, 41)
###Markdown
Baselines
###Code
y_train = x_training['status_group']
y_train.mode()
from sklearn.metrics import mean_absolute_error
# baseline
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train) # both have to be in Series
mae = mean_absolute_error(y_train, y_pred) # for it to work
print('MAE. Not accuracy metric: ' + str(mae))
x_validation.head()
from sklearn.metrics import accuracy_score
acurracy_s = accuracy_score(y_train, y_pred)
print('Training accuracy score: ', str(acurracy_s))
# how much does it differ from validation dataset?
y_val = x_validation['status_group']
majority_class_2 = y_val.mode()
y_predict = [majority_class_2] * len(y_val)
ac_v = accuracy_score(y_val, y_predict)
print('Validation accuracy score: ', str(ac_v))
###Output
Training accuracy score: 0.5410998877665545
Validation accuracy score: 0.549023569023569
###Markdown
Linear Reg
###Code
features = ['id', 'construction_year', 'longitude', 'latitude']
x_training[features].isnull().sum()
# 0 null values, no need to use imputer for lin reg?
from sklearn.linear_model import LinearRegression
model_linear = LinearRegression()
# not using imputer, encoder here
# reminder!
# y_train = x_training['status_group']
# y_val = x_validation['status_group']
features = ['id', 'construction_year', 'longitude', 'latitude']
x_train = x_training[features] # these for now
x_val = x_validation[features]
# In Lecture, training feature was used (population)
# y_train = y_training['status_group']
# y_validation = y_validation['status_group']
model_linear.fit(x_train, y_train) # not train_labels
model_linear.predict(x_val)
model_linear.coef_
###Output
_____no_output_____
###Markdown
Logistic, Imputing, and encoding
###Code
x_training.describe(include=['O']).T
# reducing training column cardinality
date_recorded_top = x_training['date_recorded'].value_counts()[:50].index
x_training.loc[~x_training['date_recorded'].isin(date_recorded_top), 'date_recorded'] = 'N/A'
funder_top = x_training['funder'].value_counts()[:50].index
x_training.loc[~x_training['funder'].isin(funder_top), 'funder'] = 'N/A'
ward_top = x_training['ward'].value_counts()[:50].index
x_training.loc[~x_training['ward'].isin(ward_top), 'ward'] = 'N/A'
installer_top = x_training['installer'].value_counts()[:50].index
x_training.loc[~x_training['installer'].isin(installer_top), 'installer'] = 'N/A'
scheme_name_top = x_training['scheme_name'].value_counts()[:50].index
x_training.loc[~x_training['scheme_name'].isin(scheme_name_top), 'scheme_name'] = 'N/A'
# reducing validation column cardinality
funder_top_v = x_validation['funder'].value_counts()[:50].index
x_validation.loc[~x_validation['funder'].isin(funder_top_v), 'funder'] = 'N/A'
funder_top_v = x_validation['funder'].value_counts()[:50].index
x_validation.loc[~x_validation['funder'].isin(funder_top_v), 'funder'] = 'N/A'
ward_top_v = x_validation['ward'].value_counts()[:50].index
x_validation.loc[~x_validation['ward'].isin(ward_top), 'ward'] = 'N/A'
installer_top_v = x_validation['installer'].value_counts()[:50].index
x_validation.loc[~x_validation['installer'].isin(installer_top_v), 'installer'] = 'N/A'
scheme_name_top_v = x_validation['scheme_name'].value_counts()[:50].index
x_validation.loc[~x_validation['scheme_name'].isin(scheme_name_top_v), 'scheme_name'] = 'N/A'
# dropping those with extremely high cardinality
to_drop = ['wpt_name', 'subvillage']
x_training = x_training.drop(to_drop, axis=1)
x_validation = x_validation.drop(to_drop, axis=1)
# tried to write a function..
# def changing_cardinality(column, number_of_card, placeholder):
# filtered = df[column].value_counts()[:number_of_card].index
# changed = [df.loc[~df[column].isin(filtered), column] == placeholder]
# return changed
# x_training['date_recorded'] = x_training['date_recorded'].apply(changing_cardinality('date_recorded', 50, 'N/A'))
x_training.describe(include=['O'])
# all_columns = x_training.describe(include=['O'])
# total_col_list = list(all_columns.columns)
total_col_list = ['date_recorded', 'funder', 'installer', 'region', 'ward', 'recorded_by', 'scheme_management','scheme_name', 'water_quality', 'quality_group', 'quantity',
'quantity_group', 'source', 'source_type', 'source_class', 'waterpoint_type', 'waterpoint_type_group']
# numerics
numerics = ['longitude', 'latitude', 'region_code', 'district_code', 'population', 'construction_year']
total_col_list.extend(numerics)
# total_col_list
from sklearn.linear_model import LogisticRegression
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
# model_log = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42)
model_log = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=300)
imputer = SimpleImputer()
encoder = ce.OneHotEncoder(use_cat_names=True)
scaler = StandardScaler()
# reminder!
# y_train = x_training['status_group']
# y_val = x_validation['status_group']
features = total_col_list
x_train = x_training[features]
x_val = x_validation[features]
x_train_encoded = encoder.fit_transform(x_train)
x_train_imputed = imputer.fit_transform(x_train_encoded)
x_train_scaled = scaler.fit_transform(x_train_imputed)
x_val_encoded = encoder.transform(x_val)
x_val_imputed = imputer.transform(x_val_encoded)
x_val_scaled = scaler.transform(x_val_imputed)
model_log.fit(x_train_scaled, y_train)
print('Validation accuracy score', model_log.score(x_val_scaled, y_val))
X_test = test_features[features]
X_test_encoded = encoder.transform(X_test)
X_test_imputed = imputer.transform(X_test_encoded)
X_test_scaled = scaler.transform(X_test_imputed)
y_pred = model_log.predict(X_test_scaled)
print(y_pred)
submission = sample_submission.copy()
submission['status_group'] = y_pred
mode_map = {0: 'functional', 1: 'non functional', 2: 'functional needs repair'}
submission['status_group'] = submission['status_group'].replace(mode_map)
submission.to_csv('medinadiegokaggle.csv', index=False)
from google.colab import files
files.download('medinadiegokaggle.csv')
###Output
_____no_output_____ |
FbProphet_2.ipynb | ###Markdown
Facebook Prophet FbProphet is a robust library for time series data analysis and forecasting, developed by Facebook's core data science team. It is based on a Generalized Additive Model (GAM). There are 3 major components: - the trend component {g(t)} - the seasonal component {s(t)} - the holiday component {h(t)} so that y(t)=g(t)+s(t)+h(t)+E, where E is the error caused by unusual changes not accommodated by the model.
###Code
import pandas as pd
from fbprophet import Prophet
data=pd.read_csv("data.csv")
data.head()
data=data[['Date','close']]
data.columns=['ds','y']
data.ds=pd.to_datetime(data.ds)
data.head()
data.shape
data.isna().sum()
import matplotlib.pyplot as plt
plt.figure(figsize=(12,8))
plt.plot(data.set_index(['ds']))
###Output
_____no_output_____
###Markdown
Smoothing the curve by resampling it at weekly frequency.
###Code
data.set_index(['ds'],inplace=True)
data.y=data.y.resample("W").mean()
data.dropna(inplace=True)
data.head(10)
plt.figure(figsize=(12,8))
plt.plot(data)
###Output
_____no_output_____
###Markdown
Column names must be 'ds' and 'y' to be compatible with Prophet training. Here, ds represents the datestamp and y represents the value to fit.
###Code
data['ds']=data.index
data.head()
#Prophet model
model=Prophet(n_changepoints=35,
yearly_seasonality=False,
weekly_seasonality=False,
daily_seasonality=False,
changepoint_prior_scale=0.4).add_seasonality(
name='yearly',
period=365.25,
fourier_order=10)
model.fit(data)
###Output
_____no_output_____
###Markdown
Inference
###Code
future=model.make_future_dataframe(periods=60,freq="W")
forecast=model.predict(future)
fig=model.plot(forecast) #The model seems to fit well with our data
###Output
_____no_output_____
###Markdown
The graphs below give good insight into our data.
###Code
fig2=model.plot_components(forecast)
###Output
_____no_output_____
###Markdown
In the cells below, let's divide our data into training and test sets and see how the model performs on the test data.
###Code
data.shape
data_train=data.iloc[:300].copy()
data_train.tail()
model2=Prophet(n_changepoints=35,
yearly_seasonality=False,
weekly_seasonality=False,
daily_seasonality=False,
changepoint_prior_scale=0.4).add_seasonality(
name='yearly',
period=365.25,
fourier_order=10)
model2.fit(data_train)
future=model2.make_future_dataframe(periods=79,freq="W") #test count=79
forecast=model2.predict(future) # predict with the model fitted on the training split
fig3=model2.plot(forecast) ##The model worked well for test data, graphs look similar to the original one.
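# Hedged addition: quantify the out-of-sample fit with a simple MAE over the held-out weeks,
# assuming the points from index 300 onward were excluded from training above.
from sklearn.metrics import mean_absolute_error
n_test = len(data) - 300
mae_test = mean_absolute_error(data.y.iloc[300:].values, forecast.yhat.iloc[300:300 + n_test].values)
print("MAE on held-out weeks:", mae_test)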
###Output
_____no_output_____ |
dl1/.ipynb_checkpoints/lesson6-rnn-checkpoint.ipynb | ###Markdown
Setup (prepare the data) We're going to download the collected works of Nietzsche to use as our data for this class.
###Code
PATH='data/nietzsche/'
get_data("https://s3.amazonaws.com/text-datasets/nietzsche.txt", f'{PATH}nietzsche.txt')
text = open(f'{PATH}nietzsche.txt').read()
print('corpus length:', len(text))
text[:400]
# Count the distinct characters in the corpus
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
###Output
total chars: 85
###Markdown
Sometimes it's useful to have a zero value in the dataset, e.g. for padding
###Code
# Insert a zero value to use for padding in the dataset
chars.insert(0, "\0")
''.join(chars[1:-6])
###Output
_____no_output_____
###Markdown
Map from chars to indices and back again
###Code
# Build char->index and index->char lookup tables
char_indices = {c: i for i, c in enumerate(chars)}
indices_char = {i: c for i, c in enumerate(chars)}
###Output
_____no_output_____
###Markdown
*idx* will be the data we use from now on - it simply converts all the characters to their index (based on the mapping above)
###Code
# Convert every character in the text to its index
idx = [char_indices[c] for c in text]
idx[:10]
# Quick check
''.join(indices_char[i] for i in idx[:70])
###Output
_____no_output_____
###Markdown
Three char model Create inputs Create a list of every 4th character, starting at the 0th, 1st, 2nd, then 3rd characters
###Code
# Build lists holding the 0th, 1st, 2nd and 3rd characters of every group of four
cs=3
c1_dat = [idx[i] for i in range(0, len(idx)-cs, cs)]
c2_dat = [idx[i+1] for i in range(0, len(idx)-cs, cs)]
c3_dat = [idx[i+2] for i in range(0, len(idx)-cs, cs)]
c4_dat = [idx[i+3] for i in range(0, len(idx)-cs, cs)]
###Output
_____no_output_____
###Markdown
Our inputs
###Code
# The first three characters are the inputs
x1 = np.stack(c1_dat)
x2 = np.stack(c2_dat)
x3 = np.stack(c3_dat)
###Output
_____no_output_____
###Markdown
Our output
###Code
# The fourth character is the output
y = np.stack(c4_dat)
###Output
_____no_output_____
###Markdown
The first 4 inputs and outputs
###Code
x1[:4], x2[:4], x3[:4]
y[:4]
# Make sure the shapes match up
x1.shape, y.shape
###Output
_____no_output_____
###Markdown
Create and train model Pick a size for our hidden state
###Code
# Pick the size of the hidden state
n_hidden = 256
###Output
_____no_output_____
###Markdown
The number of latent factors to create (i.e. the size of the embedding matrix)
###Code
# Pick the number of latent factors
# (i.e. the size of the embedding matrix)
n_fac = 42
# Build a model that predicts the fourth character from the previous three
class Char3Model(nn.Module):
def __init__(self, vocab_size, n_fac):
super().__init__()
# Embedding layer
self.e = nn.Embedding(vocab_size, n_fac)
# The 'green arrow' from our diagram - the layer operation from input to hidden
# Linear layer: input -> hidden
self.l_in = nn.Linear(n_fac, n_hidden)
# The 'orange arrow' from our diagram - the layer operation from hidden to hidden
# Linear layer: hidden -> hidden
self.l_hidden = nn.Linear(n_hidden, n_hidden)
# The 'blue arrow' from our diagram - the layer operation from hidden to output
# Linear layer: hidden -> output
self.l_out = nn.Linear(n_hidden, vocab_size)
def forward(self, c1, c2, c3):
# See the RNN lecture materials for details
# Look up each input's embedding -> input linear layer -> ReLU
in1 = F.relu(self.l_in(self.e(c1)))
in2 = F.relu(self.l_in(self.e(c2)))
in3 = F.relu(self.l_in(self.e(c3)))
# Initialise the hidden state, then add each input activation in turn: hidden linear layer followed by tanh
h = V(torch.zeros(in1.size()).cuda())
h = F.tanh(self.l_hidden(h+in1))
h = F.tanh(self.l_hidden(h+in2))
h = F.tanh(self.l_hidden(h+in3))
# Return the log-softmax over the vocabulary
return F.log_softmax(self.l_out(h), dim=-1)
# Build the dataset with fastai's ColumnarModelData
md = ColumnarModelData.from_arrays('.', [-1], np.stack([x1,x2,x3], axis=1), y, bs=512)
m = Char3Model(vocab_size, n_fac).cuda()
it = iter(md.trn_dl)
*xs,yt = next(it)
t = m(*V(xs))
opt = optim.Adam(m.parameters(), 1e-2)
fit(m, md, 1, opt, F.nll_loss)
set_lrs(opt, 0.001)
fit(m, md, 1, opt, F.nll_loss)
###Output
_____no_output_____
###Markdown
Test model
###Code
def get_next(inp):
idxs = T(np.array([char_indices[c] for c in inp]))
p = m(*VV(idxs))
i = np.argmax(to_np(p))
return chars[i]
get_next('y. ')
get_next('ppl')
get_next(' th')
get_next('and')
###Output
_____no_output_____
###Markdown
Our first RNN! Create inputs This is the size of our unrolled RNN.
###Code
cs=8
###Output
_____no_output_____
###Markdown
For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
###Code
c_in_dat = [[idx[i+j] for i in range(cs)] for j in range(len(idx)-cs)]
###Output
_____no_output_____
###Markdown
Then create a list of the next character in each of these series. This will be the labels for our model.
###Code
c_out_dat = [idx[j+cs] for j in range(len(idx)-cs)]
xs = np.stack(c_in_dat, axis=0)
xs.shape
y = np.stack(c_out_dat)
###Output
_____no_output_____
###Markdown
So each column below is one series of 8 characters from the text.
###Code
xs[:cs,:cs]
###Output
_____no_output_____
###Markdown
...and this is the next character after each sequence.
###Code
y[:cs]
###Output
_____no_output_____
###Markdown
Create and train model
###Code
val_idx = get_cv_idxs(len(idx)-cs-1)
md = ColumnarModelData.from_arrays('.', val_idx, xs, y, bs=512)
class CharLoopModel(nn.Module):
# This is an RNN!
def __init__(self, vocab_size, n_fac):
super().__init__()
self.e = nn.Embedding(vocab_size, n_fac)
self.l_in = nn.Linear(n_fac, n_hidden)
self.l_hidden = nn.Linear(n_hidden, n_hidden)
self.l_out = nn.Linear(n_hidden, vocab_size)
def forward(self, *cs):
bs = cs[0].size(0)
h = V(torch.zeros(bs, n_hidden).cuda())
for c in cs:
inp = F.relu(self.l_in(self.e(c)))
h = F.tanh(self.l_hidden(h+inp))
return F.log_softmax(self.l_out(h), dim=-1)
m = CharLoopModel(vocab_size, n_fac).cuda()
opt = optim.Adam(m.parameters(), 1e-2)
fit(m, md, 1, opt, F.nll_loss)
set_lrs(opt, 0.001)
fit(m, md, 1, opt, F.nll_loss)
class CharLoopConcatModel(nn.Module):
def __init__(self, vocab_size, n_fac):
super().__init__()
self.e = nn.Embedding(vocab_size, n_fac)
self.l_in = nn.Linear(n_fac+n_hidden, n_hidden)
self.l_hidden = nn.Linear(n_hidden, n_hidden)
self.l_out = nn.Linear(n_hidden, vocab_size)
def forward(self, *cs):
bs = cs[0].size(0)
h = V(torch.zeros(bs, n_hidden).cuda())
for c in cs:
inp = torch.cat((h, self.e(c)), 1)
inp = F.relu(self.l_in(inp))
h = F.tanh(self.l_hidden(inp))
return F.log_softmax(self.l_out(h), dim=-1)
m = CharLoopConcatModel(vocab_size, n_fac).cuda()
opt = optim.Adam(m.parameters(), 1e-3)
it = iter(md.trn_dl)
*xs,yt = next(it)
t = m(*V(xs))
fit(m, md, 1, opt, F.nll_loss)
set_lrs(opt, 1e-4)
fit(m, md, 1, opt, F.nll_loss)
###Output
_____no_output_____
###Markdown
Test model
###Code
def get_next(inp):
idxs = T(np.array([char_indices[c] for c in inp]))
p = m(*VV(idxs))
i = np.argmax(to_np(p))
return chars[i]
get_next('for thos')
get_next('part of ')
get_next('queens a')
###Output
_____no_output_____
###Markdown
RNN with pytorch
###Code
class CharRnn(nn.Module):
def __init__(self, vocab_size, n_fac):
super().__init__()
self.e = nn.Embedding(vocab_size, n_fac)
self.rnn = nn.RNN(n_fac, n_hidden)
self.l_out = nn.Linear(n_hidden, vocab_size)
def forward(self, *cs):
bs = cs[0].size(0)
h = V(torch.zeros(1, bs, n_hidden))
inp = self.e(torch.stack(cs))
outp,h = self.rnn(inp, h)
return F.log_softmax(self.l_out(outp[-1]), dim=-1)
m = CharRnn(vocab_size, n_fac).cuda()
opt = optim.Adam(m.parameters(), 1e-3)
it = iter(md.trn_dl)
*xs,yt = next(it)
t = m.e(V(torch.stack(xs)))
t.size()
ht = V(torch.zeros(1, 512,n_hidden))
outp, hn = m.rnn(t, ht)
outp.size(), hn.size()
t = m(*V(xs)); t.size()
fit(m, md, 4, opt, F.nll_loss)
set_lrs(opt, 1e-4)
fit(m, md, 2, opt, F.nll_loss)
###Output
_____no_output_____
###Markdown
Test model
###Code
def get_next(inp):
idxs = T(np.array([char_indices[c] for c in inp]))
p = m(*VV(idxs))
i = np.argmax(to_np(p))
return chars[i]
get_next('for thos')
def get_next_n(inp, n):
res = inp
for i in range(n):
c = get_next(inp)
res += c
inp = inp[1:]+c
return res
get_next_n('for thos', 40)
###Output
_____no_output_____
###Markdown
Multi-output model Setup Let's take non-overlapping sets of characters this time
###Code
c_in_dat = [[idx[i+j] for i in range(cs)] for j in range(0, len(idx)-cs-1, cs)]
###Output
_____no_output_____
###Markdown
Then create the exact same thing, offset by 1, as our labels
###Code
c_out_dat = [[idx[i+j] for i in range(cs)] for j in range(1, len(idx)-cs, cs)]
xs = np.stack(c_in_dat)
xs.shape
ys = np.stack(c_out_dat)
ys.shape
xs[:cs,:cs]
ys[:cs,:cs]
###Output
_____no_output_____
###Markdown
Create and train model
###Code
val_idx = get_cv_idxs(len(xs)-cs-1)
md = ColumnarModelData.from_arrays('.', val_idx, xs, ys, bs=512)
class CharSeqRnn(nn.Module):
def __init__(self, vocab_size, n_fac):
super().__init__()
self.e = nn.Embedding(vocab_size, n_fac)
self.rnn = nn.RNN(n_fac, n_hidden)
self.l_out = nn.Linear(n_hidden, vocab_size)
def forward(self, *cs):
bs = cs[0].size(0)
h = V(torch.zeros(1, bs, n_hidden))
inp = self.e(torch.stack(cs))
outp,h = self.rnn(inp, h)
return F.log_softmax(self.l_out(outp), dim=-1)
m = CharSeqRnn(vocab_size, n_fac).cuda()
opt = optim.Adam(m.parameters(), 1e-3)
it = iter(md.trn_dl)
*xst,yt = next(it)
def nll_loss_seq(inp, targ):
sl,bs,nh = inp.size()
targ = targ.transpose(0,1).contiguous().view(-1)
return F.nll_loss(inp.view(-1,nh), targ)
fit(m, md, 4, opt, nll_loss_seq)
set_lrs(opt, 1e-4)
fit(m, md, 1, opt, nll_loss_seq)
###Output
_____no_output_____
###Markdown
Identity init!
###Code
m = CharSeqRnn(vocab_size, n_fac).cuda()
opt = optim.Adam(m.parameters(), 1e-2)
m.rnn.weight_hh_l0.data.copy_(torch.eye(n_hidden))
fit(m, md, 4, opt, nll_loss_seq)
set_lrs(opt, 1e-3)
fit(m, md, 4, opt, nll_loss_seq)
###Output
_____no_output_____
###Markdown
Stateful model Setup
###Code
from torchtext import vocab, data
from fastai.nlp import *
from fastai.lm_rnn import *
PATH='data/nietzsche/'
TRN_PATH = 'trn/'
VAL_PATH = 'val/'
TRN = f'{PATH}{TRN_PATH}'
VAL = f'{PATH}{VAL_PATH}'
# Note: The student needs to practice her shell skills and prepare her own dataset before proceeding:
# - trn/trn.txt (first 80% of nietzsche.txt)
# - val/val.txt (last 20% of nietzsche.txt)
%ls {PATH}
%ls {PATH}trn
TEXT = data.Field(lower=True, tokenize=list)
bs=64; bptt=8; n_fac=42; n_hidden=256
FILES = dict(train=TRN_PATH, validation=VAL_PATH, test=VAL_PATH)
md = LanguageModelData.from_text_files(PATH, TEXT, **FILES, bs=bs, bptt=bptt, min_freq=3)
len(md.trn_dl), md.nt, len(md.trn_ds), len(md.trn_ds[0].text)
###Output
_____no_output_____
###Markdown
RNN
###Code
class CharSeqStatefulRnn(nn.Module):
def __init__(self, vocab_size, n_fac, bs):
self.vocab_size = vocab_size
super().__init__()
self.e = nn.Embedding(vocab_size, n_fac)
self.rnn = nn.RNN(n_fac, n_hidden)
self.l_out = nn.Linear(n_hidden, vocab_size)
self.init_hidden(bs)
def forward(self, cs):
bs = cs[0].size(0)
if self.h.size(1) != bs: self.init_hidden(bs)
outp,h = self.rnn(self.e(cs), self.h)
self.h = repackage_var(h)
return F.log_softmax(self.l_out(outp), dim=-1).view(-1, self.vocab_size)
def init_hidden(self, bs): self.h = V(torch.zeros(1, bs, n_hidden))
m = CharSeqStatefulRnn(md.nt, n_fac, 512).cuda()
opt = optim.Adam(m.parameters(), 1e-3)
fit(m, md, 4, opt, F.nll_loss)
set_lrs(opt, 1e-4)
fit(m, md, 4, opt, F.nll_loss)
###Output
_____no_output_____
###Markdown
RNN loop
###Code
# From the pytorch source
def RNNCell(input, hidden, w_ih, w_hh, b_ih, b_hh):
return F.tanh(F.linear(input, w_ih, b_ih) + F.linear(hidden, w_hh, b_hh))
class CharSeqStatefulRnn2(nn.Module):
def __init__(self, vocab_size, n_fac, bs):
super().__init__()
self.vocab_size = vocab_size
self.e = nn.Embedding(vocab_size, n_fac)
self.rnn = nn.RNNCell(n_fac, n_hidden)
self.l_out = nn.Linear(n_hidden, vocab_size)
self.init_hidden(bs)
def forward(self, cs):
bs = cs[0].size(0)
if self.h.size(1) != bs: self.init_hidden(bs)
outp = []
o = self.h
for c in cs:
o = self.rnn(self.e(c), o)
outp.append(o)
outp = self.l_out(torch.stack(outp))
self.h = repackage_var(o)
return F.log_softmax(outp, dim=-1).view(-1, self.vocab_size)
def init_hidden(self, bs): self.h = V(torch.zeros(1, bs, n_hidden))
m = CharSeqStatefulRnn2(md.nt, n_fac, 512).cuda()
opt = optim.Adam(m.parameters(), 1e-3)
fit(m, md, 4, opt, F.nll_loss)
###Output
_____no_output_____
###Markdown
GRU
###Code
class CharSeqStatefulGRU(nn.Module):
def __init__(self, vocab_size, n_fac, bs):
super().__init__()
self.vocab_size = vocab_size
self.e = nn.Embedding(vocab_size, n_fac)
self.rnn = nn.GRU(n_fac, n_hidden)
self.l_out = nn.Linear(n_hidden, vocab_size)
self.init_hidden(bs)
def forward(self, cs):
bs = cs[0].size(0)
if self.h.size(1) != bs: self.init_hidden(bs)
outp,h = self.rnn(self.e(cs), self.h)
self.h = repackage_var(h)
return F.log_softmax(self.l_out(outp), dim=-1).view(-1, self.vocab_size)
def init_hidden(self, bs): self.h = V(torch.zeros(1, bs, n_hidden))
# From the pytorch source code - for reference
def GRUCell(input, hidden, w_ih, w_hh, b_ih, b_hh):
gi = F.linear(input, w_ih, b_ih)
gh = F.linear(hidden, w_hh, b_hh)
i_r, i_i, i_n = gi.chunk(3, 1)
h_r, h_i, h_n = gh.chunk(3, 1)
resetgate = F.sigmoid(i_r + h_r)
inputgate = F.sigmoid(i_i + h_i)
newgate = F.tanh(i_n + resetgate * h_n)
return newgate + inputgate * (hidden - newgate)
m = CharSeqStatefulGRU(md.nt, n_fac, 512).cuda()
opt = optim.Adam(m.parameters(), 1e-3)
fit(m, md, 6, opt, F.nll_loss)
set_lrs(opt, 1e-4)
fit(m, md, 3, opt, F.nll_loss)
###Output
_____no_output_____
###Markdown
Putting it all together: LSTM
###Code
from fastai import sgdr
n_hidden=512
class CharSeqStatefulLSTM(nn.Module):
def __init__(self, vocab_size, n_fac, bs, nl):
super().__init__()
self.vocab_size,self.nl = vocab_size,nl
self.e = nn.Embedding(vocab_size, n_fac)
self.rnn = nn.LSTM(n_fac, n_hidden, nl, dropout=0.5)
self.l_out = nn.Linear(n_hidden, vocab_size)
self.init_hidden(bs)
def forward(self, cs):
bs = cs[0].size(0)
if self.h[0].size(1) != bs: self.init_hidden(bs)
outp,h = self.rnn(self.e(cs), self.h)
self.h = repackage_var(h)
return F.log_softmax(self.l_out(outp), dim=-1).view(-1, self.vocab_size)
def init_hidden(self, bs):
self.h = (V(torch.zeros(self.nl, bs, n_hidden)),
V(torch.zeros(self.nl, bs, n_hidden)))
m = CharSeqStatefulLSTM(md.nt, n_fac, 512, 2).cuda()
lo = LayerOptimizer(optim.Adam, m, 1e-2, 1e-5)
os.makedirs(f'{PATH}models', exist_ok=True)
fit(m, md, 2, lo.opt, F.nll_loss)
on_end = lambda sched, cycle: save_model(m, f'{PATH}models/cyc_{cycle}')
cb = [CosAnneal(lo, len(md.trn_dl), cycle_mult=2, on_cycle_end=on_end)]
fit(m, md, 2**4-1, lo.opt, F.nll_loss, callbacks=cb)
on_end = lambda sched, cycle: save_model(m, f'{PATH}models/cyc_{cycle}')
cb = [CosAnneal(lo, len(md.trn_dl), cycle_mult=2, on_cycle_end=on_end)]
fit(m, md, 2**6-1, lo.opt, F.nll_loss, callbacks=cb)
###Output
_____no_output_____
###Markdown
Test
###Code
def get_next(inp):
idxs = TEXT.numericalize(inp)
p = m(VV(idxs.transpose(0,1)))
r = torch.multinomial(p[-1].exp(), 1)
return TEXT.vocab.itos[to_np(r)[0]]
get_next('for thos')
def get_next_n(inp, n):
res = inp
for i in range(n):
c = get_next(inp)
res += c
inp = inp[1:]+c
return res
print(get_next_n('for thos', 400))
###Output
for those the skemps), or
imaginates, though they deceives. it should so each ourselvess and new
present, step absolutely for the
science." the contradity and
measuring,
the whole!
293. perhaps, that every life a values of blood
of
intercourse when it senses there is unscrupulus, his very rights, and still impulse, love?
just after that thereby how made with the way anything, and set for harmless philos
|
section_5/train_output.ipynb | ###Markdown
Training the output layer. We add a mechanism for updating the output-layer parameters to the neural network and confirm that the parameters are actually updated. ● Review of the neural network. Let's review the neural network built previously. The network implemented in the code below has two neurons in the hidden (middle) layer and one neuron in the output layer. It classifies Iris samples into Setosa and Versicolor, but because the parameters have not been tuned, it cannot yet classify the two varieties properly.
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
iris = datasets.load_iris()
iris_data = iris.data
sl_data = iris_data[:100, 0] # Setosa and Versicolor, Sepal length
sw_data = iris_data[:100, 1] # Setosa and Versicolor, Sepal width
# Shift the means to 0
sl_ave = np.average(sl_data) # mean
sl_data -= sl_ave # subtract the mean
sw_ave = np.average(sw_data)
sw_data -= sw_ave
# Store the inputs in a list
input_data = []
for i in range(100): # i runs from 0 to 99
input_data.append([sl_data[i], sw_data[i]])
# Sigmoid function
def sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
# Neuron
class Neuron:
def __init__(self): # initial setup
self.input_sum = 0.0
self.output = 0.0
def set_input(self, inp):
self.input_sum += inp
def get_output(self):
self.output = sigmoid(self.input_sum)
return self.output
def reset(self):
self.input_sum = 0
self.output = 0
# Neural network
class NeuralNetwork:
def __init__(self): # initial setup
# Weights
self.w_im = [[4.0, 4.0], [4.0, 4.0]] # inputs: 2, neurons: 2
self.w_mo = [[1.0, -1.0]] # inputs: 2, neurons: 1
# Biases
self.b_m = [2.0, -2.0] # neurons: 2
self.b_o = [-0.5] # neurons: 1
# Declare each layer
self.input_layer = [0.0, 0.0]
self.middle_layer = [Neuron(), Neuron()]
self.output_layer = [Neuron()]
def commit(self, input_data): # run the network
# Reset each layer
self.input_layer[0] = input_data[0] # the input layer only receives the values
self.input_layer[1] = input_data[1]
self.middle_layer[0].reset()
self.middle_layer[1].reset()
self.output_layer[0].reset()
# Input layer -> hidden layer
self.middle_layer[0].set_input(self.input_layer[0] * self.w_im[0][0])
self.middle_layer[0].set_input(self.input_layer[1] * self.w_im[0][1])
self.middle_layer[0].set_input(self.b_m[0])
self.middle_layer[1].set_input(self.input_layer[0] * self.w_im[1][0])
self.middle_layer[1].set_input(self.input_layer[1] * self.w_im[1][1])
self.middle_layer[1].set_input(self.b_m[1])
# Hidden layer -> output layer
self.output_layer[0].set_input(self.middle_layer[0].get_output() * self.w_mo[0][0])
self.output_layer[0].set_input(self.middle_layer[1].get_output() * self.w_mo[0][1])
self.output_layer[0].set_input(self.b_o[0])
return self.output_layer[0].get_output()
# Instance of the neural network
neural_network = NeuralNetwork()
# Run the network on every sample
st_predicted = [[], []] # Setosa
vc_predicted = [[], []] # Versicolor
for data in input_data:
if neural_network.commit(data) < 0.5:
st_predicted[0].append(data[0]+sl_ave)
st_predicted[1].append(data[1]+sw_ave)
else:
vc_predicted[0].append(data[0]+sl_ave)
vc_predicted[1].append(data[1]+sw_ave)
# Plot the classification results
plt.scatter(st_predicted[0], st_predicted[1], label="Setosa")
plt.scatter(vc_predicted[0], vc_predicted[1], label="Versicolor")
plt.legend()
plt.xlabel("Sepal length (cm)")
plt.ylabel("Sepal width (cm)")
plt.title("Predicted")
plt.show()
###Output
_____no_output_____
###Markdown
● Updating the output-layer parameters. We update the weights and the bias of the output layer. First, we compute the base correction term $\delta_o$: **$\delta_o$ = (output - correct answer) × derivative of the activation function**. When the activation function is the sigmoid, this becomes: **$\delta_o$ = (output - correct answer) × output × (1 - output)**. Using this $\delta_o$, the correction amounts for the output-layer weights and bias are: **weight correction = - learning rate × $\delta_o$ × input**, **bias correction = - learning rate × $\delta_o$**. The code below uses these formulas to update each output-layer parameter exactly once.
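As a quick numerical check of these formulas (an illustrative example with made-up values): if the output is 0.8, the correct answer is 1, the hidden-layer output feeding the weight is 0.5 and the learning rate is 0.3, then $\delta_o = (0.8 - 1) \times 0.8 \times (1 - 0.8) = -0.032$, the weight correction is $-0.3 \times (-0.032) \times 0.5 = 0.0048$, and the bias correction is $-0.3 \times (-0.032) = 0.0096$.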
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
import random
iris = datasets.load_iris()
iris_data = iris.data
sl_data = iris_data[:100, 0] # SetosaとVersicolor、Sepal length
sw_data = iris_data[:100, 1] # SetosaとVersicolor、Sepal width
# 平均値を0に
sl_ave = np.average(sl_data) # 平均値
sl_data -= sl_ave # 平均値を引く
sw_ave = np.average(sw_data)
sw_data -= sw_ave
# 入力をリストに格納
train_data = []
for i in range(100): # iには0から99までが入る
correct = iris.target[i]
train_data.append([sl_data[i], sw_data[i], correct])
# シグモイド関数
def sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
# ニューロン
class Neuron:
def __init__(self): # 初期設定
self.input_sum = 0.0
self.output = 0.0
def set_input(self, inp):
self.input_sum += inp
def get_output(self):
self.output = sigmoid(self.input_sum)
return self.output
def reset(self):
self.input_sum = 0
self.output = 0
# ニューラルネットワーク
class NeuralNetwork:
def __init__(self): # 初期設定
# 重み
self.w_im = [[4.0, 4.0], [4.0, 4.0]] # 入力:2 ニューロン数:2
self.w_mo = [[1.0, -1.0]] # 入力:2 ニューロン数:1
# バイアス
self.b_m = [2.0, -2.0] # ニューロン数:2
self.b_o = [-0.5] # ニューロン数:1
# 各層の宣言
self.input_layer = [0.0, 0.0]
self.middle_layer = [Neuron(), Neuron()]
self.output_layer = [Neuron()]
def commit(self, input_data): # 実行
# 各層のリセット
self.input_layer[0] = input_data[0] # 入力層は値を受け取るのみ
self.input_layer[1] = input_data[1]
self.middle_layer[0].reset()
self.middle_layer[1].reset()
self.output_layer[0].reset()
# 入力層→中間層
self.middle_layer[0].set_input(self.input_layer[0] * self.w_im[0][0])
self.middle_layer[0].set_input(self.input_layer[1] * self.w_im[0][1])
self.middle_layer[0].set_input(self.b_m[0])
self.middle_layer[1].set_input(self.input_layer[0] * self.w_im[1][0])
self.middle_layer[1].set_input(self.input_layer[1] * self.w_im[1][1])
self.middle_layer[1].set_input(self.b_m[1])
# 中間層→出力層
self.output_layer[0].set_input(self.middle_layer[0].get_output() * self.w_mo[0][0])
self.output_layer[0].set_input(self.middle_layer[1].get_output() * self.w_mo[0][1])
self.output_layer[0].set_input(self.b_o[0])
return self.output_layer[0].get_output()
def train(self, correct):
# Learning rate
k = 0.3
# Outputs
output_o = self.output_layer[0].output
output_m0 = self.middle_layer[0].output
output_m1 = self.middle_layer[1].output
# delta
delta_o = (output_o - correct) * output_o * (1.0 - output_o)
# Update the parameters
self.w_mo[0][0] -= k * delta_o * output_m0
self.w_mo[0][1] -= k * delta_o * output_m1
self.b_o[0] -= k * delta_o
# Instance of the neural network
neural_network = NeuralNetwork()
# How the parameters change with training
print("-------- Before train --------")
print(neural_network.w_im)
print(neural_network.w_mo)
print(neural_network.b_m)
print(neural_network.b_o)
neural_network.commit(train_data[0][:2]) # forward pass
neural_network.train(train_data[0][2]) # backpropagation
print("-------- After train --------")
print(neural_network.w_im)
print(neural_network.w_mo)
print(neural_network.b_m)
print(neural_network.b_o)
# Cell for practicing the code
###Output
_____no_output_____ |
APIConsumer.ipynb | ###Markdown
Prof. M.Sc. Howard Roatti. In order to work with APIs in Python, we need tools that perform all the requests. The most common Python library for this kind of task is requests. The requests API is not built into Python, so you will need to install it before starting, if you have not already done so. You can install it using one of the following commands: `pip install requests` or `conda install requests`. REST APIs usually return a JSON document to the requester. To take full advantage of it, Python has a library called json, which provides some tools for working with this type of document (json ships with Python's standard library, so it normally does not need to be installed separately). Once that is done, just import the two libraries into your project to be able to use them.
###Code
import requests
import json
'''
For this particular project, we will use the datetime library to produce
a formatted printout of one of the values received via JSON.
'''
from datetime import datetime
###Output
_____no_output_____
###Markdown
NASA, the US space agency, has a portal that provides an open-source API with some interesting data about space and spacecraft. In this project we will consume the data about the astronauts who are in space right now (http://api.open-notify.org/astros.json) and the prediction of when the space station will be passing over a given region of the Earth (http://api.open-notify.org/iss-pass.json). In the next step we will use the get method to retrieve the information about the astronauts currently in space. Next, we will check the status of that request. The possible responses to this request are:
* 200: Everything went well, and the result was returned (if any).
* 301: The server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed.
* 400: The server thinks you made a bad request. This can happen when you do not send the right data, among other things.
* 401: The server thinks you are not authenticated. Many APIs require login credentials, so this happens when you do not send the correct credentials to access an API.
* 403: The resource you are trying to access is forbidden: you do not have the right permissions to view it.
* 404: The resource you tried to access was not found on the server.
* 503: The server is not ready to handle the request.
###Code
response = requests.get("http://api.open-notify.org/astros.json")
print(response.status_code)
###Output
_____no_output_____
###Markdown
If the returned status code is 200, we will have a JSON document with the astronauts' information, and we will print the result.
###Code
print(response.json())
###Output
_____no_output_____
###Markdown
Apparently, the retrieved data comes in dictionary form (a Python dict object). We can create a function that turns this response into a more readable form through the dumps method, passing some indentation and sorting parameters.
###Code
def jsonprint(obj):
text = json.dumps(obj, sort_keys=True, indent=4)
print(text)
#In this step we call the function we just created to display its formatted result.
jsonprint(response.json())
###Output
_____no_output_____
###Markdown
Some APIs allow us to perform parameterized queries; that is the case for the API that predicts when the space station will pass over a given point on Earth. For this example, we will create a variable named parameters and pass the latitude and longitude of New York City.
###Code
parameters = {
"lat": 40.71,
"lon": -74
}
###Output
_____no_output_____
###Markdown
After creating the parameter to customize the query, we will use the get method again, this time adding the params argument pointing to our parameters variable. We will display the results using the JSON-formatting function created earlier.
###Code
response = requests.get("http://api.open-notify.org/iss-pass.json", params=parameters)
jsonprint(response.json())
###Output
_____no_output_____
###Markdown
From the results obtained, we will display only the duration and the time at which the space station will be over the given coordinates; these values are available under the response key.
###Code
pass_times = response.json()['response']
jsonprint(pass_times)
#In this step, we build a list with only the pass times and display them as retrieved (Unix timestamps)
risetimes = []
for d in pass_times:
time = d['risetime']
risetimes.append(time)
print(risetimes)
#In this step, we format the space station pass times into something more intelligible to humans
times = []
for rt in risetimes:
time = datetime.fromtimestamp(rt)
times.append(time)
print(time)
###Output
_____no_output_____ |
001-Jupyter/001-Tutorials/001-Basic-Tutorials/002-Interactive-Widgets/Example - Beat Frequencies.ipynb | ###Markdown
Exploring Beat Frequencies using the `Audio` Object This example uses the `Audio` object and Matplotlib to explore the phenomenon of beat frequencies.
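When two tones with nearby frequencies $f_1$ and $f_2$ are superposed, the ear perceives a beat at the difference frequency $|f_1 - f_2|$; for the default values used below (220 Hz and 224 Hz) this gives a 4 Hz beat, which is the value the function prints.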
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interactive
from IPython.display import Audio, display
import numpy as np
def beat_freq(f1=220.0, f2=224.0):
max_time = 3
rate = 8000
times = np.linspace(0,max_time,rate*max_time)
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
print(f1, f2, abs(f1-f2))
display(Audio(data=signal, rate=rate))
return signal
v = interactive(beat_freq, f1=(200.0,300.0), f2=(200.0,300.0))
display(v)
v.kwargs
f1, f2 = v.children
f1.value = 255
f2.value = 260
plt.plot(v.result[0:6000])
###Output
_____no_output_____ |
modules/07-real-time-analytics-with-spark-streaming/01-nltk.ipynb | ###Markdown
Natural Language Processing with Python - NLTK Installing the NLTK package: http://www.nltk.org/install.html
###Code
import nltk
###Output
_____no_output_____
###Markdown
**Installing NLTK data files (click at "Download" when prompted)**
###Code
nltk.download()
###Output
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
###Markdown
Tokenization is the process of dividing a string into lists of chunks, or "tokens", where each token is a meaningful unit of the whole. For example: a word is a token in a sentence, and a sentence is a token in a paragraph.
###Code
from nltk.tokenize import sent_tokenize
import nltk.data
###Output
_____no_output_____
###Markdown
**Dividing a paragraph into sentences**
###Code
paragraph_en = 'Hi. Good to know that you are learning PLN. Thank you for being with us.'
paragraph_es = 'Hola. Es bueno saber que estás aprendiendo PLN. Gracias por estar con nosotros.'
sent_tokenize(paragraph_en)
sent_tokenize(paragraph_es)
tokenizer_en = nltk.data.load('tokenizers/punkt/PY3/english.pickle')
tokenizer_es = nltk.data.load('tokenizers/punkt/PY3/spanish.pickle')
tokenizer_en.tokenize(paragraph_en)
tokenizer_es.tokenize(paragraph_es)
tokenizer_en
tokenizer_es
###Output
_____no_output_____
###Markdown
**Dividing a sentence into words**
###Code
from nltk.tokenize import regexp_tokenize
from nltk.tokenize import RegexpTokenizer
from nltk.tokenize import TreebankWordTokenizer
from nltk.tokenize import WordPunctTokenizer
from nltk.tokenize import word_tokenize
word_tokenize('Data Science Rocks!')
tw_tokenizer = TreebankWordTokenizer()
tw_tokenizer.tokenize('Hello my friend.')
word_tokenize("I can't do that.")
wp_tokenizer = WordPunctTokenizer()
wp_tokenizer.tokenize("I can't do that.")
re_tokenizer = RegexpTokenizer("[\w']+")
re_tokenizer.tokenize("I can't do that.")
regexp_tokenize("I can't do that.", "[\w']+")
re_tokenizer = RegexpTokenizer('\s+', gaps = True)
re_tokenizer.tokenize("I can't do that.")
###Output
_____no_output_____
###Markdown
Training a Tokenizer
###Code
from nltk.tokenize import PunktSentenceTokenizer
from nltk.tokenize import sent_tokenize
from nltk.corpus import webtext
###Output
_____no_output_____
###Markdown
**NLTK file at /home/caio/nltk_data/corpora/webtext**
###Code
file = webtext.raw('overheard.txt')
ps_tokenizer = PunktSentenceTokenizer(file)
ps_tokenizer
sentences_ps = ps_tokenizer.tokenize(file)
sentences_ps[0]
sentences_st = sent_tokenize(file)
sentences_st[0]
sentences_st[678]
sentences_ps[678]
###Output
_____no_output_____
###Markdown
**Using the file path**
###Code
with open('/home/caio/nltk_data/corpora/webtext/overheard.txt', encoding = 'ISO-8859-2') as file:
file_text = file.read()
ps_tokenizer = PunktSentenceTokenizer(file_text)
sentences_ps = ps_tokenizer.tokenize(file_text)
sentences_ps[0]
sentences_ps[678]
###Output
_____no_output_____
###Markdown
Stopwords Stopwords are common words that normally don't contribute to a sentence meaning, at least with regard to the information purpose and natural language processing. They are words like "the" and "a". Many search engines filter these words to save space in their search indexes.
###Code
from nltk.corpus import stopwords
stops_en = set(stopwords.words('english'))
sentence_words = ["Can't", 'is', 'a', 'contraction']
[valid_word for valid_word in sentence_words if valid_word not in stops_en]
stops_pt = set(stopwords.words('portuguese'))
sentence_words = ['Data', 'Science', 'é', 'um', 'assunto', 'interessante']
[valid_word for valid_word in sentence_words if valid_word not in stops_pt]
###Output
_____no_output_____
###Markdown
**Stopwords Languages**
###Code
print(stopwords.fileids())
###Output
['arabic', 'azerbaijani', 'danish', 'dutch', 'english', 'finnish', 'french', 'german', 'greek', 'hungarian', 'indonesian', 'italian', 'kazakh', 'nepali', 'norwegian', 'portuguese', 'romanian', 'russian', 'slovene', 'spanish', 'swedish', 'tajik', 'turkish']
###Markdown
**Stopwords Portuguese Words**
###Code
print(stopwords.words('portuguese'))
###Output
['de', 'a', 'o', 'que', 'e', 'é', 'do', 'da', 'em', 'um', 'para', 'com', 'não', 'uma', 'os', 'no', 'se', 'na', 'por', 'mais', 'as', 'dos', 'como', 'mas', 'ao', 'ele', 'das', 'à', 'seu', 'sua', 'ou', 'quando', 'muito', 'nos', 'já', 'eu', 'também', 'só', 'pelo', 'pela', 'até', 'isso', 'ela', 'entre', 'depois', 'sem', 'mesmo', 'aos', 'seus', 'quem', 'nas', 'me', 'esse', 'eles', 'você', 'essa', 'num', 'nem', 'suas', 'meu', 'às', 'minha', 'numa', 'pelos', 'elas', 'qual', 'nós', 'lhe', 'deles', 'essas', 'esses', 'pelas', 'este', 'dele', 'tu', 'te', 'vocês', 'vos', 'lhes', 'meus', 'minhas', 'teu', 'tua', 'teus', 'tuas', 'nosso', 'nossa', 'nossos', 'nossas', 'dela', 'delas', 'esta', 'estes', 'estas', 'aquele', 'aquela', 'aqueles', 'aquelas', 'isto', 'aquilo', 'estou', 'está', 'estamos', 'estão', 'estive', 'esteve', 'estivemos', 'estiveram', 'estava', 'estávamos', 'estavam', 'estivera', 'estivéramos', 'esteja', 'estejamos', 'estejam', 'estivesse', 'estivéssemos', 'estivessem', 'estiver', 'estivermos', 'estiverem', 'hei', 'há', 'havemos', 'hão', 'houve', 'houvemos', 'houveram', 'houvera', 'houvéramos', 'haja', 'hajamos', 'hajam', 'houvesse', 'houvéssemos', 'houvessem', 'houver', 'houvermos', 'houverem', 'houverei', 'houverá', 'houveremos', 'houverão', 'houveria', 'houveríamos', 'houveriam', 'sou', 'somos', 'são', 'era', 'éramos', 'eram', 'fui', 'foi', 'fomos', 'foram', 'fora', 'fôramos', 'seja', 'sejamos', 'sejam', 'fosse', 'fôssemos', 'fossem', 'for', 'formos', 'forem', 'serei', 'será', 'seremos', 'serão', 'seria', 'seríamos', 'seriam', 'tenho', 'tem', 'temos', 'tém', 'tinha', 'tínhamos', 'tinham', 'tive', 'teve', 'tivemos', 'tiveram', 'tivera', 'tivéramos', 'tenha', 'tenhamos', 'tenham', 'tivesse', 'tivéssemos', 'tivessem', 'tiver', 'tivermos', 'tiverem', 'terei', 'terá', 'teremos', 'terão', 'teria', 'teríamos', 'teriam']
###Markdown
Wordnet WordNet is a lexical database (in English). It is a kind of dictionary created specifically for natural language processing.
###Code
from nltk.corpus import wordnet
syn = wordnet.synsets('cookbook')[0]
syn.name()
syn.definition()
wordnet.synsets('cooking')[0].examples()
###Output
_____no_output_____
###Markdown
Collocations Collocations are two or more words that tend to appear frequently together, such as "United States" or "Rio Grande do Sul". These words can generate different combinations and therefore the context is also important in natural language processing.
###Code
from nltk.collocations import BigramCollocationFinder
from nltk.corpus import stopwords
from nltk.corpus import webtext
from nltk.metrics import BigramAssocMeasures
words_lower = [word.lower() for word in webtext.words('grail.txt')]
bcf = BigramCollocationFinder.from_words(words_lower)
bcf.nbest(BigramAssocMeasures.likelihood_ratio, 4)
stop_words = set(stopwords.words('english'))
bcf.apply_word_filter(lambda word: len(word) < 3 or word in stop_words)
bcf.nbest(BigramAssocMeasures.likelihood_ratio, 4)
###Output
_____no_output_____
###Markdown
Stemming Words Stemming is the technique of removing suffixes and prefixes from a word; the remaining base is called the "stem". For example, the stem of the word "cooking" is "cook". A good algorithm knows that "ing" is a suffix and can be removed. Stemming is widely used in search engines for indexing words. Instead of storing all word forms, a search engine stores only the word stems, reducing the index size and improving search performance.
###Code
from nltk.stem import LancasterStemmer
from nltk.stem import PorterStemmer
from nltk.stem import RegexpStemmer
from nltk.stem import SnowballStemmer
stemmer = PorterStemmer()
stemmer.stem('eating')
stemmer.stem('generously')
stemmer = LancasterStemmer()
stemmer.stem('eating')
stemmer.stem('generously')
stemmer = RegexpStemmer('ing')
stemmer.stem('eating')
print(SnowballStemmer.languages)
stemmer = SnowballStemmer('english')
stemmer.stem('eating')
stemmer.stem('generously')
###Output
_____no_output_____
###Markdown
Corpus A Corpus is a collection of text documents, and Corpora is the plural of Corpus. The term comes from the Latin word for "body" (in this case, the body of a text). A custom Corpus is a collection of text files organized in a directory. To train a custom model as part of a text classification process (such as text analysis), you need to create your own Corpus and train on it.
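A minimal, hypothetical sketch of how such a word-list file could be created before reading it back (the file name matches the cell below, but the word list itself is made up):
###Code
# Hypothetical: write a small word-list corpus, one entry per line,
# so it can be read back with WordListCorpusReader in the next cell
corpus_words = ["natural", "language", "processing", "with", "nltk"]
open('aux/custom-corpus.txt', 'w').write("\n".join(corpus_words))
###Output
_____no_output_____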
###Code
from nltk.corpus import brown
from nltk.corpus.reader import WordListCorpusReader
from nltk.tokenize import line_tokenize
###Output
_____no_output_____
###Markdown
**Creating a custom Corpus**
###Code
reader = WordListCorpusReader('.', ['aux/custom-corpus.txt'])
reader.words()
reader.fileids()
reader.raw()
line_tokenize(reader.raw())
print(brown.categories())
###Output
['adventure', 'belles_lettres', 'editorial', 'fiction', 'government', 'hobbies', 'humor', 'learned', 'lore', 'mystery', 'news', 'religion', 'reviews', 'romance', 'science_fiction']
|
backend/api/ocr/train/ocr/train-ocr.ipynb | ###Markdown
CRNN OCR model training
###Code
import os
import numpy as np
import torch
from PIL import Image
import numpy as np
from torch.autograd import Variable
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import numpy as np
# from warpctc_pytorch import CTCLoss
import torch.nn as nn
import sys
print (os.getcwd())
###Output
/home/nhydev/github/chineseocr/chineseocr/train/ocr
###Markdown
Create a symlink to the training data
###Code
!ln -s /tmp/ICDR2019/train_images ../data/ocr/1
###Output
_____no_output_____
###Markdown
Load the dataset
###Code
import os
os.chdir('../../')
from train.ocr.dataset import PathDataset,randomSequentialSampler,alignCollate
from glob import glob
from sklearn.model_selection import train_test_split
roots = glob('./train/data/ocr/*/*.jpg')
print (roots)
print (os.getcwd())
###Output
/home/nhydev/github/chineseocr/chineseocr
###Markdown
Training character set
###Code
alphabetChinese = '1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
trainP,testP = train_test_split(roots,test_size=0.1)## note: this split does not account for character balance
traindataset = PathDataset(trainP,alphabetChinese)
testdataset = PathDataset(testP,alphabetChinese)
batchSize = 32
workers = 1
imgH = 32
imgW = 280
keep_ratio = True
cuda = True
ngpu = 1
nh =256
sampler = randomSequentialSampler(traindataset, batchSize)
train_loader = torch.utils.data.DataLoader(
traindataset, batch_size=batchSize,
shuffle=False, sampler=None,
num_workers=int(workers),
collate_fn=alignCollate(imgH=imgH, imgW=imgW, keep_ratio=keep_ratio))
train_iter = iter(train_loader)
###Output
_____no_output_____
###Markdown
Load the pretrained model weights
###Code
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
from crnn.network_torch import CRNN
from config import ocrModel,LSTMFLAG,GPU
model = CRNN(32, 1, len(alphabetChinese)+1, 256, 1,lstmFlag=LSTMFLAG)
model.apply(weights_init)
preWeightDict = torch.load(ocrModel,map_location=lambda storage, loc: storage)## load the weights previously trained for this project
modelWeightDict = model.state_dict()
for k, v in preWeightDict.items():
name = k.replace('module.','') # remove `module.`
if 'rnn.1.embedding' not in name:## do not load the final layer's weights
modelWeightDict[name] = v
model.load_state_dict(modelWeightDict)
model
## Optimizer
from crnn.util import strLabelConverter
lr = 0.1
optimizer = optim.Adadelta(model.parameters(), lr=lr)
converter = strLabelConverter(''.join(alphabetChinese))
# criterion = CTCLoss()
criterion = nn.CTCLoss()
print (torch.cuda.is_available())
from train.ocr.dataset import resizeNormalize
from crnn.util import loadData
image = torch.FloatTensor(batchSize, 3, imgH, imgH)
text = torch.IntTensor(batchSize * 5)
length = torch.IntTensor(batchSize)
if torch.cuda.is_available():
model.cuda()
model = torch.nn.DataParallel(model, device_ids=[0])## wrap the model for (multi-)GPU training
image = image.cuda()
criterion = criterion.cuda()
def trainBatch(net, criterion, optimizer,cpu_images, cpu_texts):
#data = train_iter.next()
#cpu_images, cpu_texts = data
batch_size = cpu_images.size(0)
loadData(image, cpu_images)
t, l = converter.encode(cpu_texts)
loadData(text, t)
loadData(length, l)
preds = net(image)
preds_size = Variable(torch.IntTensor([preds.size(0)] * batch_size))
cost = criterion(preds, text, preds_size, length) / batch_size
net.zero_grad()
cost.backward()
optimizer.step()
return cost
def predict(im):
"""
Run prediction on a single image
"""
image = im.convert('L')
scale = image.size[1]*1.0 / 32
w = image.size[0] / scale
w = int(w)
transformer = resizeNormalize((w, 32))
image = transformer(image)
if torch.cuda.is_available():
image = image.cuda()
image = image.view(1, *image.size())
image = Variable(image)
preds = model(image)
_, preds = preds.max(2)
preds = preds.transpose(1, 0).contiguous().view(-1)
preds_size = Variable(torch.IntTensor([preds.size(0)]))
sim_pred = converter.decode(preds.data, preds_size.data, raw=False)
return sim_pred
def val(net, dataset, max_iter=100):
for p in net.parameters():
p.requires_grad = False
net.eval()
i = 0
n_correct = 0
N = len(dataset)
max_iter = min(max_iter, N)
for i in range(max_iter):
im,label = dataset[np.random.randint(0,N)]
if im.size[0]>1024:
continue
pred = predict(im)
if pred.strip() ==label:
n_correct += 1
accuracy = n_correct / float(max_iter )
return accuracy
from train.ocr.generic_utils import Progbar
## progress bar; see https://github.com/keras-team/keras/blob/master/keras/utils/generic_utils.py
###Output
_____no_output_____
###Markdown
Model training
###Code
print (len(train_loader))
###Output
158
###Markdown
Freeze the pretrained layers' parameters
###Code
nepochs = 10
acc = 0
interval = len(train_loader)//2## how often to evaluate the model
for i in range(nepochs):
print('epoch:{}/{}'.format(i,nepochs))
n = len(train_loader)
pbar = Progbar(target=n)
train_iter = iter(train_loader)
loss = 0
for j in range(n):
for p in model.named_parameters():
p[1].requires_grad = True
if 'rnn.1.embedding' in p[0]:
p[1].requires_grad = True
else:
p[1].requires_grad = False## freeze this pretrained layer
model.train()
cpu_images, cpu_texts = train_iter.next()
cost = trainBatch(model, criterion, optimizer,cpu_images, cpu_texts)
loss += cost.data.cpu().numpy()
if (j+1)%interval==0:
curAcc = val(model, testdataset, max_iter=1024)
if curAcc>acc:
acc = curAcc
torch.save(model.state_dict(), 'train/ocr/modellstm.pth')
pbar.update(j+1,values=[('loss',loss/((j+1)*batchSize)),('acc',acc)])
###Output
epoch:0/10
###Markdown
Unfreeze the model layers' parameters
###Code
nepochs = 10
#acc = 0
interval = len(train_loader)//2## how often to evaluate the model
for i in range(10,10+nepochs):
print('epoch:{}/{}'.format(i,nepochs))
n = len(train_loader)
pbar = Progbar(target=n)
train_iter = iter(train_loader)
loss = 0
for j in range(n):
for p in model.named_parameters():
p[1].requires_grad = True
model.train()
cpu_images, cpu_texts = train_iter.next()
cost = trainBatch(model, criterion, optimizer,cpu_images, cpu_texts)
loss += cost.data.cpu().numpy() # .cpu() added so this also works when training on the GPU (as in the earlier loop)
if (j+1)%interval==0:
curAcc = val(model, testdataset, max_iter=1024)
if curAcc>acc:
acc = curAcc
torch.save(model.state_dict(), 'train/ocr/modellstm.pth')
pbar.update(j+1,values=[('loss',loss/((j+1)*batchSize)),('acc',acc)])
###Output
_____no_output_____
###Markdown
Prediction demo
###Code
model.eval()
N = len(testdataset)
im,label = testdataset[np.random.randint(0,N)]
pred = predict(im)
print('true:{},pred:{}'.format(label,pred))
im
###Output
_____no_output_____ |
notebooks/1.0-initial-data-exploration.ipynb | ###Markdown
5-Minute Craft Youtube Video TitlesTeam members: Douglas Greaves, Julio Oliveira, Satoshi TaniguchiData Source: https://www.kaggle.com/shivamb/5minute-crafts-video-views-datasetThis notebook aims to explore characteristics and generate insights from video titles of the Youtube channel [5-Minute Crafts](https://www.youtube.com/channel/UC295-Dw_tDNtZXFeAPAW6Aw)
###Code
import pickle
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import nltk
from sklearn.cluster import KMeans
from yellowbrick.cluster import KElbowVisualizer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer
pd.options.display.float_format = '{:.2f}'.format
###Output
_____no_output_____
###Markdown
Exploring the Dataset
###Code
df = pd.read_csv('../data/external/5-Minute Crafts.csv')
###Output
_____no_output_____
###Markdown
The dataset consists of the video_id, title, and previously calculated metrics.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Total number of rows and columns:
###Code
df.shape
###Output
_____no_output_____
###Markdown
Checking the video ids, we can confirm that all videos are unique.
###Code
len(df.video_id.unique())
###Output
_____no_output_____
###Markdown
Although all video ids are unique, a small number of video titles repeat.
###Code
len(df.title.unique())
###Output
_____no_output_____
###Markdown
The dataset does not have any null values
###Code
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
Let's check some descriptive statistics about the videos:
* Although the channel is called 5-Minute Crafts, 96% of the videos have a duration higher than 6 minutes, and 75% are longer than 11 minutes.
* The channel has a video with over 280M views. 75% of the videos have more than 567K views.
* The mean number of words in a video title is 8.
* The average word length is 5.46, which is close to the average length of English words of 5.1 according to [WolframAlpha](https://www.wolframalpha.com/input/?i=average+english+word+length).
* 93% of their videos have a digit in the title.
###Code
df[['duration_seconds', 'total_views', 'num_words','avg_word_len', 'contain_digits']].describe()
df[df.duration_seconds > 360].shape[0] / df.shape[0]
###Output
_____no_output_____
###Markdown
DistributionEvaluating the distribution of active days since, we can notice that the dataset does not provide exact dates for when the video was published. This happens possibly because of how Youtube shows on its page the time since the video was published.The number of videos with 'active days since' less than 365 is tiny. Therefore, comparing those videos with older ones may be misleading. For this reason, we drop them from the dataset.
###Code
accumulated_percent = df.active_since_days.value_counts().sort_index() / df.active_since_days.count()
accumulated_percent = accumulated_percent.cumsum()
fig, ax = plt.subplots(figsize=(12, 6))
ax.plot(accumulated_percent.index, accumulated_percent)
ax.set(xlabel="Active days since",
ylabel="Frequency",
title="Accumalated distribution of Active days since")
accumulated_percent
df = df[df.active_since_days >= 365]
df.shape
###Output
_____no_output_____
###Markdown
We first expected that older videos would have higher total views; however, looking at the box plot of 'Total Views' x 'Active Days Since', there is no clear distinction between those groups. This can be related to many reasons, one of them being the channel's subscriber growth rate over time.
###Code
fig, ax = plt.subplots(figsize=(12, 6))
ax.set(xlabel="Total Views",
ylabel="Active Days Since",
title="Total Views / Active Days since")
ax = sns.boxplot(x="active_since_days", y="total_views",
data=df, ax = ax, palette=['gray','blue'], showfliers=False)
###Output
_____no_output_____
###Markdown
As we noticed in the descriptive statistics of the dataset, the distribution of video durations is skewed to the right.
###Code
fig, ax = plt.subplots(figsize=(12, 6))
ax.hist(df.duration_seconds, bins=100)
ax.set(xlabel="Duration seconds",
ylabel="Frequency",
title="Distribution of duration seconds")
df.reset_index(inplace=True, drop=True)
###Output
_____no_output_____
###Markdown
Feature Engineering. The features 'num_words_uppercase' and 'num_words_lowercase' are very similar to each other. For this reason, we deleted them and created a feature representing the percentage of uppercase words out of the total number of words in the title.
###Code
df['perc_uppercase'] = df.num_words_uppercase / df.num_words
df.drop(['num_words_uppercase','num_words_lowercase'],axis=1, inplace=True)
original_features = df.columns
###Output
_____no_output_____
###Markdown
To evaluate the **relevance** of the words, we removed the stopwords and calculated the TF-IDF (term frequency weighted by inverse document frequency).
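For reference, with scikit-learn's default settings for `TfidfVectorizer` (smooth idf and L2 normalization), each weight is computed as $$\text{tf-idf}(t,d) = \text{tf}(t,d) \times \left( \ln\frac{1+n}{1+\text{df}(t)} + 1 \right),$$ where $n$ is the number of titles and $\text{df}(t)$ is the number of titles containing term $t$; the resulting row vectors are then L2-normalized.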
###Code
df['title'] = df.title.str.lower()
nltk.download('stopwords')
stopwords = nltk.corpus.stopwords.words('english')
vectorizer = TfidfVectorizer(stop_words=stopwords, token_pattern=r"\b[^\d\W]+\b")
sdf = pd.DataFrame.sparse.from_spmatrix(vectorizer.fit_transform(df.title),
columns=vectorizer.get_feature_names_out())
df = df.join(sdf)
###Output
_____no_output_____
###Markdown
Clustering. We noted before that the videos behave differently with respect to active_since_days, duration_seconds, and total_views. Therefore we used K-means to split the videos into clusters based on these 3 variables. Normalization: the 3 selected features have very different magnitudes, so before running K-means we use the StandardScaler to normalize the data.
###Code
X = df[['active_since_days', 'duration_seconds','total_views']].copy()
X = StandardScaler().fit_transform(X)
###Output
_____no_output_____
###Markdown
Using the elbow technique, we could notice that the best number of clusters is 9.
###Code
kmeans = KMeans()
elbow = KElbowVisualizer(kmeans, k=(4,16))
elbow.fit(X)
elbow.show()
# Uncoment the lines below for retraining the model.
# Attention, the cluster names steps below will need to be renamed.
# kmeans = KMeans(elbow.elbow_value_)
# kmeans.fit(X)
# with open('../models/kmeans.pkl', 'wb') as file:
# pickle.dump(kmeans, file)
with open('../models/kmeans.pkl', 'rb') as file:
kmeans = pickle.load(file)
###Output
_____no_output_____
###Markdown
Looking into the size of each cluster, we can notice that some clusters are tiny. We will focus our study on the clusters with more than 100 videos.
###Code
df['cluster'] = kmeans.predict(X)
df.cluster.value_counts()
MIN_LEN = 100
clusters_len = df.cluster.value_counts().sort_index()
clusters_filter = {i for i,value in enumerate(clusters_len) if value > MIN_LEN}
clusters_filter
filtered_clusters = df[df.cluster.isin(clusters_filter)].copy()
###Output
_____no_output_____
###Markdown
Now that we filtered the clusters, we can visualize them using the 3 features selected to understand what they represent.
###Code
cols_to_plot = ['active_since_days', 'duration_seconds',
'total_views', 'cluster']
g = sns.PairGrid(filtered_clusters[cols_to_plot], hue="cluster", palette="Paired")
g.map_diag(sns.histplot)
g.map_offdiag(sns.scatterplot)
g.add_legend()
###Output
_____no_output_____
###Markdown
We named each cluster by looking into the characteristics in the chart above.
###Code
cluster_names = {
0: '3 years ',
2: '1 years',
4: 'Long duration',
7: '2 years',
8: 'Top performers',
5: '4 years'
}
filtered_clusters['cluster_name'] = df.cluster.map(cluster_names)
###Output
/tmp/ipykernel_14097/3347239372.py:10: PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance. Consider joining all columns at once using pd.concat(axis=1) instead. To get a de-fragmented frame, use `newframe = frame.copy()`
filtered_clusters['cluster_name'] = df.cluster.map(cluster_names)
###Markdown
Text Mining. For our first exploration of the titles, we create the same kind of chart as above, but over the title features for each cluster.
###Code
cols_to_plot = ['num_words', 'num_punctuation',
'num_stopwords', 'avg_word_len', 'contain_digits',
'perc_uppercase', 'cluster_name']
g = sns.PairGrid(filtered_clusters[cols_to_plot], hue="cluster_name", palette='Paired')
g.map_diag(sns.histplot)
g.map_offdiag(sns.scatterplot)
g.add_legend()
###Output
_____no_output_____
###Markdown
Evaluating the chart above, we can notice some interesting points about how the YouTube channel changed its title strategy over time. Insights:
* 1 year videos: more words, more punctuation
* 3 years videos: more uppercase words
* 4 years videos: more titles containing digits
* Top performers: less punctuation

Let's explore these insights.
###Code
fig, ax = plt.subplots(figsize=(12, 6))
ax.set(xlabel="Number of Words",
ylabel="Active Days Since",
title="Number of Words / Active Days Since")
data = df[df.active_since_days.isin({365, 1460})]
ax = sns.boxplot(x="active_since_days", y="num_words",
data=data, ax = ax, palette=['gray','blue'])
df.groupby('active_since_days')['contain_digits'].mean()
fig, ax = plt.subplots(figsize=(12, 6))
ax.set(xlabel="Percentage of Uppercase",
ylabel="Active Days Since",
title="Percentage of uppercase / Active Days since")
data = df
ax = sns.boxplot(x="active_since_days", y="perc_uppercase",
data=data, ax = ax, palette=['gray','blue'])
cols_to_drop = list(original_features)+['cluster_name', 'cluster']
for c in filtered_clusters.cluster_name.unique():
print(f'Evaluating most relevant words for each cluster: {c}')
cluster_data = filtered_clusters[filtered_clusters.cluster_name == c].copy()
cluster_data = cluster_data.drop(cols_to_drop, axis=1)
print(cluster_data.mean().sort_values()[-10::])
print('---------------\n')
###Output
Evaluating most relevant words for each cluster: 1 years
know 0.02
crazy 0.03
cool 0.03
home 0.03
try 0.03
life 0.03
tricks 0.03
diy 0.04
ideas 0.06
hacks 0.07
dtype: float64
---------------
Evaluating most relevant words for each cluster: Top performers
know 0.02
crazy 0.02
simple 0.02
ideas 0.02
cool 0.02
tricks 0.03
beauty 0.03
make 0.03
life 0.07
hacks 0.10
dtype: float64
---------------
Evaluating most relevant words for each cluster: Long duration
top 0.04
time 0.04
epic 0.04
crafts 0.04
beauty 0.05
compilation 0.08
life 0.09
best 0.10
live 0.11
hacks 0.11
dtype: float64
---------------
Evaluating most relevant words for each cluster: 2 years
easy 0.02
make 0.02
cool 0.02
know 0.03
tricks 0.03
beauty 0.03
crazy 0.04
ideas 0.04
life 0.05
hacks 0.10
dtype: float64
---------------
Evaluating most relevant words for each cluster: 3 years
try 0.02
ideas 0.02
cool 0.02
tips 0.02
make 0.02
know 0.03
tricks 0.03
easy 0.03
life 0.05
hacks 0.09
dtype: float64
---------------
Evaluating most relevant words for each cluster: 4 years
tricks 0.02
know 0.02
ways 0.03
tips 0.03
make 0.03
life 0.04
hacks 0.05
l 0.06
minute 0.06
crafts 0.06
dtype: float64
---------------
###Markdown
Insights:
* Long duration videos have relevant words like "Live" and "Compilation".
* Comparing 4 years to 1 year: "craft" and "minute" used to be among the most relevant words in the title, which changed in the most recent videos.

Are there words correlated to the video performance?
###Code
correlation = df.corr()
correlation.sort_values('total_views').total_views
###Output
_____no_output_____
###Markdown
Looking at the most positively correlated words, we notice that the R score is not high. However, the correlation between the tf-idf of the word "fortune" and total views is higher than that of "active_since_days". We will explore the titles containing this word further below.
###Code
df['has_fortune'] = df.fortune > 0
fortune_metrics = df.groupby(['active_since_days','has_fortune'])['total_views'].mean()
fortune_metrics
fig, ax = plt.subplots(figsize=(12, 6))
ax.set(xlabel="Total Views",
ylabel="Active Days Since",
title="Total Views / Active Days since")
ax.scatter(df[df.has_fortune == False].active_since_days, df[df.has_fortune == False].total_views,c='gray', zorder=1)
ax.scatter(df[df.has_fortune == True].active_since_days, df[df.has_fortune == True].total_views,c='blue', zorder=10)
###Output
_____no_output_____
###Markdown
In the chart above, we compared the mean total_views of videos containing the word fortune against the other videos from the same year. The videos with the word fortune have considerably higher view counts.
###Code
df[df.fortune > 0][['title','total_views']]
###Output
_____no_output_____
###Markdown
Sample notebookExample formula.$$MSE = \frac{1}{m} \sum_{i=1}^{m} (h_{\theta}(x)^{(i)} - y^{(i)})^2$$
###Code
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
print(np.array([4, 5, 6]))
df = pd.DataFrame()
clf = DecisionTreeClassifier()
###Output
[4 5 6]
###Markdown
Exploring the file with coordinates etc
###Code
# Imports needed by this section (missing from the cells above):
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
data = np.load("../Data/C1-371_new/C1-371_processed.npy").T
df = pd.DataFrame(data=data, columns=["x_pos","y_pos","z_pos","lat","lon"])
sns.pairplot(df, corner=True, diag_kind="kde")
data[:,:3].shape
pca = PCA(n_components=2).fit_transform(data[:,:3])
tsne = TSNE(n_components=2, init='pca').fit_transform(data[:,:3])
plt.subplot(1,2,1)
sns.scatterplot(x=pca[:,0], y=pca[:,1])
plt.subplot(1,2,2)
sns.scatterplot(x=tsne[:,0], y=tsne[:,1])
###Output
_____no_output_____
###Markdown
Distance from Zero point
###Code
df['distance_from_zero'] = df.apply(lambda row: np.sqrt(row.x_pos**2 + row.y_pos**2 +row.z_pos**2), axis=1)
df.distance_from_zero
###Output
_____no_output_____
###Markdown
Pairwise distance
###Code
example = data[:3,:229]
example.shape
distance = []
for i in range(example.shape[1]):
for j in range(1, example.shape[1]):
dist = np.sqrt(np.sum(example[:,i] - example[:,j])**2)
distance.append(dist)
distance = np.array(distance)
distance -= np.mean(distance)
distance /= np.std(distance)
sns.distplot(distance)
###Output
_____no_output_____
###Markdown
Exploring Measurements
###Code
from pymongo import MongoClient # missing import added
client = MongoClient()
db = client.ChannelCharting
collection = db.measurements
cursor = collection.find({},
{"real": 1, "imag":1},
limit=0
)
full_data = np.stack([np.array(list(zip(data['real'], data['imag']))) for data in cursor])
plt.plot(full_data[:,:,0])
###Output
_____no_output_____
###Markdown
Kmeans
###Code
from sklearn.cluster import KMeans
kmean = KMeans()
res = kmean.fit_transform(full_data.reshape(-1, 9009*2))
res.shape
means = np.mean(res, axis=0)
plt.plot(res[1])
example = np.sqrt(full_data[:,:,1]**2 + full_data[:,:,0]**2)
channel_distance = []
for i in range(example.shape[0]):
for j in range(1, example.shape[0]):
dist = np.sqrt(np.sum(example[:,i] - example[:,j])**2)
channel_distance.append(dist)
len(channel_distance)
channel_distance = np.array(channel_distance)
channel_distance -= np.mean(channel_distance)
channel_distance /= np.std(channel_distance)
sns.distplot(channel_distance[:50])
sns.distplot(channel_distance[50:])
plt.plot(distance)
plt.plot(channel_distance)
###Output
_____no_output_____
###Markdown
Try a PCA
###Code
import sklearn.decomposition
dataset_full = full_data.reshape(229, 9009*2)
pca = sklearn.decomposition.PCA(n_components=0.5, svd_solver='full')
reduced = pca.fit_transform(full_data[:,:,0])
plt.plot(reduced[:])
###Output
_____no_output_____
###Markdown
Autoencoder
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torch.autograd import Variable
from sklearn.model_selection import train_test_split
dataset = torch.Tensor(dataset_full)
train, test = train_test_split(dataset)
dataset.shape
train_loader = DataLoader(train, batch_size=64)
test_loader = DataLoader(test)
class AE(nn.Module):
def __init__(self):
super(AE, self).__init__()
self.encoder = nn.Sequential(
nn.Linear(dataset.shape[1], 24),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(24, 8),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(8,2),
nn.Softplus()
)
self.decoder = nn.Sequential(
nn.Linear(2,8),
nn.ReLU(True),
nn.Linear(8, 24),
nn.ReLU(True),
nn.Linear(24, dataset.shape[1])
)
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
model = AE()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
for e in range(30):
#train
for data in train_loader:
model.train()
x = Variable(data)
#forward pass
output = model(x)
train_loss = criterion(output, x)
#backprop
optimizer.zero_grad()
train_loss.backward()
optimizer.step()
#eval
for data in test_loader:
model.eval()
x = Variable(data)
output = model(x)
test_loss = criterion(output, x)
#log
    print('epoch [{}/{}], train_loss:{:.4f}'.format(e + 1, 30, train_loss.item()))
    print('epoch [{}/{}], test_loss:{:.4f}'.format(e + 1, 30, test_loss.item()))
encoding_1 = model.encoder(dataset[0:1])
encoding_200 = model.encoder(dataset[100:101])
encoding_1, encoding_200
df
reconstructed = model(test)
plt.plot(reconstructed[0].detach().numpy())
plt.plot(test[0].detach().numpy())
plt.plot(torch.mean(test-reconstructed, axis=0).detach().numpy())
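# Sketch (assumption: the aim is a 2-D "channel chart"): embed every measurement with
# the trained encoder and scatter-plot the latent coordinates.
model.eval()
with torch.no_grad():
    latent = model.encoder(dataset).numpy()
plt.scatter(latent[:, 0], latent[:, 1], s=10)
plt.title('2-D latent embedding of all measurements')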
###Output
_____no_output_____ |
Sections/Section1-Spark-Basics/2.SparkSQL/1.SparkSQL.ipynb | ###Markdown
Dataframes * Dataframes are a restricted sub-type of RDDs. * Restricting the type allows for more optimization. * Dataframes store two dimensional data, similar to the type of data stored in a spreadsheet. * Each column in a dataframe can have a different type. * Each row contains a `record`. * Similar to, but not the same as, [pandas dataframes](http://pandas.pydata.org/pandas-docs/stable/dsintro.htmldataframe) and [R dataframes](http://www.r-tutor.com/r-introduction/data-frame)
###Code
from pyspark import SparkContext
sc = SparkContext(master="local[4]")
sc.version
import os
import sys
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import Row, StructField, StructType, StringType, IntegerType
%pylab inline
# Just like using Spark requires having a SparkContext, using SQL requires an SQLContext
sqlContext = SQLContext(sc)
sqlContext
###Output
_____no_output_____
###Markdown
Constructing a DataFrame from an RDD of RowsEach Row defines its own fields, the schema is *inferred*.
###Code
# One way to create a DataFrame is to first define an RDD from a list of Rows
some_rdd = sc.parallelize([Row(name=u"John", age=19),
Row(name=u"Smith", age=23),
Row(name=u"Sarah", age=18)])
some_rdd.collect()
# The DataFrame is created from the RDD or Rows
# Infer schema from the first row, create a DataFrame and print the schema
some_df = sqlContext.createDataFrame(some_rdd)
some_df.printSchema()
# A dataframe is an RDD of rows plus information on the schema.
# performing **collect()* on either the RDD or the DataFrame gives the same result.
print(type(some_rdd),type(some_df))
print('some_df =',some_df.collect())
print('some_rdd=',some_rdd.collect())
###Output
<class 'pyspark.rdd.RDD'> <class 'pyspark.sql.dataframe.DataFrame'>
some_df = [Row(age=19, name='John'), Row(age=23, name='Smith'), Row(age=18, name='Sarah')]
some_rdd= [Row(age=19, name='John'), Row(age=23, name='Smith'), Row(age=18, name='Sarah')]
###Markdown
Defining the Schema explicitlyThe advantage of creating a DataFrame using a pre-defined schema is that the content of the RDD can be simple tuples, rather than Rows.
###Code
# In this case we create the dataframe from an RDD of tuples (rather than Rows) and provide the schema explicitly
another_rdd = sc.parallelize([("John", 19), ("Smith", 23), ("Sarah", 18)])
# Schema with two fields - person_name and person_age
schema = StructType([StructField("person_name", StringType(), False),
StructField("person_age", IntegerType(), False)])
# Create a DataFrame by applying the schema to the RDD and print the schema
another_df = sqlContext.createDataFrame(another_rdd, schema)
another_df.printSchema()
# root
#  |-- person_name: string (nullable = false)
#  |-- person_age: integer (nullable = false)
###Output
root
|-- person_name: string (nullable = false)
|-- person_age: integer (nullable = false)
###Markdown
Loading DataFrames from diskThere are many methods to load DataFrames from disk. Here we will discuss three of these methods1. Parquet2. JSON (on your own)3. CSV (on your own)In addition, there are APIs for connecting Spark to an external database. We will not discuss this type of connection in this class. Loading dataframes from JSON files[JSON](http://www.json.org/) is a very popular readable file format for storing structured data.Among its many uses are **twitter**, `javascript` communication packets, and many others. In fact this notebook file (with the extension `.ipynb`) is in JSON format. JSON can also be used to store tabular data and can be easily loaded into a dataframe.
###Code
!wget 'https://mas-dse-open.s3.amazonaws.com/Moby-Dick.txt' -P ../../Data/
# when loading json files you can specify either a single file or a directory containing many json files.
path = "../../Data/people.json"
!cat $path
# Create a DataFrame from the file(s) pointed to by path
people = sqlContext.read.json(path)
print('people is a',type(people))
# The inferred schema can be visualized using the printSchema() method.
people.show()
people.printSchema()
###Output
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
people is a <class 'pyspark.sql.dataframe.DataFrame'>
+----+-------+
| age| name|
+----+-------+
|null|Michael|
| 30| Andy|
| 19| Justin|
+----+-------+
root
|-- age: long (nullable = true)
|-- name: string (nullable = true)
###Markdown
Exercise: Loading csv files into dataframesSpark 2.0 includes a facility for reading csv files. In this exercise you are to create similar functionality using your own code.You are to write a class called `csv_reader` which has the following methods:* `__init__(self,filepath):` receives as input the path to a csv file. It throws an exception `NoSuchFile` if the file does not exist.* `Infer_Schema()` opens the file, reads the first 10 lines (or fewer if the file is shorter), and infers the schema. The first line of the csv file defines the column names. The following lines should have the same number of columns and all of the elements of a column should be of the same type. The only types allowed are `int`, `float`, `string`. The method infers the types of the columns, checks that they are consistent, and defines a dataframe schema of the form:```pythonschema = StructType([StructField("person_name", StringType(), False), StructField("person_age", IntegerType(), False)])```If everything checks out, the method defines a `self.` variable that stores the schema and returns the schema as its output. If an error is found, an exception `BadCsvFormat` is raised.* `read_DataFrame()`: reads the file, parses it and creates a dataframe using the inferred schema. If one of the lines beyond the first 10 (i.e. a line that was not read by `Infer_Schema`) is not parsed correctly, the line is not added to the DataFrame. Instead, it is added to an RDD called `bad_lines`.The method returns the DataFrame and the `bad_lines` RDD. Parquet files * [Parquet](http://parquet.apache.org/) is a popular columnar format. * Spark SQL allows [SQL](https://en.wikipedia.org/wiki/SQL) queries to retrieve a subset of the rows without reading the whole file. * Compatible with HDFS: allows parallel retrieval on a cluster. * Parquet compresses the data in each column. Spark and Hive* Parquet is a **file format**, not an independent database server.* Spark can work with the [Hive](https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started) relational database system that supports the full array of database operations.* Hive is compatible with HDFS.
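As a rough starting point for the `csv_reader` exercise (a sketch only — the type inference is kept minimal, it assumes the file has at least one data row, and `read_DataFrame()` is left for you to complete):
###Code
import os
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, FloatType

class NoSuchFile(Exception): pass
class BadCsvFormat(Exception): pass

class csv_reader:
    def __init__(self, filepath):
        if not os.path.exists(filepath):
            raise NoSuchFile(filepath)
        self.filepath = filepath
        self.schema = None

    def _infer_type(self, value):
        # try int, then float, otherwise fall back to string
        for caster, spark_type in ((int, IntegerType()), (float, FloatType())):
            try:
                caster(value)
                return spark_type
            except ValueError:
                pass
        return StringType()

    def Infer_Schema(self):
        with open(self.filepath) as f:
            header = f.readline().strip().split(',')
            sample = [line.strip().split(',') for line in
                      [f.readline() for _ in range(10)] if line.strip()]
        types = [self._infer_type(v) for v in sample[0]]
        for row in sample:
            if len(row) != len(header) or [self._infer_type(v) for v in row] != types:
                raise BadCsvFormat(self.filepath)
        self.schema = StructType([StructField(name, t, True)
                                  for name, t in zip(header, types)])
        return self.schema

    def read_DataFrame(self):
        # TODO (exercise): parse the full file with self.schema, collect
        # unparseable lines into a bad_lines RDD, and return (DataFrame, bad_lines).
        raise NotImplementedError
###Output
_____no_output_____
###Markdown
Back to Parquet: the next cells load the example `users.parquet` file.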
###Code
dir='../../Data'
parquet_file=dir+"/users.parquet"
!ls $dir
#load a Parquet file
print(parquet_file)
df = sqlContext.read.load(parquet_file)
df.show()
df2=df.select("name", "favorite_color")
df2.show()
outfilename="namesAndFavColors.parquet"
!rm -rf $dir/$outfilename
df2.write.save(dir+"/"+outfilename)
!ls -ld $dir/$outfilename
###Output
drwxrwxrwx 1 jovyan staff 4096 Apr 9 2018 ../../Data/namesAndFavColors.parquet
###Markdown
A new interface object has been added in **Spark 2.0** called **SparkSession**. A Spark session is initialized using a `builder`. For example```pythonspark = SparkSession.builder \ .master("local") \ .appName("Word Count") \ .config("spark.some.config.option", "some-value") \ .getOrCreate()```Using a SparkSession, a Parquet file is read [as follows:](http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.htmlpyspark.sql.DataFrameReader.parquet):```pythondf = spark.read.parquet('python/test_support/sql/parquet_partitioned')``` Let's have a look at a real-world dataframeThis dataframe is a small part of a large dataframe (15GB) which stores meteorological data from stations around the world. We will read the dataframe from a zipped parquet file.
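As a side note, the same Parquet read can be written with the newer `SparkSession` API (a sketch; it reuses the already-running local context via `getOrCreate()` and reads the `users.parquet` file from above):
###Code
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[4]") \
    .appName("ParquetDemo") \
    .getOrCreate()

spark.read.parquet(dir + "/users.parquet").show()
###Output
_____no_output_____
###Markdown
Now for the real-world weather dataframe described above.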
###Code
from os.path import split,join,exists
from os import mkdir,getcwd,remove
from glob import glob
# create directory if needed
notebook_dir=getcwd()
data_dir=join(split(split(notebook_dir)[0])[0],'Data')
weather_dir=join(data_dir,'Weather')
if exists(weather_dir):
print('directory',weather_dir,'already exists')
else:
print('making',weather_dir)
mkdir(weather_dir)
file_index='NY'
zip_file='%s.tgz'%(file_index) #the .csv extension is a mistake, this is a pickle file, not a csv file.
old_files='%s/%s*'%(weather_dir,zip_file[:-3])
for f in glob(old_files):
print('removing',f)
!rm -rf {f}
command="wget https://mas-dse-open.s3.amazonaws.com/Weather/by_state/%s -P %s "%(zip_file, weather_dir)
print(command)
!$command
!ls -lh $weather_dir/$zip_file
#extracting the parquet file
!tar zxvf {weather_dir}/{zip_file} -C {weather_dir}
weather_parquet = join(weather_dir, zip_file[:-3]+'parquet')
print(weather_parquet)
df = sqlContext.read.load(weather_parquet)
df.show(1)
#selecting a subset of the rows so it fits in slide.
df.select('station','year','measurement').show(5)
###Output
+-----------+----+-----------+
| station|year|measurement|
+-----------+----+-----------+
|USC00303452|1903| PRCP|
|USC00303452|1904| PRCP|
|USC00303452|1905| PRCP|
|USC00303452|1906| PRCP|
|USC00303452|1907| PRCP|
+-----------+----+-----------+
only showing top 5 rows
|
notebooks/AV/AV001-UsingFE012.ipynb | ###Markdown
AV Against FE012
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import lightgbm as lgb
from sklearn.model_selection import KFold
from sklearn import model_selection, preprocessing, metrics
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import shap
import os
#print(os.listdir("../input"))
from sklearn import preprocessing
#import xgboost as xgb
import gc
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
from tqdm import tqdm
# Any results you write to the current directory are saved as output.
train = pd.read_parquet('../../data/train_FE013.parquet')
test = pd.read_parquet('../../data/test_FE013.parquet')
FEATURES = ['V85', 'bank_type_TransactionAmt_mean', 'D5_fq_enc', 'V12',
'V81', 'V282', 'bank_type_D7_std', 'id_15', 'V13', 'C12_fq_enc', 'anomaly',
'D7_DT_D_std_score', 'D3_DT_D_min_max', 'card4_count_full',
'D14_DT_D_min_max', 'card1_count_full', 'V169', 'D3_DT_M_min_max', 'V279',
'V91', 'bank_type_D10_std', 'D14', 'D6_DT_M_std_score', 'D4_DT_W_min_max',
'V152', 'V56',
#'D3_intercept_bin0',
'D14_intercept_bin0', 'V220', 'V277',
'D12_intercept', 'ProductCD_W_00cents', 'D13_intercept_bin0', 'V291', 'V189',
'D15_DT_M_min_max', 'C5_fq_enc', 'D3_fq_enc', 'card5_fq_enc',
'addr1_count_full', 'V266', 'D11_intercept_bin2', 'V23', 'D4_intercept_bin3',
'bank_type_D10_mean', 'D2_intercept_bin3', 'V306', 'DeviceType', 'V285',
'D5_DT_W_std_score', 'V131', 'V37', 'V296', 'bank_type_D1_mean', 'V75',
'D3_DT_W_std_score', 'D10_DT_M_min_max', 'id_33_0', 'V67',
'D4_intercept_bin4', 'V256', 'V143', 'uid5_D6_std', 'ProductCD_target_mean',
'mxC3', 'V129', 'D13_DT_M_std_score', 'V24', 'D3_DT_M_std_score', 'mxC4',
'D9', 'id_30_version_fq_enc', 'D5_DT_D_std_score', 'D11_DT_M_std_score',
'uid5_D6_mean', 'D14_DT_M_std_score', 'card5_TransactionAmt_std', 'V20',
'C8_fq_enc', 'V70', 'V127', 'D6_intercept', '# ',
'sum_Cxx_binary_higher_than_q95', 'V156', 'uid4_D12_mean', 'C5',
'uid4_D12_std', 'id_30_fq_enc', 'V61', 'id_33', 'D15_to_std_addr1',
'bank_type_D9_mean',
#'D5_intercept',
'D10_DT_W_min_max', 'V130',
'bank_type_D9_std', 'uid5_D7_std', 'bank_type_D14_mean', 'bank_type_D3_std',
'bank_type_D5_mean', 'ProductCD', 'M8', 'V44', 'D6_fq_enc',
'D15_DT_D_min_max', 'D11_intercept_bin0', 'V257', 'bank_type_D7_mean', 'V76',
#'D15',
'V38', 'V55', 'V261', 'V149',
#'D4',
'D8_intercept_bin0', 'M2',
'bank_type_D6_std', 'id_30_version', 'D4_intercept_bin1',
'D15_to_mean_card4', 'V82', 'D3_DT_D_std_score', 'D10_intercept_bin3',
'bank_type_D2_std', 'V77', 'M7',
#'D11',
'D4_intercept_bin2', 'email_check',
'V294', 'V317', 'V308', 'id_33_fq_enc', 'bank_type_D5_std', 'D8_intercept',
'V62', 'V187', 'card5_TransactionAmt_mean', 'bank_type_D12_mean',
#'id_33_count_dist',
'D2_intercept_bin2', 'C10', 'V86', 'D8_DT_M_min_max',
'D15_intercept_bin4', 'D6_DT_W_std_score', 'uid5_D7_mean', 'C9_fq_enc',
'mxC10', 'D14_DT_W_std_score', 'card2_count_full', 'V258',
'bank_type_D14_std', 'D10_intercept_bin4', 'V83', 'bank_type_D13_std',
'D8_DT_W_min_max', 'TransactionAmt', 'V312', 'D14_intercept', 'id_33_1',
'D15_intercept_bin2', 'D12_DT_W_std_score', 'V78', 'D8_D9_decimal_dist',
'M9', 'V281', 'bank_type_D12_std', 'V54', 'C9', 'M4_target_mean',
'sum_Cxx_binary_higher_than_q90', 'D10_DT_D_min_max', 'bank_type_D3_mean',
'bank_type_D8_mean', 'R_emaildomain_prefix', 'bank_type_D6_mean', 'V314',
'D11_DT_W_std_score',
#'D10',
'D4_DT_D_min_max', 'V283', 'D10_intercept_bin2',
'D13_intercept', 'D8_DT_D_min_max', 'C2_fq_enc', 'V165', 'D1_intercept_bin4',
'bank_type_D13_mean',
#'D3_intercept',
'TransactionAmt_2Dec',
'card3_div_Mean_D9_DOY', 'C12', 'D4_DT_M_std_score', 'D2_intercept_bin1',
'mxC8', 'D2_fq_enc', 'addr1_third_digit', 'D4_fq_enc',
#'D1_fq_enc',
'mxC12',
'D8', 'D10_intercept_bin1', 'id_01', 'id_09', 'id_03', 'addr1_second_digit',
'D15_to_mean_addr1', 'sum_Cxx_binary_higher_than_q80', 'V53',
'TransactionAmt_decimal', 'card3_div_Mean_D6_DOY', 'D15_intercept_bin3',
'V45', 'id_02_to_std_card4', 'addr2_div_Mean_D10_DOY_productCD',
'DeviceInfo_version', 'DeviceInfo_device', 'D1_intercept_bin3',
'D11_intercept', 'DeviceInfo_version_fq_enc', 'C6', 'uid5_D13_std',
'TransactionAmt_DT_M_min_max', 'dist2', 'C8', 'D15_intercept_bin1', 'M3',
'R_emaildomain_fq_enc', 'DeviceInfo_device_fq_enc', 'D6_DT_D_std_score',
'sum_Cxx_binary_higher_than_q60',
#'D11__DeviceInfo',
'TranAmt_div_Mean_D12_DOY_productCD', 'D10_DT_M_std_score', 'uid5_D13_mean',
'mxC5',
#id_30',
'addr2_div_Mean_D4_DOY', 'uid2_D12_std', 'C11_fq_enc',
'id_06', 'uid2_D12_mean', 'sum_Cxx_binary_higher_than_q70', 'V310', 'V307',
'C6_fq_enc', 'D8_fq_enc', 'dist2_fq_enc', 'D2_intercept_bin0',
'addr1_div_Mean_D10_DOY_productCD', 'addr1_div_Mean_D10_DOY',
'addr1_div_Mean_D11_DOY', 'uid2_D8_std', 'id_02__id_20', 'V313',
'D4_intercept_bin0', 'D11_DT_D_std_score', 'Transaction_day_of_week',
'card6_div_Mean_D3_DOY', 'uid2_D1_std', 'uid5_D11_mean', 'uid_fq_enc',
'D14_DT_D_std_score', 'D12_DT_D_std_score', 'id_02_to_mean_card4',
'uid4_D13_std', 'D1_intercept_bin1', 'id_02_to_std_card1', 'uid5_D11_std',
'P_emaildomain_prefix', 'DT_day',
#'D8_DT_M_std_score',
'uid2_D1_mean',
'TransactionAmt_to_mean_card4', 'card5_div_Mean_D11_DOY',
'D15_DT_M_std_score', 'V87', 'uid_D12_std', 'id_31_device_fq_enc',
'uid2_D11_mean', 'card3_DT_W_week_day_dist_best', 'uid5_D14_std',
'uid2_D15_mean', 'sum_Cxx_binary_higher_than_q50', 'id_13',
'card3_div_Mean_D11_DOY', 'C11', 'bank_type_DT_W_week_day_dist_best',
'card4_div_Mean_D11_DOY', 'addr1_div_Mean_D1_DOY', 'uid2_D4_mean',
'card2_div_Mean_D11_DOY', 'C13_fq_enc', 'uid4_D13_mean',
'card5_DT_W_week_day_dist_best', 'id_02', 'uid5_D14_mean', 'uid2_D10_mean',
# 'id_01_count_dist',
'D13_DT_W_std_score', 'C2', 'C14',
'addr2_div_Mean_D10_DOY', 'uid2_D11_std', 'addr1_div_Mean_D1_DOY_productCD',
'id_02_to_mean_card1', 'dist1_fq_enc', 'card1_div_Mean_D11_DOY',
'D15_to_std_card1', 'TransactionAmt_DT_M_std_score', 'uid2_D6_std',
'TransactionAmt_to_std_card4', 'uid2_D15_std', 'uid3_D8_std',
'card6_div_Mean_D11_DOY', 'TranAmt_div_Mean_D14_DOY',
'card3_div_Mean_D14_DOY',
#'D2',
#'D1',
'uid_D15_mean', 'uid4_D6_std',
'uid_D15_std', 'D10_intercept_bin0', 'DeviceInfo_fq_enc', 'uid2_D13_std',
'uid_D12_mean', 'uid4_D6_mean', 'uid_D1_std', 'D1_intercept_bin2',
'uid_D10_mean', 'card2__id_20', 'uid4_D7_std', 'uid3_D13_std', 'C14_fq_enc',
'uid_D8_std', 'uid3_D13_mean', 'uid2_D4_std', 'addr1_div_Mean_D4_DOY',
'uid_D4_mean', 'D4_DT_W_std_score', 'addr2_div_Mean_D1_DOY_productCD',
'uid_D11_mean', 'D15_intercept_bin0', 'uid2_D10_std', 'uid_D13_std',
'uid2_fq_enc', 'uid2_D13_mean', 'uid2_D2_mean', 'D2_intercept',
'uid_D11_std', 'card2', 'uid4_D14_std', 'C_sum_after_clip75',
'R_emaildomain', 'dist1', 'id_05', 'uid_TransactionAmt_mean', 'uid_D1_mean',
'uid3_D1_std', 'uid5_D8_std', 'uid3_D6_std', 'Transaction_hour_of_day',
'uid4_D14_mean', 'uid5_D10_std', 'uid3_D10_std', 'uid5_D1_std',
'uid5_D15_std', 'uid2_D7_mean', 'uid3_D11_std', 'uid4_D8_std',
'D13_DT_D_std_score', 'uid3_D11_mean', 'uid2_D14_std', 'uid2_D7_std',
'uid2_D14_mean', 'uid_D13_mean', 'uid_D10_std', 'uid2_D3_std', 'uid_D6_std',
'uid3_D15_std', 'addr1_fq_enc',
#id_31',
'uid_TransactionAmt_std',
'card1_div_Mean_D4_DOY_productCD', 'uid2_TransactionAmt_mean',
'C_sum_after_clip90', 'uid2_TransactionAmt_std', 'uid4_D7_mean',
'uid2_D6_mean', 'uid3_D15_mean', 'D15_to_mean_card1', 'uid5_D15_mean', 'M4',
'uid3_D7_std', 'card2_div_Mean_D4_DOY', 'card5_div_Mean_D4_DOY_productCD',
'card5_div_Mean_D4_DOY', 'D4_intercept', 'uid_D4_std',
'card6_div_Mean_D4_DOY_productCD', 'card5__P_emaildomain', 'card1_fq_enc',
'uid5_D10_mean', 'card1_div_Mean_D4_DOY', 'C1', 'M6', 'uid2_D2_std',
'P_emaildomain_fq_enc', 'card1_TransactionAmt_mean', 'uid3_D10_mean',
'TransactionAmt_DT_W_min_max', 'uid5_D4_std',
'card1_div_Mean_D10_DOY_productCD', 'uid3_D1_mean', 'card1_div_Mean_D10_DOY',
'uid_D14_mean', 'mxC9', 'TranAmt_div_Mean_D4_DOY_productCD',
'D15_DT_W_std_score', 'DeviceInfo__P_emaildomain', 'uid3_D14_mean',
#'bank_type_DT_M',
'mxC11', 'uid5_D1_mean', 'uid_D2_mean',
'D10_DT_W_std_score',
#'card3_DT_M_month_day_dist_best',
'uid3_D2_std',
'TranAmt_div_Mean_D4_DOY', 'card1_TransactionAmt_std',
'card3_div_Mean_D4_DOY_productCD', 'D1_intercept_bin0', 'uid3_D4_std',
'card2_div_Mean_D10_DOY', 'uid_D2_std', 'uid3_D14_std', 'uid3_D4_mean',
'uid_D7_mean', 'uid5_D2_std', 'card4_div_Mean_D4_DOY_productCD',
'card6_div_Mean_D4_DOY', 'TranAmt_div_Mean_D10_DOY', 'uid2_D9_std',
'TransactionAmt_DT_W_std_score', 'C1_fq_enc', 'card1_div_Mean_D1_DOY',
'uid5_D4_mean', 'uid3_D6_mean', 'mxC14', 'uid5_D2_mean',
'card4_div_Mean_D4_DOY', 'card3_div_Mean_D4_DOY', 'uid_D14_std', 'M5', 'C13',
'mxC6', 'card5_div_Mean_D10_DOY_productCD',
# 'card3_DT_M_month_day_dist',
'card2_div_Mean_D10_DOY_productCD', 'uid_D7_std',
'card2_div_Mean_D4_DOY_productCD', 'bank_type_DT_M_month_day_dist',
'uid3_D7_mean', 'uid_D3_std', 'uid5_fq_enc', 'uid3_fq_enc', 'uid_D3_mean',
'D4_DT_D_std_score', 'uid3_D2_mean', 'uid4_D1_std', 'uid2_D5_std',
'uid4_D10_std', 'bank_type_DT_D_hour_dist_best', 'uid2_D8_mean',
'card6_div_Mean_D10_DOY_productCD', 'card1_div_Mean_D1_DOY_productCD',
'uid5_D9_std', 'card4_div_Mean_D10_DOY_productCD', 'uid2_D3_mean',
'uid_D6_mean', 'card2_div_Mean_D1_DOY', 'card5_div_Mean_D10_DOY', 'mxC2',
'card2_TransactionAmt_std', 'bank_type_DT_W_week_day_dist',
'card2_TransactionAmt_mean', 'uid4_D10_mean',
#id_31_count_dist',
'TranAmt_div_Mean_D1_DOY', 'uid3_D3_std', 'uid4_D15_std',
'card5_div_Mean_D1_DOY_productCD', 'card4_div_Mean_D10_DOY',
'card5_DT_D_hour_dist_best', 'uid4_D4_std', 'card5_DT_M_month_day_dist',
#'bank_type_DT_W',
'addr1__card1', 'bank_type_DT_M_month_day_dist_best',
'card2_div_Mean_D1_DOY_productCD', 'card6_div_Mean_D10_DOY', 'uid2_D5_mean',
'uid_DT_M', 'card2__dist1', 'uid2_D9_mean', 'card5_DT_M_month_day_dist_best',
'TranAmt_div_Mean_D10_DOY_productCD', 'uid4_D11_std', 'uid_D5_mean',
'uid5_D3_std', 'TransactionAmt_DT_D_std_score',
#'D8_DT_W_std_score',
'card5_DT_W_week_day_dist', 'uid5_D5_std', 'card3_DT_W_week_day_dist',
'uid4_D9_std', 'D10_intercept', 'uid3_D3_mean', 'uid4_D5_std', 'uid_D5_std',
'card5_div_Mean_D1_DOY', 'uid5_D3_mean', 'bank_type_DT_D', 'uid4_D1_mean',
'uid_D8_mean', 'uid3_D5_mean', 'D15_intercept', 'uid5_TransactionAmt_std',
'uid3_D5_std', 'uid4_D4_mean', 'uid4_D15_mean', 'uid5_D8_mean',
'uid5_D9_mean', 'uid_D9_std', 'uid_D9_mean', 'uid5_D5_mean', 'mtransamt',
'bank_type_DT_D_hour_dist', 'uid4_D11_mean', 'D15_DT_D_std_score',
'TransactionAmt_DT_D_min_max', 'uid4_D2_mean', 'ntrans',
'addr2_div_Mean_D1_DOY', 'uid5_TransactionAmt_mean', 'uid3_D9_std',
'TransactionAmt_Dec', 'uid3_TransactionAmt_std', 'card5_DT_D_hour_dist',
'card1', 'card4_div_Mean_D1_DOY_productCD', 'P_emaildomain__C2',
'card3_div_Mean_D10_DOY', 'uid4_D3_std', 'card3_DT_D_hour_dist_best',
'uid4_D8_mean', 'uid4_D2_std', 'card6_div_Mean_D1_DOY_productCD', 'uid_DT_W',
#'Sum_TransAmt_Day',
'uid4_D5_mean', 'card4_div_Mean_D1_DOY',
'card3_div_Mean_D10_DOY_productCD', 'uid3_D8_mean',
'TransactionAmt_userid_median', 'uid4_fq_enc', 'uid3_TransactionAmt_mean',
'uid3_D9_mean', 'card6_div_Mean_D1_DOY',
#'Trans_Count_Day',
'mxC1',
'D10_DT_D_std_score', 'card3_div_Mean_D1_DOY',
'TransactionAmt_to_mean_card1', 'card2_fq_enc', 'product_type',
'card3_div_Mean_D1_DOY_productCD', 'TransactionAmt_to_std_card1', 'uid_DT_D',
'uid4_D9_mean', 'D1_intercept', 'card3_DT_D_hour_dist',
'TranAmt_div_Mean_D1_DOY_productCD', 'product_type_DT_M', 'uid4_D3_mean',
'uid4_TransactionAmt_mean', 'uid4_TransactionAmt_std', 'D8_DT_D_std_score',
#'Mean_TransAmt_Day',
#'minDT',
'product_type_DT_W', 'mintransamt',
'maxtransamt', 'TransactionAmt_userid_std', 'P_emaildomain', 'card1__card5',
'product_type_DT_D', 'mxC13',
#'maxDT',
'id_19', 'DeviceInfo', 'id_20',
'addr1', 'userid_min_C1', 'userid_max_C1', 'userid_max_minus_min_C1',
'userid_unique_C1', 'userid_mean_C1', 'userid_min_C2', 'userid_max_C2',
'userid_max_minus_min_C2', 'userid_unique_C2', 'userid_mean_C2',
'userid_min_C3', 'userid_max_C3', 'userid_max_minus_min_C3',
'userid_unique_C3', 'userid_mean_C3', 'userid_min_C4', 'userid_max_C4',
'userid_max_minus_min_C4', 'userid_unique_C4', 'userid_mean_C4',
'userid_min_C5', 'userid_max_C5', 'userid_max_minus_min_C5',
'userid_unique_C5', 'userid_mean_C5', 'userid_min_C6', 'userid_max_C6',
'userid_max_minus_min_C6', 'userid_unique_C6', 'userid_mean_C6',
'userid_min_C7', 'userid_max_C7', 'userid_max_minus_min_C7',
'userid_unique_C7', 'userid_mean_C7', 'userid_min_C8', 'userid_max_C8',
'userid_max_minus_min_C8', 'userid_unique_C8', 'userid_mean_C8',
'userid_min_C9', 'userid_max_C9', 'userid_max_minus_min_C9',
'userid_unique_C9', 'userid_mean_C9', 'userid_min_C10', 'userid_max_C10',
'userid_max_minus_min_C10', 'userid_unique_C10', 'userid_mean_C10',
'userid_min_C11', 'userid_max_C11', 'userid_max_minus_min_C11',
'userid_unique_C11', 'userid_mean_C11', 'userid_min_C12', 'userid_max_C12',
'userid_max_minus_min_C12', 'userid_unique_C12', 'userid_mean_C12',
'userid_min_C13', 'userid_max_C13', 'userid_max_minus_min_C13',
'userid_unique_C13', 'userid_mean_C13', 'userid_min_C14', 'userid_max_C14',
'userid_max_minus_min_C14', 'userid_unique_C14', 'userid_mean_C14', 'hour',
'hour_sin',
#'week', 'week_sin', 'week_cos', 'month',
#'life_of_customer',
'addr1_broad_area', 'uid6_TransactionAmt_mean', 'uid6_TransactionAmt_std',
'hour_TransactionAmt_mean', 'hour_TransactionAmt_std',
# 'week_TransactionAmt_mean', 'week_TransactionAmt_std',
'D1_diff', 'D10_diff',
'D15_diff', 'new_identity_M5_mean', 'new_identity_M6_mean',
'new_identity_V315_mean', 'new_identity_D1_diff_mean',
'new_identity_D3_mean', 'new_identity_D10_diff_mean',
'new_identity_D15_diff_mean', 'addr1_addr2_new_identity_M5_mean_mean',
'addr1_addr2_new_identity_M5_mean_std',
'addr1_addr2_new_identity_M6_mean_mean',
'addr1_addr2_new_identity_M6_mean_std',
'addr1_addr2_new_identity_V315_mean_mean',
'addr1_addr2_new_identity_V315_mean_std',
'addr1_addr2_new_identity_D1_diff_mean_mean',
'addr1_addr2_new_identity_D1_diff_mean_std',
'addr1_addr2_new_identity_D10_diff_mean_mean',
'addr1_addr2_new_identity_D10_diff_mean_std',
'addr1_addr2_new_identity_D15_diff_mean_mean',
'addr1_addr2_new_identity_D15_diff_mean_std',
'new_identity_ProductCD_TransactionAmt_mean', 'uid6_C1_mean', 'uid6_C1_std',
'uid6_V54_mean', 'uid6_V54_std', 'uid6_V281_mean', 'uid6_V281_std',
'uid6_C11_mean', 'uid6_C11_std', 'uid6_D4_mean', 'uid6_D4_std',
'uid6_V67_mean', 'uid6_V67_std', 'uid6_V320_mean', 'uid6_V320_std',
'uid6_M5_mean', 'uid6_M5_std', 'uid6_M6_mean', 'uid6_M6_std',
'uid3_V67_mean', 'uid3_V67_std', 'uid3_V83_mean', 'uid3_V83_std',
'uid6_fq_enc', 'card4_fq_enc', 'card6_fq_enc', 'ProductCD_fq_enc',
'M4_fq_enc', 'addr_fq_enc', 'R_emaildomain_V118_mean',
'R_emaildomain_V118_std', 'R_emaildomain_V119_mean',
'R_emaildomain_V119_std', 'card1_V20_mean', 'card1_V20_std',
'card1_V151_mean', 'card1_V151_std', 'card1_V67_mean', 'card1_V67_std',
'hour_V116_mean', 'hour_V116_std']
train = train[FEATURES]
test = test[FEATURES]
train['target'] = 0
test['target'] = 1
train_test = pd.concat([train, test], axis =0)
target = train_test['target'].values
del train, test
gc.collect()
train, test = model_selection.train_test_split(train_test, test_size=0.33, random_state=529, shuffle=True)
train_y = train['target'].values
test_y = test['target'].values
del train['target'], test['target']
gc.collect()
train = lgb.Dataset(train, label=train_y)
test = lgb.Dataset(test, label=test_y)
param = {'num_leaves': 50,
'min_data_in_leaf': 30,
'objective':'binary',
'max_depth': 5,
'learning_rate': 0.2,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"bagging_seed": 44,
"metric": 'auc',
"verbosity": -1}
num_round = 500
clf = lgb.train(param,
train,
num_round,
valid_sets = [train, test],
verbose_eval=10,
early_stopping_rounds = 50)
feature_imp = pd.DataFrame(sorted(zip(clf.feature_importance(),FEATURES)), columns=['Value','Feature'])
plt.figure(figsize=(20, 10))
sns.barplot(x="Value", y="Feature", data=feature_imp.sort_values(by="Value", ascending=False).head(20))
plt.title('LightGBM Features')
plt.tight_layout()
plt.show()
#plt.savefig('lgbm_importances-01.png')
###Output
_____no_output_____
###Markdown
Make this into a loop
###Code
for i in tqdm(range(0, 300)):
top3_feats = feature_imp.sort_values('Value', ascending=False)['Feature'][:3].tolist()
print('Remove features:', top3_feats)
FEATURES = [x for x in FEATURES if x not in top3_feats]
train = pd.read_parquet('../../data/train_FE013.parquet')
test = pd.read_parquet('../../data/test_FE013.parquet')
train = train[FEATURES]
test = test[FEATURES]
train['target'] = 0
test['target'] = 1
train_test = pd.concat([train, test], axis =0)
target = train_test['target'].values
del train, test
gc.collect()
train, test = model_selection.train_test_split(train_test, test_size=0.33, random_state=529, shuffle=True)
train_y = train['target'].values
test_y = test['target'].values
del train['target'], test['target']
gc.collect()
train = lgb.Dataset(train, label=train_y)
test = lgb.Dataset(test, label=test_y)
param = {'num_leaves': 50,
'min_data_in_leaf': 30,
'objective':'binary',
'max_depth': 5,
'learning_rate': 0.2,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"bagging_seed": 44,
"metric": 'auc',
"verbosity": -1}
num_round = 500
clf = lgb.train(param,
train,
num_round,
valid_sets = [train, test],
verbose_eval=10,
early_stopping_rounds = 50)
feature_imp = pd.DataFrame(sorted(zip(clf.feature_importance(),FEATURES)), columns=['Value','Feature'])
###Output
0%| | 0/300 [00:00<?, ?it/s] |
vanishing_grad_example.ipynb | ###Markdown
Vanishing Gradients In this notebook, we will demonstrate the difference between using sigmoid and ReLU nonlinearities in a simple neural network with two hidden layers. This notebook is built off of a minimal net demo done by Andrej Karpathy for CS 231n, which you can check out here: http://cs231n.github.io/neural-networks-case-study/
###Code
# Setup
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# generate random data -- not linearly separable
np.random.seed(0)
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K, D))
num_train_examples = X.shape[0]
y = np.zeros(N*K, dtype='uint8')
for j in range(K):
ix = range(N*j, N*(j+1))
r = np.linspace(0.0,1,N) # radius
t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1,1])
plt.ylim([-1,1])
###Output
_____no_output_____
###Markdown
The sigmoid function "squashes" inputs to lie between 0 and 1. Unfortunately, this means that for inputs with sigmoid output close to 0 or 1, the gradient with respect to those inputs is close to zero. This leads to the phenomenon of vanishing gradients, where gradients drop close to zero, and the net does not learn well.On the other hand, the relu function (max(0, x)) does not saturate with input size. Plot these functions to gain intuition.
###Code
def sigmoid(x):
x = 1 / (1 + np.exp(-x))
return x
def sigmoid_grad(x):
return (x) * (1 - x)
def relu(x):
return np.maximum(0,x)
###Output
_____no_output_____
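###Markdown
Before moving on, a quick plot (a sketch, assuming the cell above has been run) makes the saturation argument concrete: the sigmoid's gradient collapses toward zero for large |x|, while ReLU keeps a constant gradient for positive inputs.
###Code
xs = np.linspace(-8, 8, 200)
sig = sigmoid(xs)
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(xs, sig, label='sigmoid(x)')
plt.plot(xs, sigmoid_grad(sig), label="sigmoid'(x)")  # sigmoid_grad expects the sigmoid *output*
plt.legend()
plt.title('sigmoid saturates')
plt.subplot(1, 2, 2)
plt.plot(xs, relu(xs), label='relu(x)')
plt.legend()
plt.title('relu does not saturate for x > 0')
###Output
_____no_output_____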
###Markdown
Let's try and see now how the two kinds of nonlinearities change deep neural net training in practice. Below, we build a very simple neural net with three layers (two hidden layers), for which you can swap out ReLU/ sigmoid nonlinearities.
###Code
#function to train a three layer neural net with either RELU or sigmoid nonlinearity via vanilla grad descent
def three_layer_net(NONLINEARITY, X, y, model, step_size, reg):
#parameter initialization
h = model['h']
h2= model['h2']
W1= model['W1']
W2= model['W2']
W3= model['W3']
b1= model['b1']
b2= model['b2']
b3= model['b3']
# some hyperparameters
# gradient descent loop
num_examples = X.shape[0]
plot_array_1=[]
plot_array_2=[]
for i in range(50000):
        #FORWARD PROP
if NONLINEARITY== 'RELU':
hidden_layer = relu(np.dot(X, W1) + b1)
hidden_layer2 = relu(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
elif NONLINEARITY == 'SIGM':
hidden_layer = sigmoid(np.dot(X, W1) + b1)
hidden_layer2 = sigmoid(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
#print(X.shape)
#print(scores.shape)
#print(np.sum(exp_scores, axis=1, keepdims=True).shape)
#print(probs.shape)
#assert False
# compute the loss: average cross-entropy loss and regularization
# v = probs[range(num_examples), y] -> 1d vector v[i] = probs[i, y[i]]]
        correct_logprobs = -np.log(probs[range(num_examples), y])
        data_loss = np.sum(correct_logprobs) / num_examples
reg_loss = 0.5*reg*np.sum(W1*W1) + 0.5*reg*np.sum(W2*W2)+ 0.5*reg*np.sum(W3*W3)
loss = data_loss + reg_loss
if i % 1000 == 0:
print("iteration %d: loss %f" % (i, loss))
# compute the gradient on scores
dscores = probs
dscores[range(num_examples), y] -= 1
dscores /= num_examples
# BACKPROP HERE
dW3 = (hidden_layer2.T).dot(dscores)
db3 = np.sum(dscores, axis=0, keepdims=True)
if NONLINEARITY == 'RELU':
#backprop ReLU nonlinearity here
dhidden2 = np.dot(dscores, W3.T)
dhidden2[hidden_layer2 <= 0] = 0
dW2 = np.dot( hidden_layer.T, dhidden2)
plot_array_2.append(np.sum(np.abs(dW2)) / np.sum(np.abs(dW2.shape)))
db2 = np.sum(dhidden2, axis=0)
dhidden = np.dot(dhidden2, W2.T)
dhidden[hidden_layer <= 0] = 0
elif NONLINEARITY == 'SIGM':
#backprop sigmoid nonlinearity here
dhidden2 = dscores.dot(W3.T)*sigmoid_grad(hidden_layer2)
dW2 = (hidden_layer.T).dot(dhidden2)
plot_array_2.append(np.sum(np.abs(dW2))/np.sum(np.abs(dW2.shape)))
db2 = np.sum(dhidden2, axis=0)
dhidden = dhidden2.dot(W2.T)*sigmoid_grad(hidden_layer)
dW1 = np.dot(X.T, dhidden)
plot_array_1.append(np.sum(np.abs(dW1))/np.sum(np.abs(dW1.shape)))
db1 = np.sum(dhidden, axis=0)
# add regularization
dW3 += reg * W3
dW2 += reg * W2
dW1 += reg * W1
#option to return loss, grads -- uncomment next comment
grads={}
grads['W1']=dW1
grads['W2']=dW2
grads['W3']=dW3
grads['b1']=db1
grads['b2']=db2
grads['b3']=db3
#return loss, grads
# update
W1 += -step_size * dW1
b1 += -step_size * db1
W2 += -step_size * dW2
b2 += -step_size * db2
W3 += -step_size * dW3
b3 += -step_size * db3
# evaluate training set accuracy
if NONLINEARITY == 'RELU':
hidden_layer = relu(np.dot(X, W1) + b1)
hidden_layer2 = relu(np.dot(hidden_layer, W2) + b2)
elif NONLINEARITY == 'SIGM':
hidden_layer = sigmoid(np.dot(X, W1) + b1)
hidden_layer2 = sigmoid(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
predicted_class = np.argmax(scores, axis=1)
print('training accuracy: %.2f' % (np.mean(predicted_class == y)))
#return cost, grads
return plot_array_1, plot_array_2, W1, W2, W3, b1, b2, b3
###Output
_____no_output_____
###Markdown
Train net with sigmoid nonlinearity first
###Code
#Initialize toy model, train sigmoid net
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
h=50
h2=50
num_train_examples = X.shape[0]
model={}
model['h'] = h # size of hidden layer 1
model['h2']= h2# size of hidden layer 2
model['W1']= 0.1 * np.random.randn(D,h)
model['b1'] = np.zeros((1,h))
model['W2'] = 0.1 * np.random.randn(h,h2)
model['b2']= np.zeros((1,h2))
model['W3'] = 0.1 * np.random.randn(h2,K)
model['b3'] = np.zeros((1,K))
(sigm_array_1, sigm_array_2, s_W1, s_W2,s_W3, s_b1, s_b2,s_b3) = three_layer_net('SIGM', X,y,model, step_size=1e-1, reg=1e-3)
###Output
iteration 0: loss 1.156405
iteration 1000: loss 1.100737
iteration 2000: loss 0.999698
iteration 3000: loss 0.855495
iteration 4000: loss 0.819427
iteration 5000: loss 0.814825
iteration 6000: loss 0.810526
iteration 7000: loss 0.805943
iteration 8000: loss 0.800688
iteration 9000: loss 0.793976
iteration 10000: loss 0.783201
iteration 11000: loss 0.759909
iteration 12000: loss 0.719792
iteration 13000: loss 0.683194
iteration 14000: loss 0.655847
iteration 15000: loss 0.634996
iteration 16000: loss 0.618527
iteration 17000: loss 0.602246
iteration 18000: loss 0.579710
iteration 19000: loss 0.546264
iteration 20000: loss 0.512831
iteration 21000: loss 0.492403
iteration 22000: loss 0.481854
iteration 23000: loss 0.475923
iteration 24000: loss 0.472031
iteration 25000: loss 0.469086
iteration 26000: loss 0.466611
iteration 27000: loss 0.464386
iteration 28000: loss 0.462306
iteration 29000: loss 0.460319
iteration 30000: loss 0.458398
iteration 31000: loss 0.456528
iteration 32000: loss 0.454697
iteration 33000: loss 0.452900
iteration 34000: loss 0.451134
iteration 35000: loss 0.449398
iteration 36000: loss 0.447699
iteration 37000: loss 0.446047
iteration 38000: loss 0.444457
iteration 39000: loss 0.442944
iteration 40000: loss 0.441523
iteration 41000: loss 0.440204
iteration 42000: loss 0.438994
iteration 43000: loss 0.437891
iteration 44000: loss 0.436891
iteration 45000: loss 0.435985
iteration 46000: loss 0.435162
iteration 47000: loss 0.434412
iteration 48000: loss 0.433725
iteration 49000: loss 0.433092
training accuracy: 0.97
###Markdown
Now train net with ReLU nonlinearity
###Code
#Re-initialize model, train relu net
model={}
model['h'] = h # size of hidden layer 1
model['h2']= h2# size of hidden layer 2
model['W1']= 0.1 * np.random.randn(D,h)
model['b1'] = np.zeros((1,h))
model['W2'] = 0.1 * np.random.randn(h,h2)
model['b2']= np.zeros((1,h2))
model['W3'] = 0.1 * np.random.randn(h2,K)
model['b3'] = np.zeros((1,K))
(relu_array_1, relu_array_2, r_W1, r_W2,r_W3, r_b1, r_b2,r_b3) = three_layer_net('RELU', X,y,model, step_size=1e-1, reg=1e-3)
###Output
iteration 0: loss 1.116188
iteration 1000: loss 0.275047
iteration 2000: loss 0.152297
iteration 3000: loss 0.136370
iteration 4000: loss 0.130853
iteration 5000: loss 0.127878
iteration 6000: loss 0.125951
iteration 7000: loss 0.124599
iteration 8000: loss 0.123502
iteration 9000: loss 0.122594
iteration 10000: loss 0.121833
iteration 11000: loss 0.121202
iteration 12000: loss 0.120650
iteration 13000: loss 0.120165
iteration 14000: loss 0.119734
iteration 15000: loss 0.119345
iteration 16000: loss 0.119000
iteration 17000: loss 0.118696
iteration 18000: loss 0.118423
iteration 19000: loss 0.118166
iteration 20000: loss 0.117932
iteration 21000: loss 0.117718
iteration 22000: loss 0.117521
iteration 23000: loss 0.117337
iteration 24000: loss 0.117168
iteration 25000: loss 0.117011
iteration 26000: loss 0.116863
iteration 27000: loss 0.116721
iteration 28000: loss 0.116574
iteration 29000: loss 0.116427
iteration 30000: loss 0.116293
iteration 31000: loss 0.116164
iteration 32000: loss 0.116032
iteration 33000: loss 0.115905
iteration 34000: loss 0.115783
iteration 35000: loss 0.115669
iteration 36000: loss 0.115560
iteration 37000: loss 0.115454
iteration 38000: loss 0.115356
iteration 39000: loss 0.115264
iteration 40000: loss 0.115177
iteration 41000: loss 0.115094
iteration 42000: loss 0.115014
iteration 43000: loss 0.114937
iteration 44000: loss 0.114861
iteration 45000: loss 0.114787
iteration 46000: loss 0.114716
iteration 47000: loss 0.114648
iteration 48000: loss 0.114583
iteration 49000: loss 0.114522
training accuracy: 0.99
###Markdown
The Vanishing Gradient Issue We can use the sum of the magnitude of gradients for the weights between hidden layers as a cheap heuristic to measure speed of learning (you can also use the magnitude of gradients for each neuron in the hidden layer here). Intuitively, when the magnitude of the gradients of the weight vectors or of each neuron is large, the net is learning faster. (NOTE: For our net, each hidden layer has the same number of neurons. If you want to play around with this, make sure to adjust the heuristic to account for the number of neurons in the layer).
###Code
plt.plot(np.array(sigm_array_1))
plt.plot(np.array(sigm_array_2))
plt.title('Sum of magnitudes of gradients -- SIGM weights')
plt.legend(("sigm first layer", "sigm second layer"))
plt.plot(np.array(relu_array_1))
plt.plot(np.array(relu_array_2))
plt.title('Sum of magnitudes of gradients -- ReLU weights')
plt.legend(("relu first layer", "relu second layer"))
# Overlaying the two plots to compare
plt.plot(np.array(relu_array_1))
plt.plot(np.array(relu_array_2))
plt.plot(np.array(sigm_array_1))
plt.plot(np.array(sigm_array_2))
plt.title('Sum of magnitudes of gradients -- hidden layer neurons')
plt.legend(("relu first layer", "relu second layer","sigm first layer", "sigm second layer"))
###Output
_____no_output_____
###Markdown
Feel free to play around with this notebook to gain intuition. Things you might want to try:- Adding additional layers to the nets and seeing how early layers continue to train slowly for the sigmoid net - Experiment with hyperparameter tuning for the nets -- changing regularization and gradient descent step size- Experiment with different nonlinearities -- Leaky ReLU, Maxout. How quickly do different layers learn now? We can see how well each classifier does in terms of distinguishing the toy data classes. As expected, since the ReLU net trains faster, for a set number of epochs it performs better compared to the sigmoid net.
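If you want to try the Leaky ReLU suggestion, a minimal sketch of the forward function and the corresponding backprop mask (to swap into the ReLU branches above) could look like this:
###Code
def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def leaky_relu_grad_mask(x, alpha=0.01):
    # multiply the upstream gradient by this mask instead of zeroing it out
    return np.where(x > 0, 1.0, alpha)
###Output
_____no_output_____
###Markdown
Back to comparing the two trained classifiers on the toy data.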
###Code
# plot the classifiers- SIGMOID
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(sigmoid(np.dot(sigmoid(np.dot(np.c_[xx.ravel(), yy.ravel()], s_W1)
+ s_b1), s_W2) + s_b2), s_W3) + s_b3
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# plot the classifiers-- RELU
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(relu(np.dot(relu(np.dot(np.c_[xx.ravel(), yy.ravel()], r_W1)
+ r_b1), r_W2) + r_b2), r_W3) + r_b3
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
###Output
_____no_output_____
###Markdown
Vanishing Gradients In this notebook, we will demonstrate the difference between using sigmoid and ReLU nonlinearities in a simple neural network with two hidden layers. This notebook is built off of a minimal net demo done by Andrej Karpathy for CS 231n, which you can check out here: http://cs231n.github.io/neural-networks-case-study/
###Code
# Setup
import numpy as np
import matplotlib.pyplot as plt
from past.builtins import xrange
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
#generate random data -- not linearly separable
np.random.seed(0)
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D))
num_train_examples = X.shape[0]
y = np.zeros(N*K, dtype='uint8')
for j in xrange(K):
ix = range(N*j,N*(j+1))
r = np.linspace(0.0,1,N) # radius
t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim([-1,1])
plt.ylim([-1,1])
###Output
_____no_output_____
###Markdown
The sigmoid function "squashes" inputs to lie between 0 and 1. Unfortunately, this means that for inputs with sigmoid output close to 0 or 1, the gradient with respect to those inputs is close to zero. This leads to the phenomenon of vanishing gradients, where gradients drop close to zero, and the net does not learn well.On the other hand, the relu function (max(0, x)) does not saturate with input size. Plot these functions to gain intuition.
###Code
def sigmoid(x):
x = 1/(1+np.exp(-x))
return x
def sigmoid_grad(x):
return (x)*(1-x)
def relu(x):
return np.maximum(0,x)
###Output
_____no_output_____
###Markdown
Let's try and see now how the two kinds of nonlinearities change deep neural net training in practice. Below, we build a very simple neural net with three layers (two hidden layers), for which you can swap out ReLU/ sigmoid nonlinearities.
###Code
#function to train a three layer neural net with either RELU or sigmoid nonlinearity via vanilla grad descent
def three_layer_net(NONLINEARITY,X,y, model, step_size, reg):
#parameter initialization
h= model['h']
h2= model['h2']
W1= model['W1']
W2= model['W2']
W3= model['W3']
b1= model['b1']
b2= model['b2']
b3= model['b3']
# some hyperparameters
# gradient descent loop
num_examples = X.shape[0]
plot_array_1=[]
plot_array_2=[]
for i in xrange(50000):
        #FORWARD PROP
if NONLINEARITY== 'RELU':
hidden_layer = relu(np.dot(X, W1) + b1)
hidden_layer2 = relu(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
elif NONLINEARITY == 'SIGM':
hidden_layer = sigmoid(np.dot(X, W1) + b1)
hidden_layer2 = sigmoid(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
exp_scores = np.exp(scores)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
# compute the loss: average cross-entropy loss and regularization
        correct_logprobs = -np.log(probs[range(num_examples),y])
        data_loss = np.sum(correct_logprobs)/num_examples
reg_loss = 0.5*reg*np.sum(W1*W1) + 0.5*reg*np.sum(W2*W2)+ 0.5*reg*np.sum(W3*W3)
loss = data_loss + reg_loss
if i % 1000 == 0:
print ("iteration %d: loss %f" % (i, loss))
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
# BACKPROP HERE
dW3 = (hidden_layer2.T).dot(dscores)
db3 = np.sum(dscores, axis=0, keepdims=True)
if NONLINEARITY == 'RELU':
#backprop ReLU nonlinearity here
dhidden2 = np.dot(dscores, W3.T)
dhidden2[hidden_layer2 <= 0] = 0
dW2 = np.dot( hidden_layer.T, dhidden2)
plot_array_2.append(np.sum(np.abs(dW2))/np.sum(np.abs(dW2.shape)))
db2 = np.sum(dhidden2, axis=0)
dhidden = np.dot(dhidden2, W2.T)
dhidden[hidden_layer <= 0] = 0
elif NONLINEARITY == 'SIGM':
#backprop sigmoid nonlinearity here
dhidden2 = dscores.dot(W3.T)*sigmoid_grad(hidden_layer2)
dW2 = (hidden_layer.T).dot(dhidden2)
plot_array_2.append(np.sum(np.abs(dW2))/np.sum(np.abs(dW2.shape)))
db2 = np.sum(dhidden2, axis=0)
dhidden = dhidden2.dot(W2.T)*sigmoid_grad(hidden_layer)
dW1 = np.dot(X.T, dhidden)
plot_array_1.append(np.sum(np.abs(dW1))/np.sum(np.abs(dW1.shape)))
db1 = np.sum(dhidden, axis=0)
# add regularization
dW3+= reg * W3
dW2 += reg * W2
dW1 += reg * W1
#option to return loss, grads -- uncomment next comment
grads={}
grads['W1']=dW1
grads['W2']=dW2
grads['W3']=dW3
grads['b1']=db1
grads['b2']=db2
grads['b3']=db3
#return loss, grads
# update
W1 += -step_size * dW1
b1 += -step_size * db1
W2 += -step_size * dW2
b2 += -step_size * db2
W3 += -step_size * dW3
b3 += -step_size * db3
# evaluate training set accuracy
if NONLINEARITY == 'RELU':
hidden_layer = relu(np.dot(X, W1) + b1)
hidden_layer2 = relu(np.dot(hidden_layer, W2) + b2)
elif NONLINEARITY == 'SIGM':
hidden_layer = sigmoid(np.dot(X, W1) + b1)
hidden_layer2 = sigmoid(np.dot(hidden_layer, W2) + b2)
scores = np.dot(hidden_layer2, W3) + b3
predicted_class = np.argmax(scores, axis=1)
print ('training accuracy: %.2f' % (np.mean(predicted_class == y)))
#return cost, grads
return plot_array_1, plot_array_2, W1, W2, W3, b1, b2, b3
###Output
_____no_output_____
###Markdown
Train net with sigmoid nonlinearity first
###Code
#Initialize toy model, train sigmoid net
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
h=50
h2=50
num_train_examples = X.shape[0]
model={}
model['h'] = h # size of hidden layer 1
model['h2']= h2# size of hidden layer 2
model['W1']= 0.1 * np.random.randn(D,h)
model['b1'] = np.zeros((1,h))
model['W2'] = 0.1 * np.random.randn(h,h2)
model['b2']= np.zeros((1,h2))
model['W3'] = 0.1 * np.random.randn(h2,K)
model['b3'] = np.zeros((1,K))
(sigm_array_1, sigm_array_2, s_W1, s_W2,s_W3, s_b1, s_b2,s_b3) = three_layer_net('SIGM', X,y,model, step_size=1e-1, reg=1e-3)
###Output
iteration 0: loss 1.113935
iteration 1000: loss 1.095640
iteration 2000: loss 0.937046
iteration 3000: loss 0.842175
iteration 4000: loss 0.819177
iteration 5000: loss 0.815092
iteration 6000: loss 0.811192
iteration 7000: loss 0.806774
iteration 8000: loss 0.801156
iteration 9000: loss 0.792839
iteration 10000: loss 0.776796
iteration 11000: loss 0.734320
iteration 12000: loss 0.653909
iteration 13000: loss 0.586465
iteration 14000: loss 0.545581
iteration 15000: loss 0.519612
iteration 16000: loss 0.501965
iteration 17000: loss 0.489176
iteration 18000: loss 0.479472
iteration 19000: loss 0.472165
iteration 20000: loss 0.466755
iteration 21000: loss 0.462640
iteration 22000: loss 0.459332
iteration 23000: loss 0.456526
iteration 24000: loss 0.454039
iteration 25000: loss 0.451761
iteration 26000: loss 0.449632
iteration 27000: loss 0.447616
iteration 28000: loss 0.445697
iteration 29000: loss 0.443872
iteration 30000: loss 0.442148
iteration 31000: loss 0.440552
iteration 32000: loss 0.439117
iteration 33000: loss 0.437857
iteration 34000: loss 0.436762
iteration 35000: loss 0.435813
iteration 36000: loss 0.434990
iteration 37000: loss 0.434276
iteration 38000: loss 0.433656
iteration 39000: loss 0.433113
iteration 40000: loss 0.432633
iteration 41000: loss 0.432203
iteration 42000: loss 0.431815
iteration 43000: loss 0.431461
iteration 44000: loss 0.431136
iteration 45000: loss 0.430834
iteration 46000: loss 0.430552
iteration 47000: loss 0.430288
iteration 48000: loss 0.430038
iteration 49000: loss 0.429800
training accuracy: 0.97
###Markdown
Now train net with ReLU nonlinearity
###Code
#Re-initialize model, train relu net
model={}
model['h'] = h # size of hidden layer 1
model['h2']= h2# size of hidden layer 2
model['W1']= 0.1 * np.random.randn(D,h)
model['b1'] = np.zeros((1,h))
model['W2'] = 0.1 * np.random.randn(h,h2)
model['b2']= np.zeros((1,h2))
model['W3'] = 0.1 * np.random.randn(h2,K)
model['b3'] = np.zeros((1,K))
(relu_array_1, relu_array_2, r_W1, r_W2,r_W3, r_b1, r_b2,r_b3) = three_layer_net('RELU', X,y,model, step_size=1e-1, reg=1e-3)
###Output
iteration 0: loss 1.108852
iteration 1000: loss 0.294098
iteration 2000: loss 0.154213
iteration 3000: loss 0.137443
iteration 4000: loss 0.131893
iteration 5000: loss 0.129002
iteration 6000: loss 0.126939
iteration 7000: loss 0.125329
iteration 8000: loss 0.124004
iteration 9000: loss 0.122933
iteration 10000: loss 0.122061
iteration 11000: loss 0.121325
iteration 12000: loss 0.120712
iteration 13000: loss 0.120180
iteration 14000: loss 0.119721
iteration 15000: loss 0.119317
iteration 16000: loss 0.118950
iteration 17000: loss 0.118623
iteration 18000: loss 0.118329
iteration 19000: loss 0.118064
iteration 20000: loss 0.117822
iteration 21000: loss 0.117599
iteration 22000: loss 0.117389
iteration 23000: loss 0.117185
iteration 24000: loss 0.116932
iteration 25000: loss 0.116696
iteration 26000: loss 0.116483
iteration 27000: loss 0.116247
iteration 28000: loss 0.116017
iteration 29000: loss 0.115807
iteration 30000: loss 0.115613
iteration 31000: loss 0.115427
iteration 32000: loss 0.115245
iteration 33000: loss 0.115068
iteration 34000: loss 0.114904
iteration 35000: loss 0.114738
iteration 36000: loss 0.114584
iteration 37000: loss 0.114437
iteration 38000: loss 0.114285
iteration 39000: loss 0.114137
iteration 40000: loss 0.113997
iteration 41000: loss 0.113859
iteration 42000: loss 0.113718
iteration 43000: loss 0.113565
iteration 44000: loss 0.113406
iteration 45000: loss 0.113261
iteration 46000: loss 0.113121
iteration 47000: loss 0.112987
iteration 48000: loss 0.112858
iteration 49000: loss 0.112733
training accuracy: 0.99
###Markdown
The Vanishing Gradient Issue We can use the sum of the magnitude of gradients for the weights between hidden layers as a cheap heuristic to measure speed of learning (you can also use the magnitude of gradients for each neuron in the hidden layer here). Intuitively, when the magnitude of the gradients of the weight vectors or of each neuron is large, the net is learning faster. (NOTE: For our net, each hidden layer has the same number of neurons. If you want to play around with this, make sure to adjust the heuristic to account for the number of neurons in the layer).
###Code
plt.plot(np.array(sigm_array_1))
plt.plot(np.array(sigm_array_2))
plt.title('Sum of magnitudes of gradients -- SIGM weights')
plt.legend(("sigm first layer", "sigm second layer"))
plt.plot(np.array(relu_array_1))
plt.plot(np.array(relu_array_2))
plt.title('Sum of magnitudes of gradients -- ReLU weights')
plt.legend(("relu first layer", "relu second layer"))
# Overlaying the two plots to compare
plt.plot(np.array(relu_array_1))
plt.plot(np.array(relu_array_2))
plt.plot(np.array(sigm_array_1))
plt.plot(np.array(sigm_array_2))
plt.title('Sum of magnitudes of gradients -- hidden layer neurons')
plt.legend(("relu first layer", "relu second layer","sigm first layer", "sigm second layer"))
###Output
_____no_output_____
###Markdown
Feel free to play around with this notebook to gain intuition. Things you might want to try:- Adding additional layers to the nets and seeing how early layers continue to train slowly for the sigmoid net - Experiment with hyperparameter tuning for the nets -- changing regularization and gradient descent step size- Experiment with different nonlinearities -- Leaky ReLU, Maxout. How quickly do different layers learn now? We can see how well each classifier does in terms of distinguishing the toy data classes. As expected, since the ReLU net trains faster, for a set number of epochs it performs better compared to the sigmoid net.
###Code
# plot the classifiers- SIGMOID
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(sigmoid(np.dot(sigmoid(np.dot(np.c_[xx.ravel(), yy.ravel()], s_W1) + s_b1), s_W2) + s_b2), s_W3) + s_b3
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# plot the classifiers-- RELU
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = np.dot(relu(np.dot(relu(np.dot(np.c_[xx.ravel(), yy.ravel()], r_W1) + r_b1), r_W2) + r_b2), r_W3) + r_b3
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
###Output
_____no_output_____ |
section-04-research-and-development/titanic-assignment/03-titanic-survival-pipeline-assignment_leo.ipynb | ###Markdown
Predicting Survival on the Titanic HistoryPerhaps one of the most infamous shipwrecks in history, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 people on board. Interestingly, by analysing the probability of survival based on a few attributes like gender, age, and social status, we can make very accurate predictions about which passengers would survive. Some groups of people were more likely to survive than others, such as women, children, and the upper class. Therefore, we can learn about society's priorities and privileges at the time. Assignment:Build a Machine Learning Pipeline to engineer the features in the data set and predict who is more likely to survive the catastrophe.Follow the Jupyter notebook below, and complete the missing bits of code, to achieve each one of the pipeline steps.
###Code
import re
# to handle datasets
import pandas as pd
import numpy as np
# for visualization
import matplotlib.pyplot as plt
# to divide train and test set
from sklearn.model_selection import train_test_split
# feature scaling
from sklearn.preprocessing import StandardScaler
# to build the models
from sklearn.linear_model import LogisticRegression
# to evaluate the models
from sklearn.metrics import accuracy_score, roc_auc_score
# to persist the model and the scaler
import joblib
# ========== NEW IMPORTS ========
# Respect to notebook 02-Predicting-Survival-Titanic-Solution
# pipeline
from sklearn.pipeline import Pipeline
# for the preprocessors
from sklearn.base import BaseEstimator, TransformerMixin
# for imputation
from feature_engine.imputation import (
CategoricalImputer,
AddMissingIndicator,
MeanMedianImputer)
# for encoding categorical variables
from feature_engine.encoding import (
RareLabelEncoder,
OneHotEncoder
)
###Output
_____no_output_____
###Markdown
Prepare the data set
###Code
# load the data - it is available open source and online
data = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl')
# display data
data.head()
# replace interrogation marks by NaN values
data = data.replace('?', np.nan)
# retain only the first cabin if more than
# 1 are available per passenger
def get_first_cabin(row):
try:
return row.split()[0]
except:
return np.nan
data['cabin'] = data['cabin'].apply(get_first_cabin)
# extracts the title (Mr, Ms, etc) from the name variable
def get_title(passenger):
line = passenger
if re.search('Mrs', line):
return 'Mrs'
elif re.search('Mr', line):
return 'Mr'
elif re.search('Miss', line):
return 'Miss'
elif re.search('Master', line):
return 'Master'
else:
return 'Other'
data['title'] = data['name'].apply(get_title)
# cast numerical variables as floats
data['fare'] = data['fare'].astype('float')
data['age'] = data['age'].astype('float')
# drop unnecessary variables
data.drop(labels=['name','ticket', 'boat', 'body','home.dest'], axis=1, inplace=True)
# display data
data.head()
# # save the data set
# data.to_csv('titanic.csv', index=False)
###Output
_____no_output_____
###Markdown
Begin Assignment Configuration
###Code
# list of variables to be used in the pipeline's transformers
NUMERICAL_VARIABLES = ['age', 'fare']
CATEGORICAL_VARIABLES = ['sex', 'cabin', 'embarked', 'title']
CABIN = ['cabin']
###Output
_____no_output_____
###Markdown
Separate data into train and test
###Code
X_train, X_test, y_train, y_test = train_test_split(
data.drop('survived', axis=1), # predictors
data['survived'], # target
test_size=0.2, # percentage of obs in test set
random_state=0) # seed to ensure reproducibility
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Preprocessors Class to extract the letter from the variable Cabin
###Code
class ExtractLetterTransformer(BaseEstimator, TransformerMixin):
    # Extract the first letter of the variable
def __init__(self, variables):
if not isinstance(variables, list):
raise ValueError('variables should be a list')
self.variables = variables
def fit(self, X, y=None):
# we need this step to fit the sklearn pipeline
return self
def transform(self, X):
# so that we do not over-write the original dataframe
X = X.copy()
for feature in self.variables:
X[feature] = X[feature].str[0]
return X
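# quick illustrative check (an addition, not part of the original assignment):
# the transformer maps e.g. 'C22' -> 'C' and leaves NaN cabins untouched
_cabin_demo = ExtractLetterTransformer(variables=CABIN).fit_transform(X_train)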
###Output
_____no_output_____
###Markdown
Pipeline- Impute categorical variables with string missing- Add a binary missing indicator to numerical variables with missing data- Fill NA in original numerical variable with the median- Extract first letter from cabin- Group rare Categories- Perform One hot encoding- Scale features with standard scaler- Fit a Logistic regression
###Code
# set up the pipeline
titanic_pipe = Pipeline([
# ===== IMPUTATION =====
# impute categorical variables with string missing
('categorical_imputation', CategoricalImputer(
imputation_method='missing', variables=CATEGORICAL_VARIABLES)),
# add missing indicator to numerical variables
('missing_indicator', AddMissingIndicator(variables=NUMERICAL_VARIABLES)),
# impute numerical variables with the median
('median_imputation', MeanMedianImputer(
imputation_method='median', variables=NUMERICAL_VARIABLES)),
# Extract letter from cabin
('extract_letter', ExtractLetterTransformer(variables=CABIN)),
# == CATEGORICAL ENCODING ======
# remove categories present in less than 5% of the observations (0.05)
# group them in one category called 'Rare'
('rare_label_encoder', RareLabelEncoder(
tol=0.05, n_categories=1, variables=CATEGORICAL_VARIABLES)),
# encode categorical variables using one hot encoding into k-1 variables
('categorical_encoder', OneHotEncoder(
drop_last=True, variables=CATEGORICAL_VARIABLES)),
# scale
('scaler', StandardScaler()),
('Logit', LogisticRegression(C=0.0005, random_state=0)),
])
# train the pipeline
titanic_pipe.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Make predictions and evaluate model performanceDetermine:- roc-auc- accuracy**Important, remember that to determine the accuracy, you need the outcome 0, 1, referring to survived or not. But to determine the roc-auc you need the probability of survival.**
###Code
# make predictions for train set
class_ = titanic_pipe.predict(X_train)
pred = titanic_pipe.predict_proba(X_train)[:,1]
# determine roc-auc and accuracy on the train set
print('train roc-auc: {}'.format(roc_auc_score(y_train, pred)))
print('train accuracy: {}'.format(accuracy_score(y_train, class_)))
print()
# make predictions for test set
class_ = titanic_pipe.predict(X_test)
pred = titanic_pipe.predict_proba(X_test)[:,1]
# determine roc-auc and accuracy on the test set
print('test roc-auc: {}'.format(roc_auc_score(y_test, pred)))
print('test accuracy: {}'.format(accuracy_score(y_test, class_)))
print()
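# ===== persist the fitted pipeline (illustrative sketch) =====
# joblib was imported above for persistence; the filename is an arbitrary
# choice added here for illustration, not part of the original assignment
joblib.dump(titanic_pipe, 'titanic_pipe.joblib')
# reloading returns the full pipeline, ready to score new passengers
reloaded_pipe = joblib.load('titanic_pipe.joblib')
assert (reloaded_pipe.predict(X_test) == class_).all()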
###Output
train roc-auc: 0.8450386398763523
train accuracy: 0.7220630372492837
test roc-auc: 0.8354629629629629
test accuracy: 0.7137404580152672
|
JHMDB/jhmdb_openpose.ipynb | ###Markdown
Initialize the setting
###Code
# os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
# os.environ["CUDA_VISIBLE_DEVICES"]="0"
random.seed(123)
# directory that contains pickle files
data_dir = os.path.join(os.path.abspath(''), '..', 'data', 'openpose_zeros_all')
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def data_generator(T, C, le):
"""
Generate X (list of arrays) and Y (array) from a dict
"""
X = T['pose'] # list of arrays
Y = np.zeros(shape=(len(T['label']), C.clc_num)) # 2D array one-hot encoding of labels
Y[range(Y.shape[0]), le.transform(T['label'])] = 1
return X, Y
# helper functions for plotting
# history is a history object from keras
def plot_accuracy(history):
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
def plot_loss(history):
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Load and Preprocess Data
###Code
Train = pickle.load(open(os.path.join(data_dir, "GT_train_1.pkl"), "rb"))
Test = pickle.load(open(os.path.join(data_dir, "GT_test_1.pkl"), "rb"))
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(Train['label'])
C = ddnet.DDNetConfig(frame_length=32, num_joints=25, joint_dim=2, num_classes=21, num_filters=32)
X, Y = data_generator(Train,C,le)
X_test,Y_test = data_generator(Test,C,le)
print(len(X), X[0].shape, Y.shape)
print(len(X_test), X_test[0].shape, Y_test.shape)
###Output
497 (40, 25, 2) (497, 21)
195 (40, 25, 2) (195, 21)
###Markdown
Convert Invisible Joints with `nan`
###Code
def make_nan(p, copy=True):
"""
Convert 0 values to np.nan
"""
assert isinstance(p, np.ndarray)
q = p.copy() if copy else p
q[q == 0] = np.nan
return q
def has_nan(p):
assert isinstance(p, np.ndarray)
return np.isnan(p).any()
def count_nan(p):
assert isinstance(p, np.ndarray)
return np.isnan(p).sum()
X_nan = list(map(make_nan, X))
X_test_nan = list(map(make_nan, X_test))
print("Video without any nan: {} out of {}".format(len([p for p in X_nan if not has_nan(p)]), len(X_nan)))
print("nan entries in X_nan: {} out of {}".format(sum(map(count_nan, X_nan)), sum([p.size for p in X_nan])))
###Output
Video without any nan: 4 out of 497
nan entries in X_nan: 270780 out of 923200
###Markdown
Preprocessing* Select a subset of frequently-detected joints* Temporally interpolate* Fill the remaining missing values (the notebook experiments with mean, uniform and normal filling)
###Code
def find_top_joints(X_nan, top=15):
"""
Find the indices of the `top` most frequently-detected joints """
count_nan_per_joint = np.array([sum([count_nan(p[:, j, :]) for p in X_nan]) for j in range(X_nan[0].shape[1])])
print(count_nan_per_joint)
# print(np.sort(count_nan_per_joint))
good_joint_idx = np.argsort(count_nan_per_joint)[:top]
return good_joint_idx
good_joint_idx = find_top_joints(X_nan)
print("Good joint indices:", sorted(good_joint_idx.tolist()))
# note: the most frequently visible joints are not the same for train and test
test_good_joint_idx = find_top_joints(X_test_nan)
print("Good joint indices of test set: ", sorted(test_good_joint_idx.tolist()))
HAND_PICKED_GOOD_JOINTS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 16]
good_joint_idx = HAND_PICKED_GOOD_JOINTS
def filter_joints(p, good_joint_idx):
"""
Filter a point by only keeping joints in good_joint_idx
"""
return p[:, good_joint_idx, :]
X_topj = [filter_joints(p, good_joint_idx) for p in X_nan]
X_test_topj = [filter_joints(p, good_joint_idx) for p in X_test_nan]
print("Video with nan before/after selecting top joints: {} / {}".format(
sum(map(has_nan, X_nan)),
sum(map(has_nan, X_topj))
))
print("nan entries in before/after selecting top joints: {} / {}. Total {}".format(
sum(map(count_nan, X_nan)),
sum(map(count_nan, X_topj)),
sum([p.size for p in X_topj])
))
def nan_helper(y):
"""Helper function to handle real indices and logical indices of NaNs.
Input:
- y, 1d numpy array with possible NaNs
Output:
- nans, logical indices of NaNs
- index, a function, with signature indices= index(logical_indices),
to convert logical indices of NaNs to 'equivalent' indices
Example:
>>> # linear interpolation of NaNs
>>> nans, x= nan_helper(y)
>>> y[nans]= np.interp(x(nans), x(~nans), y[~nans])
"""
return np.isnan(y), lambda z: z.nonzero()[0]
def temporal_interp(p):
"""
If a joint is detected in at least one frame in a video,
we interpolate the nan coordinates from other frames.
This is done independently for each joint.
Note: it can still leave some all-nan columns if a joint is never detected in any frame.
"""
p = p.copy()
for j in range(p.shape[1]): # joint
for coord in range(p.shape[2]): # x, y (,z)
view = p[:, j, coord]
if np.isnan(view).all() or not np.isnan(view).any():
continue
nans, idx = nan_helper(view)
view[nans]= np.interp(idx(nans), idx(~nans), view[~nans])
return p
X_interp = list(map(temporal_interp, X_topj))
X_test_interp = list(map(temporal_interp, X_test_topj))
print("Video with nan before/after temporal interp: {} / {}".format(
sum(map(has_nan, X_topj)),
sum(map(has_nan, X_interp))
))
print("nan entries in before/after temporal interp: {} / {}".format(
sum(map(count_nan, X_topj)),
sum(map(count_nan, X_interp))
))
def per_video_normalize(p, copy=True):
"""
For x,y[, z] independently:
Normalize into between -0.5~0.5
"""
q = p.copy() if copy else p
for coord in range(p.shape[2]):
view = q[:, :, coord]
a, b = np.nanmin(view), np.nanmax(view)
# q[:,:, coord] = (view - np.mean(view)) / np.std(view)
view[:] = ((view - a) / (b-a)) - 0.5
return q
X_norm = [per_video_normalize(p) for p in X_interp]
X_test_norm = [per_video_normalize(p) for p in X_test_interp]
# print(X_norm[0][:,:,1])
# print(X_test_norm[0][:,1,1])
# fill in the remaining nans
# def per_frame_fill_mean(p, copy=True):
# """
# For each frame independently:
# for x, y[, z] independently:
# Fill nan entries with the mean of all other joints' coordinates
# This is defnitely not perfect, but may help.
# """
# q = p.copy() if copy else p
# for f in range(q.shape[0]):
# for coord in range(q.shape[2]): # x,y
# view = q[f, :, coord]
# view[np.isnan(view)] = np.nanmean(view)
# return q
def fill_nan_random(p, copy=True, sigma=.5):
"""
Fill nan values with normal distribution
"""
q = p.copy() if copy else p
q[np.isnan(q)] = np.random.randn(np.count_nonzero(np.isnan(q))) * sigma
return q
def fill_nan_uniform(p, copy=True, a=-0.5, b=0.5):
"""
    Fill nan values with a uniform distribution on [a, b]
"""
q = p.copy() if copy else p
q[np.isnan(q)] = np.random.random((np.count_nonzero(np.isnan(q)),)) * (b-a) + a
return q
def fill_nan_bottom(p, copy=True):
q = p.copy() if copy else p
xview = q[:,:,0]
xview[np.isnan(xview)] = 0.
yview = q[:,:,1]
yview[np.isnan(yview)] = 0.5
return q
# def fill_nan_col_random(p, copy=True, sigma=.1)
# """
# Fill each nan column with the same value drawn from normal distribution
# """
# q = p.copy() if copy else p
# for j in range(q.shape[1]):
# for coord in range(q.shape[2]):
# view = q[:, j, coord]
# if np.all(np.isnan(view)):
# view[:] = np.random.randn(1) * sigma
# return q
def augment_nan(X, Y, num=5):
"""
Data augmentation.
X is a list of arrays.
"""
Xa = []
Ya = []
for p1, y1 in zip(X, Y):
Xa.extend([fill_nan_uniform(p1) for _ in range(num)])
Ya.extend([y1] * num)
Ya = np.stack(Ya)
assert len(Xa) == Ya.shape[0]
return Xa, Ya
X_aug, Y_aug = augment_nan(X_norm, Y)
print(len(X_aug), Y_aug.shape)
# X_fillnan = [fill_nan_random(p, sigma=0.5) for p in X_norm]
X_test_fillnan = [fill_nan_random(p, sigma=0.0) for p in X_test_norm]
print(X_norm[0][0], "\n\n", X_aug[0][0], "\n\n", X_aug[1][1])
# assert not any (map(has_nan, X_fillnan))
# X_fillnan[0][:,:,1]
X_input, Y_input = X_aug, Y_aug
X_test_input, Y_test_input = X_test_fillnan, Y_test
print(X_input[0].shape)
###Output
(40, 15, 2)
###Markdown
DDNet's preprocess and config
###Code
# redefine config with new # of joints
C = ddnet.DDNetConfig(frame_length=32, num_joints=len(good_joint_idx), joint_dim=2, num_classes=21, num_filters=32)
X_0, X_1 = ddnet.preprocess_batch(X_input, C)
X_test_0, X_test_1 = ddnet.preprocess_batch(X_test_input, C)
###Output
_____no_output_____
###Markdown
Building the model
###Code
DD_Net = ddnet.create_DDNet(C)
DD_Net.summary()
###Output
Model: "model_12"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
M (InputLayer) (None, 32, 105) 0
__________________________________________________________________________________________________
P (InputLayer) (None, 32, 15, 2) 0
__________________________________________________________________________________________________
model_11 (Model) (None, 4, 256) 436544 M[0][0]
P[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_6 (GlobalM (None, 256) 0 model_11[1][0]
__________________________________________________________________________________________________
dense_16 (Dense) (None, 128) 32768 global_max_pooling1d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_101 (BatchN (None, 128) 512 dense_16[0][0]
__________________________________________________________________________________________________
leaky_re_lu_101 (LeakyReLU) (None, 128) 0 batch_normalization_101[0][0]
__________________________________________________________________________________________________
dropout_11 (Dropout) (None, 128) 0 leaky_re_lu_101[0][0]
__________________________________________________________________________________________________
dense_17 (Dense) (None, 128) 16384 dropout_11[0][0]
__________________________________________________________________________________________________
batch_normalization_102 (BatchN (None, 128) 512 dense_17[0][0]
__________________________________________________________________________________________________
leaky_re_lu_102 (LeakyReLU) (None, 128) 0 batch_normalization_102[0][0]
__________________________________________________________________________________________________
dropout_12 (Dropout) (None, 128) 0 leaky_re_lu_102[0][0]
__________________________________________________________________________________________________
dense_18 (Dense) (None, 21) 2709 dropout_12[0][0]
==================================================================================================
Total params: 489,429
Trainable params: 486,357
Non-trainable params: 3,072
__________________________________________________________________________________________________
###Markdown
Train, Test and Save/Load the Model Train and plot loss/accuracy
###Code
import keras
from keras import backend as K
from keras.optimizers import *
# K.set_session(tf.Session(config=tf.ConfigProto(intra_op_parallelism_threads=32, inter_op_parallelism_threads=16)))
lr = 1e-3
DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy'])
lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=1e-5)
history1 = DD_Net.fit([X_0,X_1],Y_input,
batch_size=len(Y_input),
epochs=800,
verbose=True,
shuffle=True,
callbacks=[lrScheduler],
validation_data=([X_test_0,X_test_1],Y_test_input)
)
lr = 1e-4
DD_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy'])
lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6)
history2 = DD_Net.fit([X_0,X_1],Y_input,
batch_size=len(Y_input),
epochs=600,
verbose=True,
shuffle=True,
callbacks=[lrScheduler],
validation_data=([X_test_0,X_test_1],Y_test_input)
)
%matplotlib inline
# the first 800 epochs
plot_accuracy(history1)
plot_loss(history1)
# the next 600 epochs
plot_accuracy(history2)
plot_loss(history2)
###Output
_____no_output_____
###Markdown
Plot confusion matrix
###Code
Y_test_pred = DD_Net.predict([X_test_0, X_test_1])
Y_test_pred_cls = np.argmax(Y_test_pred, axis=1)
Y_test_cls = np.argmax(Y_test, axis=1)
Y_test_cls[:10], Y_test_pred_cls[:10]
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
normalize= None # 'true'
cm = confusion_matrix(Y_test_cls, Y_test_pred_cls, normalize=normalize)
# print(cm)
# print(np.sum(np.diagonal(cm)) / np.sum(cm)) # accuracy
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=le.classes_)
fig, ax = plt.subplots(figsize=(10,10))
disp.plot(xticks_rotation=90, ax=ax)
###Output
_____no_output_____
###Markdown
Save/Load Model
###Code
model_path = 'jhmdb_lite_model.h5'
ddnet.save_DDNet(DD_Net, model_path)
# Load the model back from disk
new_net = ddnet.load_DDNet(model_path)
# Evaluate against test set, you should get the same accuracy
new_net.evaluate([X_test_0,X_test_1],Y_test)
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
import tensorflow as tf  # needed for the session / device checks below
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
##########################
## Imputation
##########################
from sklearn.preprocessing import MinMaxScaler
def per_frame_normalized(p):
# mean and std is calculated within a frame, across all joints
# separately for x,y
mean = np.nanmean(p, axis=(1,))
std = np.nanstd(p, axis=(1,))
return (p - np.expand_dims(mean, 1)) / np.expand_dims(std, 1)
def per_video_normalize(p):
    # normalize x and y separately
mean = np.nanmean(p, axis=(0,1))
std = np.nanstd(p, axis=(0,1))
return (p - mean) / std
all_frames_normalized = np.concatenate(list(map(per_frame_normalized, X_interp)))  # needed by the imputer fit below
# print(all_frames_normalized.shape)
# print(all_frames_normalized[0])
# print(count_nan(all_frames_normalized))
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer, SimpleImputer
imp = IterativeImputer(max_iter=10, random_state=0, initial_strategy='mean', verbose=1)
# imp = SimpleImputer(missing_values=np.nan, strategy='mean')
all_frames_normalized_flat_imputed = imp.fit_transform(all_frames_normalized.reshape((all_frames_normalized.shape[0], -1)))
print(all_frames_normalized_flat_imputed[0])
def impute(p, imp):
# per-frame normalize
mean = np.nanmean(p, axis=(1,))
std = np.nanstd(p, axis=(1,))
p_normalized = (p - np.expand_dims(mean, 1)) / np.expand_dims(std, 1)
# impute
q = p_normalized.reshape((p_normalized.shape[0], -1))
q = imp.transform(q)
q = q.reshape(p.shape)
print(q.shape)
# per-frame de-normalize
return (q * np.expand_dims(std, 1) ) + np.expand_dims(mean, 1)
# def per_frame_impute(p, imp):
# q = np.empty_like(p)
# for i, frame in enumerate(p):
# scaler = MinMaxScaler()
# frame_scaled = scaler.fit_transform(frame)
# f_flat = frame_scaled.reshape((1, -1))
# f_flat_imputed = imp.transform(f_flat)
# f_imputed = f_flat_imputed.reshape(frame.shape)
# frame_imputed = scaler.inverse_transform(f_imputed)
# q[i] = frame_imputed
# return q
print(impute(X_interp[0], imp)[0])
X_imputed = [per_video_normalize(impute(p, imp)) for p in X_interp]
X_test_imputed = [per_video_normalize(impute(p, imp)) for p in X_test_interp]
###Output
_____no_output_____ |
_posts/python-v3/3d/3d-camera-controls/3d-camera-controls.ipynb | ###Markdown
New to Plotly?Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).You can set up Plotly to work in [online](https://plotly.com/python/getting-started/initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
###Output
_____no_output_____
###Markdown
Import Data
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
# Read data from a csv
z_data = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/api_docs/mt_bruno_elevation.csv')
data = [
go.Surface(
z=z_data.as_matrix()
)
]
layout = go.Layout(
title='Mt Bruno Elevation',
autosize=False,
width=600,
height=600,
margin=dict(
l=65,
r=50,
b=65,
t=90
)
)
fig = go.Figure(data=data, layout=layout)
###Output
_____no_output_____
###Markdown
Default ParamsThe camera position is determined by three vectors: *up*, *center*, *eye*.The up vector determines the up direction on the page. The default is $(x=0, y=0, z=1)$, that is, the z-axis points up.The center vector determines the translation about the center of the scene. By default, there is no translation: the center vector is $(x=0, y=0, z=0)$.The eye vector determines the camera view point about the origin. The default is $(x=1.25, y=1.25, z=1.25)$.
###Code
name = 'default'
camera = dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=1.25, y=1.25, z=1.25)
)
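# apply the (default) camera explicitly so this cell mirrors the later examples;
# this line is an addition for clarity and does not change the rendered view
fig['layout'].update(scene=dict(camera=camera))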
py.iplot(fig, validate=False, filename=name)
###Output
_____no_output_____
###Markdown
Lower the View Point
###Code
name = 'eye = (x:2, y:2, z:0.1)'
camera = dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=2, y=2, z=0.1)
)
fig['layout'].update(
scene=dict(camera=camera),
title=name
)
py.iplot(fig, validate=False, filename=name)
###Output
_____no_output_____
###Markdown
X-Z plane
###Code
name = 'eye = (x:0.1, y:2.5, z:0.1)'
camera = dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=0.1, y=2.5, z=0.1)
)
fig['layout'].update(
scene=dict(camera=camera),
title=name
)
py.iplot(fig, validate=False, filename=name)
###Output
_____no_output_____
###Markdown
Y-Z plane
###Code
name = 'eye = (x:2.5, y:0.1, z:0.1)'
camera = dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=2.5, y=0.1, z=0.1)
)
fig['layout'].update(
scene=dict(camera=camera),
title=name
)
py.iplot(fig, validate=False, filename=name)
###Output
_____no_output_____
###Markdown
View from Above
###Code
name = 'eye = (x:0.1, y:0.1, z:2.5)'
camera = dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=0.1, y=0.1, z=2.5)
)
fig['layout'].update(
scene=dict(camera=camera),
title=name
)
py.iplot(fig, validate=False, filename=name)
###Output
_____no_output_____
###Markdown
Zooming In... by reducing the norm of the eye vector.
###Code
name = 'eye = (x:0.1, y:0.1, z:1)'
camera = dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=0.1, y=0.1, z=1)
)
fig['layout'].update(
scene=dict(camera=camera),
title=name
)
py.iplot(fig, validate=False, filename=name)
###Output
_____no_output_____
###Markdown
Reference See https://plotly.com/python/reference/layout-scene-camera for more information and chart attribute options!
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'3d-camera-controls.ipynb', 'python/3d-camera-controls/', 'Python 3D Camera Controls | plotly',
'How to Control the Camera in your 3D Charts in Python with Plotly.',
title= 'Python 3D Camera Controls | plotly',
name = '3D Camera Controls',
has_thumbnail='true', thumbnail='thumbnail/3d-camera-controls.jpg',
language='python',
display_as='3d_charts', order=0.108,
ipynb= '~notebook_demo/78')
###Output
_____no_output_____ |
tutorials/pruning/basic.ipynb | ###Markdown
Network Pruning Network pruning is a commonly-used technique to speed up your model during inference. We will talk about this topic in this tutorial. Basic concept As we all know, the majority of the runtime is attributed to general matrix multiply (a.k.a. GEMM) operations. So the question naturally arises: can we speed these operations up by reducing the number of elements in the matrices? By setting the weights, biases and the corresponding input and output items to 0, we can then just skip those calculations. There are generally two kinds of pruning: structured pruning and unstructured pruning. In structured pruning, weight connections are removed in groups, e.g. an entire channel is deleted. This changes the input and output shapes of layers as well as the weight matrices, so nearly every system can benefit from it. Unstructured pruning, on the other hand, removes individual weight connections from a network by setting them to 0, so it is highly dependent on the inference backend. Currently, only structured pruning is supported in TinyNeuralNetwork. How is structured pruning implemented in DNN frameworks?
```py
model = Net(pretrained=True)
sparsity = 0.5
masks = {None: None}

def register_masks(layer):
    parent_layer = get_parent(layer)
    input_mask = masks[parent_layer]
    if is_passthrough_layer(layer):
        output_mask = input_mask
    else:
        output_mask = get_mask(layer, sparsity)
    register_mask(layer, input_mask, output_mask)
    masks[layer] = output_mask

model.apply(register_masks)
model.fit(train_data)

def apply_masks(layer):
    parent_layer = get_parent(layer)
    input_mask = masks[parent_layer]
    output_mask = masks[layer]
    apply_mask(layer, input_mask, output_mask)

model.apply(apply_masks)
```
Network Pruning in TinyNeuralNetwork The problem in the previous code example is that only one parent layer is expected. But in some recent DNN models, there are a few complicated operations like `cat`, `add` and `split`. We need to resolve the dependencies of those operations as well. To solve the aforementioned problem, first we go through some basic definitions. When the input shape and output shape of a node are not related during pruning, it is called a node with isolation. For example, the `conv`, `linear` and `lstm` nodes are nodes with isolation. We want to find a group of nodes, which is called a subgraph, that starts with and ends with nodes with isolation and doesn't contain another subgraph inside it. We use the nodes with isolation for finding the candidate subgraphs in the model.
```py
def find_subgraph(layer, input_modify, output_modify, nodes):
    if layer in nodes:
        return None
    nodes.append(layer)
    if is_layer_with_isolation(layer):
        if input_modify:
            for prev_layer in get_parent(layer):
                return find_subgraph(prev_layer, False, True, nodes)
        if output_modify:
            for next_layer in get_child(layer):
                return find_subgraph(next_layer, True, False, nodes)
    else:
        for prev_layer in get_parent(layer):
            return find_subgraph(prev_layer, input_modify, output_modify, nodes)
        for next_layer in get_child(layer):
            return find_subgraph(next_layer, input_modify, output_modify, nodes)

candidate_subgraphs = []

def construct_candidate_subgraphs(layer):
    if is_layer_with_isolation(layer):
        nodes = []
        find_subgraph(layer, True, False, nodes)
        candidate_subgraphs.append(nodes)
        nodes = []
        find_subgraph(layer, False, True, nodes)
        candidate_subgraphs.append(nodes)

model.apply(construct_candidate_subgraphs)
```
With all candidate subgraphs in hand, the next step is to remove the duplicated and invalid ones. 
Due to space limitations, we will not cover this step in detail. When we get the final subgraphs, the first node in each subgraph is called the center node. During configuration, we use the name of the center node to represent the subgraph it belongs to. Some properties can be set at the subgraph level by the user, like sparsity. Although we have the subgraphs, the mapping of channels between nodes is still unknown, so we need to resolve the channel dependency. Similarly, we pass the channel information recursively so as to get the correct mapping at each node. It may be a bit more complicated since each node has its own logic for sharing the channel mapping. Operations like `add` require a shared mapping across all the input and output tensors, while `cat` allows the inputs to have independent mappings; however, the output mapping and the combined input mapping are shared. As this is too detailed, we will not expand on it. After resolving the channel dependency, we follow the ordinary pruning process, that is, register the masks of the weight and bias tensors and then finetune the model. When the training process is finished, it is time to apply the masks, so that the model actually gets smaller. Alternatively, you may apply the masks just after registering them if the masks won't change during training; as a result, the training process will be significantly faster. That is the whole story for pruning. Using the pruner in TinyNeuralNetwork It is really simple to use the pruner in our framework. You can use the code below.
###Code
import sys
sys.path.append('../..')
import torch
import torchvision
from tinynn.prune.oneshot_pruner import OneShotChannelPruner
model = torchvision.models.mobilenet_v2(pretrained=True)
model.train()
dummy_input = torch.randn(1, 3, 224, 224)
pruner = OneShotChannelPruner(model, dummy_input, config={'sparsity': 0.25, 'metrics': 'l2_norm'})
st_flops = pruner.calc_flops()
pruner.prune()
ed_flops = pruner.calc_flops()
print(f"Pruning over, reduced FLOPS {100 * (st_flops - ed_flops) / st_flops:.2f}% ({st_flops} -> {ed_flops})")
# You should start finetuning the model here
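# A minimal finetuning sketch is given below, left commented out so this cell's
# recorded output stays unchanged. It assumes a standard PyTorch `train_loader`
# (DataLoader) and a classification loss; `num_epochs` is a placeholder you choose.
#
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
# criterion = torch.nn.CrossEntropyLoss()
# model.train()
# for epoch in range(num_epochs):
#     for images, labels in train_loader:
#         optimizer.zero_grad()
#         loss = criterion(model(images), labels)
#         loss.backward()
#         optimizer.step()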
###Output
INFO (tinynn.graph.modifier) [CONV] features_0_0: output 32 -> 24
INFO (tinynn.graph.modifier) [BN] features_0_1: channel 32 -> 24
INFO (tinynn.graph.modifier) [DW_CONV] features_1_conv_0_0: input 32 -> 24
INFO (tinynn.graph.modifier) [BN] features_1_conv_0_1: channel 32 -> 24
INFO (tinynn.graph.modifier) [CONV] features_1_conv_1: input 32 -> 24
INFO (tinynn.graph.modifier) [CONV] features_1_conv_1: output 16 -> 12
INFO (tinynn.graph.modifier) [BN] features_1_conv_2: channel 16 -> 12
INFO (tinynn.graph.modifier) [CONV] features_2_conv_0_0: input 16 -> 12
INFO (tinynn.graph.modifier) [CONV] features_2_conv_0_0: output 96 -> 72
INFO (tinynn.graph.modifier) [BN] features_2_conv_0_1: channel 96 -> 72
INFO (tinynn.graph.modifier) [DW_CONV] features_2_conv_1_0: input 96 -> 72
INFO (tinynn.graph.modifier) [BN] features_2_conv_1_1: channel 96 -> 72
INFO (tinynn.graph.modifier) [CONV] features_2_conv_2: input 96 -> 72
INFO (tinynn.graph.modifier) [CONV] features_2_conv_2: output 24 -> 18
INFO (tinynn.graph.modifier) [CONV] features_3_conv_0_0: input 24 -> 18
INFO (tinynn.graph.modifier) [CONV] features_3_conv_0_0: output 144 -> 108
INFO (tinynn.graph.modifier) [BN] features_3_conv_0_1: channel 144 -> 108
INFO (tinynn.graph.modifier) [DW_CONV] features_3_conv_1_0: input 144 -> 108
INFO (tinynn.graph.modifier) [BN] features_3_conv_1_1: channel 144 -> 108
INFO (tinynn.graph.modifier) [CONV] features_3_conv_2: input 144 -> 108
INFO (tinynn.graph.modifier) [CONV] features_3_conv_2: output 24 -> 18
INFO (tinynn.graph.modifier) [BN] features_2_conv_3: channel 24 -> 18
INFO (tinynn.graph.modifier) [BN] features_3_conv_3: channel 24 -> 18
INFO (tinynn.graph.modifier) [CONV] features_4_conv_0_0: input 24 -> 18
INFO (tinynn.graph.modifier) [CONV] features_4_conv_0_0: output 144 -> 108
INFO (tinynn.graph.modifier) [BN] features_4_conv_0_1: channel 144 -> 108
INFO (tinynn.graph.modifier) [DW_CONV] features_4_conv_1_0: input 144 -> 108
INFO (tinynn.graph.modifier) [BN] features_4_conv_1_1: channel 144 -> 108
INFO (tinynn.graph.modifier) [CONV] features_4_conv_2: input 144 -> 108
INFO (tinynn.graph.modifier) [CONV] features_4_conv_2: output 32 -> 24
INFO (tinynn.graph.modifier) [CONV] features_5_conv_0_0: input 32 -> 24
INFO (tinynn.graph.modifier) [CONV] features_5_conv_0_0: output 192 -> 144
INFO (tinynn.graph.modifier) [BN] features_5_conv_0_1: channel 192 -> 144
INFO (tinynn.graph.modifier) [DW_CONV] features_5_conv_1_0: input 192 -> 144
INFO (tinynn.graph.modifier) [BN] features_5_conv_1_1: channel 192 -> 144
INFO (tinynn.graph.modifier) [CONV] features_5_conv_2: input 192 -> 144
INFO (tinynn.graph.modifier) [CONV] features_5_conv_2: output 32 -> 24
INFO (tinynn.graph.modifier) [CONV] features_6_conv_0_0: input 32 -> 24
INFO (tinynn.graph.modifier) [CONV] features_6_conv_0_0: output 192 -> 144
INFO (tinynn.graph.modifier) [BN] features_6_conv_0_1: channel 192 -> 144
INFO (tinynn.graph.modifier) [DW_CONV] features_6_conv_1_0: input 192 -> 144
INFO (tinynn.graph.modifier) [BN] features_6_conv_1_1: channel 192 -> 144
INFO (tinynn.graph.modifier) [CONV] features_6_conv_2: input 192 -> 144
INFO (tinynn.graph.modifier) [CONV] features_6_conv_2: output 32 -> 24
INFO (tinynn.graph.modifier) [BN] features_4_conv_3: channel 32 -> 24
INFO (tinynn.graph.modifier) [BN] features_5_conv_3: channel 32 -> 24
INFO (tinynn.graph.modifier) [BN] features_6_conv_3: channel 32 -> 24
INFO (tinynn.graph.modifier) [CONV] features_7_conv_0_0: input 32 -> 24
INFO (tinynn.graph.modifier) [CONV] features_7_conv_0_0: output 192 -> 144
INFO (tinynn.graph.modifier) [BN] features_7_conv_0_1: channel 192 -> 144
INFO (tinynn.graph.modifier) [DW_CONV] features_7_conv_1_0: input 192 -> 144
INFO (tinynn.graph.modifier) [BN] features_7_conv_1_1: channel 192 -> 144
INFO (tinynn.graph.modifier) [CONV] features_7_conv_2: input 192 -> 144
INFO (tinynn.graph.modifier) [CONV] features_7_conv_2: output 64 -> 48
INFO (tinynn.graph.modifier) [CONV] features_8_conv_0_0: input 64 -> 48
INFO (tinynn.graph.modifier) [CONV] features_8_conv_0_0: output 384 -> 288
INFO (tinynn.graph.modifier) [BN] features_8_conv_0_1: channel 384 -> 288
INFO (tinynn.graph.modifier) [DW_CONV] features_8_conv_1_0: input 384 -> 288
INFO (tinynn.graph.modifier) [BN] features_8_conv_1_1: channel 384 -> 288
INFO (tinynn.graph.modifier) [CONV] features_8_conv_2: input 384 -> 288
INFO (tinynn.graph.modifier) [CONV] features_8_conv_2: output 64 -> 48
INFO (tinynn.graph.modifier) [CONV] features_9_conv_0_0: input 64 -> 48
INFO (tinynn.graph.modifier) [CONV] features_9_conv_0_0: output 384 -> 288
INFO (tinynn.graph.modifier) [BN] features_9_conv_0_1: channel 384 -> 288
INFO (tinynn.graph.modifier) [DW_CONV] features_9_conv_1_0: input 384 -> 288
INFO (tinynn.graph.modifier) [BN] features_9_conv_1_1: channel 384 -> 288
INFO (tinynn.graph.modifier) [CONV] features_9_conv_2: input 384 -> 288
INFO (tinynn.graph.modifier) [CONV] features_9_conv_2: output 64 -> 48
INFO (tinynn.graph.modifier) [CONV] features_10_conv_0_0: input 64 -> 48
INFO (tinynn.graph.modifier) [CONV] features_10_conv_0_0: output 384 -> 288
INFO (tinynn.graph.modifier) [BN] features_10_conv_0_1: channel 384 -> 288
INFO (tinynn.graph.modifier) [DW_CONV] features_10_conv_1_0: input 384 -> 288
INFO (tinynn.graph.modifier) [BN] features_10_conv_1_1: channel 384 -> 288
INFO (tinynn.graph.modifier) [CONV] features_10_conv_2: input 384 -> 288
INFO (tinynn.graph.modifier) [CONV] features_10_conv_2: output 64 -> 48
INFO (tinynn.graph.modifier) [BN] features_7_conv_3: channel 64 -> 48
INFO (tinynn.graph.modifier) [BN] features_8_conv_3: channel 64 -> 48
INFO (tinynn.graph.modifier) [BN] features_9_conv_3: channel 64 -> 48
INFO (tinynn.graph.modifier) [BN] features_10_conv_3: channel 64 -> 48
INFO (tinynn.graph.modifier) [CONV] features_11_conv_0_0: input 64 -> 48
INFO (tinynn.graph.modifier) [CONV] features_11_conv_0_0: output 384 -> 288
INFO (tinynn.graph.modifier) [BN] features_11_conv_0_1: channel 384 -> 288
INFO (tinynn.graph.modifier) [DW_CONV] features_11_conv_1_0: input 384 -> 288
INFO (tinynn.graph.modifier) [BN] features_11_conv_1_1: channel 384 -> 288
INFO (tinynn.graph.modifier) [CONV] features_11_conv_2: input 384 -> 288
INFO (tinynn.graph.modifier) [CONV] features_11_conv_2: output 96 -> 72
INFO (tinynn.graph.modifier) [CONV] features_12_conv_0_0: input 96 -> 72
INFO (tinynn.graph.modifier) [CONV] features_12_conv_0_0: output 576 -> 432
INFO (tinynn.graph.modifier) [BN] features_12_conv_0_1: channel 576 -> 432
INFO (tinynn.graph.modifier) [DW_CONV] features_12_conv_1_0: input 576 -> 432
INFO (tinynn.graph.modifier) [BN] features_12_conv_1_1: channel 576 -> 432
INFO (tinynn.graph.modifier) [CONV] features_12_conv_2: input 576 -> 432
INFO (tinynn.graph.modifier) [CONV] features_12_conv_2: output 96 -> 72
INFO (tinynn.graph.modifier) [CONV] features_13_conv_0_0: input 96 -> 72
INFO (tinynn.graph.modifier) [CONV] features_13_conv_0_0: output 576 -> 432
INFO (tinynn.graph.modifier) [BN] features_13_conv_0_1: channel 576 -> 432
INFO (tinynn.graph.modifier) [DW_CONV] features_13_conv_1_0: input 576 -> 432
INFO (tinynn.graph.modifier) [BN] features_13_conv_1_1: channel 576 -> 432
INFO (tinynn.graph.modifier) [CONV] features_13_conv_2: input 576 -> 432
INFO (tinynn.graph.modifier) [CONV] features_13_conv_2: output 96 -> 72
INFO (tinynn.graph.modifier) [BN] features_11_conv_3: channel 96 -> 72
INFO (tinynn.graph.modifier) [BN] features_12_conv_3: channel 96 -> 72
INFO (tinynn.graph.modifier) [BN] features_13_conv_3: channel 96 -> 72
INFO (tinynn.graph.modifier) [CONV] features_14_conv_0_0: input 96 -> 72
INFO (tinynn.graph.modifier) [CONV] features_14_conv_0_0: output 576 -> 432
INFO (tinynn.graph.modifier) [BN] features_14_conv_0_1: channel 576 -> 432
INFO (tinynn.graph.modifier) [DW_CONV] features_14_conv_1_0: input 576 -> 432
INFO (tinynn.graph.modifier) [BN] features_14_conv_1_1: channel 576 -> 432
INFO (tinynn.graph.modifier) [CONV] features_14_conv_2: input 576 -> 432
INFO (tinynn.graph.modifier) [CONV] features_14_conv_2: output 160 -> 120
INFO (tinynn.graph.modifier) [CONV] features_15_conv_0_0: input 160 -> 120
INFO (tinynn.graph.modifier) [CONV] features_15_conv_0_0: output 960 -> 720
INFO (tinynn.graph.modifier) [BN] features_15_conv_0_1: channel 960 -> 720
INFO (tinynn.graph.modifier) [DW_CONV] features_15_conv_1_0: input 960 -> 720
INFO (tinynn.graph.modifier) [BN] features_15_conv_1_1: channel 960 -> 720
INFO (tinynn.graph.modifier) [CONV] features_15_conv_2: input 960 -> 720
INFO (tinynn.graph.modifier) [CONV] features_15_conv_2: output 160 -> 120
INFO (tinynn.graph.modifier) [CONV] features_16_conv_0_0: input 160 -> 120
INFO (tinynn.graph.modifier) [CONV] features_16_conv_0_0: output 960 -> 720
INFO (tinynn.graph.modifier) [BN] features_16_conv_0_1: channel 960 -> 720
INFO (tinynn.graph.modifier) [DW_CONV] features_16_conv_1_0: input 960 -> 720
INFO (tinynn.graph.modifier) [BN] features_16_conv_1_1: channel 960 -> 720
INFO (tinynn.graph.modifier) [CONV] features_16_conv_2: input 960 -> 720
INFO (tinynn.graph.modifier) [CONV] features_16_conv_2: output 160 -> 120
INFO (tinynn.graph.modifier) [BN] features_14_conv_3: channel 160 -> 120
INFO (tinynn.graph.modifier) [BN] features_15_conv_3: channel 160 -> 120
INFO (tinynn.graph.modifier) [BN] features_16_conv_3: channel 160 -> 120
INFO (tinynn.graph.modifier) [CONV] features_17_conv_0_0: input 160 -> 120
INFO (tinynn.graph.modifier) [CONV] features_17_conv_0_0: output 960 -> 720
INFO (tinynn.graph.modifier) [BN] features_17_conv_0_1: channel 960 -> 720
INFO (tinynn.graph.modifier) [DW_CONV] features_17_conv_1_0: input 960 -> 720
INFO (tinynn.graph.modifier) [BN] features_17_conv_1_1: channel 960 -> 720
INFO (tinynn.graph.modifier) [CONV] features_17_conv_2: input 960 -> 720
INFO (tinynn.graph.modifier) [CONV] features_17_conv_2: output 320 -> 240
INFO (tinynn.graph.modifier) [BN] features_17_conv_3: channel 320 -> 240
INFO (tinynn.graph.modifier) [CONV] features_18_0: input 320 -> 240
INFO (tinynn.graph.modifier) [CONV] features_18_0: output 1280 -> 960
INFO (tinynn.graph.modifier) [BN] features_18_1: channel 1280 -> 960
INFO (tinynn.graph.modifier) [FC] classifier_1: input 1280 -> 960
|
I. The Graph Data Structures.ipynb | ###Markdown
The NetworkX Module NetworkX is a python module. To start exploring NetworkX we simply need to start a python session (Like the IPython session you are in now!), and type
###Code
import networkx
###Output
_____no_output_____
###Markdown
All of NetworkX's data structures and functions can then be accessed using the syntax `networkx.[Object]`, where `[Object]` is the function or data structure you need. Of course you would replace `[Object]` with the function you wanted. For example to make a graph, we'd write:
###Code
G = networkx.Graph()
###Output
_____no_output_____
###Markdown
Usually to save ourselves some keystrokes, we'll import NetworkX using a shorter variable name
###Code
import networkx as nx
###Output
_____no_output_____
###Markdown
Basic Graph Data Structures One of the main strengths of NetworkX is its flexible graph data structures. There are four data structures - `Graph`: Undirected Graphs - `DiGraph`: Directed Graphs - `MultiGraph`: Undirected multigraphs, ie graphs which allow for multiple edges between nodes - `MultiDiGraph`: Directed Multigraphs Each of these has the same basic structure, attributes and features, with a few minor differences. Creating Graphs Creating Graphs is as simple as calling the appropriate constructor.
###Code
G = nx.Graph()
D = nx.DiGraph()
M = nx.MultiGraph()
MD = nx.MultiDiGraph()
###Output
_____no_output_____
###Markdown
You can also add attributes to a graph during creation, either by providing a dictionary, or simply using keyword arguments
###Code
G = nx.Graph(DateCreated='2015-01-10',name="Terry")
G.graph
###Output
_____no_output_____
###Markdown
The graph attribute is just a dictionary and can be treated as one, so you can add and delete more information from it.
###Code
G.graph['Current']=False
del G.graph['name']
G.graph
###Output
_____no_output_____
###Markdown
Nodes Next we'll cover how to add and remove nodes, as well as check for their existence in a graph and add attributes to both! Adding Nodes There are two main functions for adding nodes: `add_node` and `add_nodes_from`. The former takes single values, and the latter takes any iterable (list, set, iterator, generator). Nodes can be of any _immutable_ type. This means numbers (ints, floats, complex), strings, bytes, tuples or frozen sets. They cannot be of a _mutable_ type, such as lists, dictionaries or sets. Nodes in the same graph do not have to be of the same type.
###Code
# Adding single nodes of various types
G.add_node(0)
G.add_node('A')
G.add_node(('x',1.2))
# Adding collections of nodes
G.add_nodes_from([2,4,6,8,10])
G.add_nodes_from(set([10+(3*i)%5 for i in range(10,50)]))
###Output
_____no_output_____
###Markdown
Listing Nodes Accessing nodes is done using the `nodes` function which is a member of the `Graph` object.
###Code
G.nodes()
###Output
_____no_output_____
###Markdown
Sometimes, to save memory, we might want to access the nodes one at a time instead of building a full list, so we can use an _iterator_. These are especially useful in long-running loops.
###Code
for n in G.nodes_iter():
if type(n)== str:
print(n + ' is a string!')
else:
print(str(n) + " is not a string!")
###Output
_____no_output_____
###Markdown
In the future more functions of NetworkX will exclusively use iterators to save memory and be more Python 3-like... Checking whether nodes are in a Graph We can also check to see if a graph has a node in several different ways. The easiest is just using the `in` keyword in Python, but there is also the `has_node` function.
###Code
13 in G
9 in G
G.has_node(13)
G.has_node(9)
###Output
_____no_output_____
###Markdown
Node attributes You can also add attributes to nodes. This can be handy for storing information about nodes within the graph object. This can be done when you create new nodes using keyword arguments to the `add_node` and `add_nodes_from` function
###Code
G.add_node('Spam',company='Hormel',food='meat')
###Output
_____no_output_____
###Markdown
When using `add_nodes_from` you provide a tuple with the first element being the node, and the second being a dictionary of attributes for that node. You can also add attributes which will be applied to all added nodes using keyword arguments
###Code
G.add_nodes_from([('Bologna',{'company':'Oscar Meyer'}),
('Bacon',{'company':'Wright'}),
('Sausage',{'company':'Jimmy Dean'})],food='meat')
###Output
_____no_output_____
###Markdown
To list node attributes you need to provide the `data=True` keyword to the `nodes` and `nodes_iter` functions
###Code
G.nodes(data=True)
###Output
_____no_output_____
###Markdown
Attributes are stored in a special dictionary within the graph called `node`; you can access, edit and remove attributes there
###Code
G.node['Spam']
G.node['Spam']['Delicious'] = True
G.node[6]['integer'] = True
G.nodes(data=True)
del G.node[6]['integer']
G.nodes(data=True)
###Output
_____no_output_____
###Markdown
Similarly, you can remove nodes with the `remove_node` and `remove_nodes_from` functions
###Code
G.remove_node(14)
G.remove_nodes_from([10,11,12,13])
G.nodes()
###Output
_____no_output_____
###Markdown
Exercises Repeated Nodes 1. What happens when you add nodes to a graph that already exist?2. What happens when you add nodes to the graph that already exist but have new attributes?3. What happens when you add nodes to a graph with attributes different from existing nodes?4. Try removing a node that doesn't exist, what happens? The FizzBuzz Graph Using the spaces provided below make a new graph, `FizzBuzz`. Add nodes labeled 0 to 100 to the graph. Each node should have an attribute 'fizz' and 'buzz'. If the node's label is divisible by 3 `fizz=True`, if it is divisible by 5 `buzz=True`, otherwise both are false. (One possible solution sketch is shown after the edge examples below.) Edges Adding edges is similar to adding nodes. They can be added using either `add_edge` or `add_edges_from`. They can also have attributes in the same way nodes can. If you add an edge that includes a node that doesn't exist, it will create that node for you.
###Code
G.add_edge('Bacon','Sausage',breakfast=True)
G.add_edge('Ham','Bacon',breakfast=True)
G.add_edge('Spam','Eggs',breakfast=True)
###Output
_____no_output_____
###Markdown
Here we are using a list comprehension. This is an easy way to construct lists using a single line. Learn more about list comprehensions [here](https://docs.python.org/2/tutorial/datastructures.htmllist-comprehensions).
###Code
G.add_edges_from([(i,i+2) for i in range(2,8,2)])
G.edges()
G.edges(data=True)
###Output
_____no_output_____
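###Markdown
 One possible sketch for the FizzBuzz graph exercise above (an illustrative addition, not part of the original notebook; other constructions are equally valid).
###Code
FizzBuzz = nx.Graph()
# each node carries the 'fizz'/'buzz' attributes described in the exercise
FizzBuzz.add_nodes_from([(i, {'fizz': i % 3 == 0, 'buzz': i % 5 == 0}) for i in range(101)])
# FizzBuzz.nodes(data=True)[:5]  # uncomment to peek at a few nodes
###Output
_____no_output_____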
###Markdown
Removing edges is accomplished by using the `remove_edge` or `remove_edges_from` function. Removing edge attributes can be done by indexing into the graph
###Code
G['Spam']['Eggs']
del G['Spam']['Eggs']['breakfast']
G.remove_edge(2,4)
G.edges(data=True)
###Output
_____no_output_____
###Markdown
You can check for the existence of edges with `has_edge`
###Code
G.has_edge(2,4)
G.has_edge('Ham','Bacon')
###Output
_____no_output_____
###Markdown
For directed graphs, ordering matters. `add_edge(u,v)` will add an edge from `u` to `v`
###Code
D.add_nodes_from(range(10))
D.add_edges_from([(i,i+1 % 10) for i in range(0,10)])
D.edges()
D.has_edge(0,1)
D.has_edge(1,0)
###Output
_____no_output_____
###Markdown
You can also access edges for only a subset of nodes by passing edges a collection of nodes
###Code
D.edges([3,4,5])
###Output
_____no_output_____
###Markdown
Exercises For the `FizzBuzz` graph above, add edges between two nodes `u` and `v` if they are both divisible by 2 or by 7. Each edge should include attributes `div2` and `div7` which are true if `u` and `v` are divisible by 2 and 7 respectively. Exclude self loops. Multigraphs Multigraphs can have multiple edges between any two nodes. They are referenced by a key.
###Code
M.add_edge(0,1)
M.add_edge(0,1)
M.edges()
###Output
_____no_output_____
###Markdown
The keys of the edges can be accessed by using the keyword `keys=True`. This will give a tuple of `(u,v,k)`, with the edge being `u` and `v` and the key being `k`.
###Code
M.edges(keys=True)
###Output
_____no_output_____
###Markdown
`MultiGraphs` and `MultiDiGraphs` are similar to `Graphs` and `DiGraphs` in most respects. Adding Graph Motifs In addition to adding nodes and edges one at a time, networkx has some convenient functions for adding complete subgraphs. But beware, these may be removed, or the API changed, in the future.
###Code
G.add_cycle(range(100,110))
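# other motif helpers work the same way (illustrative additions, not in the original notebook):
G.add_star(range(110, 115))   # node 110 connected to nodes 111-114
G.add_path(range(115, 120))   # a simple path 115-116-117-118-119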
G.edges()
###Output
_____no_output_____
###Markdown
Basic Graph Properties Basic graph properties are functions which are members of the `Graph` class itself. We'll explore different metrics in part III. Node and Edge Counts The _order_ of a graph is the number of nodes; it can be accessed by calling `G.order()` or using the builtin length function: `len(G)`.
###Code
G.order()
len(G)
###Output
_____no_output_____
###Markdown
The number of edges is usually referred to as the _size_ of the graph, and can be accessed by `G.size()`. You could also find out by calling `len(G.edges())`, but this is much slower.
###Code
G.size()
###Output
_____no_output_____
###Markdown
For multigraphs it counts the number of edges including multiplicity
###Code
M.size()
###Output
_____no_output_____
###Markdown
Node Neighbors Node neighbors can be accessed via the `neighbors` function.
###Code
G.neighbors('Bacon')
###Output
_____no_output_____
###Markdown
In the case of directed graphs, neighbors are only those originating at the node.
###Code
D.add_edges_from([(0,i) for i in range(5,10)])
D.neighbors(0)
###Output
_____no_output_____
###Markdown
For multigraphs, neighbors are only reported once.
###Code
M.neighbors(0)
###Output
_____no_output_____
###Markdown
Degree The degree of a graph can be found using the `degree` function for undirected graphs, and `in_degree` and `out_degree` for directed graphs. They return a dictionary with the nodes as the keys and the degrees as the values.
###Code
G.degree()
D.in_degree()
D.out_degree()
###Output
_____no_output_____
###Markdown
Both of these can be called on a single node or a subset of nodes if not all degrees are needed
###Code
D.in_degree(5)
D.out_degree([0,1,2])
###Output
_____no_output_____
###Markdown
You can also calculate weighted degree. To do this, each edge has to have a specific attribute to be used as a weight.
###Code
WG = nx.Graph()
WG.add_star(range(5))
WG.add_star(range(5,10))
WG.add_edges_from([(i,2*i %10) for i in range(10)])
for (u,v) in WG.edges_iter():
WG[u][v]['product'] = (u+1)*(v+1)
WG.degree(weight='product')
###Output
_____no_output_____
###Markdown
Exercises Create A Classroom Graph Let's make a network of the people in this room. First, create a graph called `C`. Everyone state their name (one at a time) and where they are from. Add nodes to the graph representing each individual, with an attribute denoting where they are from. Add edges to the graph between an individual and their closest three classmates. Have each edge have an attribute that indicates whether there was a previous relationship between the two. If none existed have `relationship=None`, if it does exist have the relationship stated, e.g. `relationship='Cousin-in-law'` How many nodes are in the Graph? How many Edges? What is the degree of the graph? Quickly Saving a Graph In the next section we'll learn more about saving and loading graphs, as well as operations on graphs, but for now just run the code below.
###Code
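# `C` is the classroom graph from the exercise above. So that this cell also runs
# outside the classroom, we build a tiny placeholder graph here; the names and
# attribute values below are invented purely for illustration.
import os
os.makedirs('./data', exist_ok=True)
C = nx.Graph()
C.add_nodes_from([('Alice', {'hometown': 'Springfield'}),
                  ('Bob', {'hometown': 'Shelbyville'})])
C.add_edge('Alice', 'Bob', relationship=None)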
nx.write_gpickle(C,'./data/Classroom.pickle')
###Output
_____no_output_____ |
assignments/assignment3.ipynb | ###Markdown
Assignment 3 Welcome to the third programming assignment for the course. This assignment will help familiarise you with Boolean function oracles while revisiting the topics discussed in this week's lectures. Submission Guidelines For final submission, and to ensure that you have no errors in your solution, please use the 'Restart and Run All' option available in the Kernel menu at the top of the page. To submit your solution, run the completed notebook and attach the solved notebook (with results visible) as a .ipynb file using the 'Add or Create' option under the 'Your Work' heading on the assignment page in Google Classroom.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, execute
from qiskit.providers.aer import QasmSimulator
from qiskit.visualization import *
from qiskit.quantum_info import *
basis_gates = ['id', 'x', 'y', 'z', 's', 't', 'sdg', 'tdg', 'h', 'p', 'sx' ,'r', 'rx', 'ry', 'rz', 'u', 'u1', 'u2', 'u3', 'cx', 'ccx', 'barrier', 'measure', 'snapshot']
###Output
_____no_output_____
###Markdown
A quantum oracle implementation of the classical OR operation We've already seen that the Toffoli gate implements the quantum version of the classical AND operation. The first part of this exercise will require you to construct such a quantum implementation for the OR operation. The logical OR operation takes two Boolean inputs and returns 1 as the result if either or both of the inputs are 1. It is often denoted using the $\vee$ symbol (it is also called the disjunction operation). The truth table for the classical OR operation is given below:

| $x$ | $y$ | $x\vee y$ |
|-----|-----|-----------|
| 0   | 0   | 0         |
| 0   | 1   | 1         |
| 1   | 0   | 1         |
| 1   | 1   | 1         |

De Morgan's laws Finding a gate that is the direct quantum analogue of the OR operation might prove to be difficult. Luckily, there is a set of two relations in Boolean algebra that can provide a helpful workaround. $$\overline{x\vee y} = \overline{x} \wedge \overline{y}$$This is read as _not ($x$ or $y$) = not $x$ and not $y$_$$\overline{x\wedge y} = \overline{x} \vee \overline{y}$$This is read as _not ($x$ and $y$) = not $x$ or not $y$_ **Problem 1**1. Using the expressions for De Morgan's laws above, construct a Boolean formula for $x \vee y$ consisting only of the logical AND and NOT operations. 2. We have provided the `QuantumCircuit()` for a quantum bit oracle to implement the OR operation. Apply the appropriate gates to this circuit based on the expression calculated in Step 1. Do NOT add a measurement. Warning: Please be careful to ensure that the circuit below matches the oracle structure, i.e. the input qubit states are not altered after the operation of the oracle.
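###Markdown
 A quick classical sanity check of De Morgan's laws (an illustrative addition to this notebook; it does not give away the oracle construction asked for in Problem 1).
###Code
# verify both De Morgan identities over all Boolean inputs
for x in (0, 1):
    for y in (0, 1):
        assert (not (x or y)) == ((not x) and (not y))
        assert (not (x and y)) == ((not x) or (not y))
###Output
_____no_output_____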
###Code
or_oracle = QuantumCircuit(3)
# Do not change below this line
or_oracle.draw(output='mpl')
or_tt = ['000', '011', '101', '111']
def check_or_oracle(tt_row):
check_qc = QuantumCircuit(3)
for i in range(2):
if (tt_row[i] == '1'):
check_qc.x(i)
check_qc.extend(or_oracle)
check_qc.measure_all()
return (execute(check_qc.reverse_bits(),backend=QasmSimulator(), shots=1).result().get_counts().most_frequent() == tt_row)
try:
assert list(or_oracle.count_ops()) != [], f"Circuit cannot be empty"
assert 'measure' not in or_oracle.count_ops(), f"Please remove measurements"
assert set(or_oracle.count_ops().keys()).difference(basis_gates) == set(), f"Only the following basic gates are allowed: {basis_gates}"
for tt_row in or_tt:
assert check_or_oracle(tt_row), f" Input {tt_row[0:2]}: Your encoding is not correct"
print("Your oracle construction passed all checks")
except AssertionError as e:
print(f'Your code has an error: {e.args[0]}')
except Exception as e:
print(f'This error occured: {e.args[0]}')
###Output
_____no_output_____
###Markdown
Linear functions and the Bernstein-Vazirani AlgorithmThe Deutsch-Jozsa algorithm allows us to distinguish between constant and balanced Boolean functions. There is an extension to the Deutsch-Jozsa algorithm that allows us to extract some information about a certain other class of functions. This is what we will be exploring now. An $n$-bit Boolean function $f(x)$ is called linear if it can be written as the bitwise product of a particular $n$-bit binary string $a$ and the function variable $x$ (which is also a binary string of length $n$), i.e., linear functions can be written as $$f(x) = a\cdot x \;(\text{ mod } 2)$$You might recall from the discussion on the Hadamard transform, that for any general $n$-qubit computational basis state, the Hadamard transform has the following effect$$H^{\otimes n}|a\rangle = \frac{1}{2^{n/2}}\sum\limits_{x=0}^{2^n-1}(-1)^{a\cdot x}|x\rangle$$Due to the self-inverting nature of the Hadamard transformation, we can apply $H^{\otimes n}$ to both sides of the above equation and get (after flipping sides)$$H^{\otimes n} \left( \frac{1}{2^{n/2}}\sum\limits_{x=0}^{2^n-1}(-1)^{a\cdot x}|x\rangle \right) = |a\rangle$$The term inside the brackets on the left hand side of the equation looks like what we would get if we passed an equal superposition state through a phase oracle for the Boolean function $f(x) = a\cdot x \;(\text{ mod } 2)$. This is depicted in the equation below:$$\frac{1}{2^{n/2}}\sum\limits_{x=0}^{2^n-1}|x\rangle \xrightarrow{U_f} \frac{1}{2^{n/2}}\sum\limits_{x=0}^{2^n-1}(-1)^{a\cdot x}|x\rangle$$The Bernstein-Vazirani algorithm uses all the things discussed above. Given an oracle for a function that we know is linear, we can find the binary string $a$ corresponding to the linear function. The steps of the algorithm are shown in the equation below and then described in words.$$|0^{\otimes n}\rangle \xrightarrow{H^{\otimes n}} \frac{1}{2^{n/2}}\sum\limits_{x=0}^{2^n-1}|x\rangle \xrightarrow{U_f} \frac{1}{2^{n/2}}\sum\limits_{x=0}^{2^n-1}(-1)^{a\cdot x}|x\rangle \xrightarrow{H^{\otimes n}} |a\rangle$$In the expression above, we've omitted (for readability) the mention of the extra qubit in the $|-\rangle$ state that is required for the oracle output, but it is necessary. **Problem 2**Consider the Boolean function $f(x) = (\overline{x_1} \wedge x_0) \vee (x_1 \wedge \overline{x_0})$. Take it as given that this function is a linear function. We want to find the 2-bit binary string $a$. Your objective is to use the expression above to implement the quantum bit oracle for this Boolean function. This is more complex than any expression we have seen so far, so the implementation will be carried out in a few steps. A `QuantumCircuit()` with 3 qubits is provided below.- $q_0$ and $q_1$ are the input qubits for the variables $x_0$ and $x_1$ respectively.- $q_2$ is the output qubit and stores the value of the final Boolean function expression
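Before turning to the oracle, a quick classical illustration of the definition $f(x) = a\cdot x \;(\text{ mod } 2)$ above (the helper function and the example value of $a$ here are ours, not part of the assignment):

```python
# f(x) = a . x (mod 2) is the parity of the bitwise AND of a and x
def linear_f(a: int, x: int) -> int:
    return bin(a & x).count("1") % 2

# e.g. for the 3-bit string a = 101:
print([linear_f(0b101, x) for x in range(8)])   # [0, 1, 0, 1, 1, 0, 1, 0]
```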
###Code
bv_oracle = QuantumCircuit(3)
bv_oracle.cx(0,2)
bv_oracle.cx(1,2)
bv_oracle.draw('mpl')
###Output
_____no_output_____
###Markdown
Using the bit oracle provided above, construct a circuit for the Bernstein-Vazirani algorithm.The steps for the algorithm are as follows:1. Start with $(n+1)$ qubits in the $|0\rangle$ state. Here $n=2$. The first two qubits $q_0$ and $q_1$ will serve as input to the oracle. The extra qubit is used for the oracle output. Since we need a phase oracle, add gates to prepare the state $|-\rangle$ in this qubit ($q_2$). 2. Apply an $H$ gate to all the input qubits. 3. Apply the oracle $U_f$. 4. Apply an $H$ gate to all the input qubits. 5. Measure the $n$ input qubits. If the function corresponding to $U_f$ is linear, the final state measured will be the binary string $a$.Astute readers will notice that the steps followed in the Bernstein-Vazirani and the Deutsch-Jozsa algorithms are the same. `bv_circ` is a `QuantumCircuit(3,2)` given below. Add necessary operations to the circuit below to realise the steps for the Bernstein-Vazirani algorithm.
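For orientation, a generic sketch of these steps for an arbitrary $n$-bit linear-function oracle (hedged: `oracle` is a placeholder `QuantumCircuit` acting on $n+1$ qubits, and bit-ordering conventions may need adjusting to match the checker below):

```python
def bernstein_vazirani(oracle, n):
    qc = QuantumCircuit(n + 1, n)
    qc.x(n)
    qc.h(n)                           # output qubit prepared in |->
    qc.h(range(n))                    # equal superposition on the inputs
    qc.compose(oracle, inplace=True)  # phase oracle via phase kickback
    qc.h(range(n))
    qc.measure(range(n), range(n))    # measurement yields the string a
    return qc
```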
###Code
bv_circ = QuantumCircuit(3,2)
# Do not remove this line
bv_circ.draw(output='mpl')
try:
assert list(bv_circ.count_ops()) != [], f"Circuit cannot be empty"
assert set(bv_circ.count_ops().keys()).difference(basis_gates) == set(), f"Only the following basic gates are allowed: {basis_gates}"
counts = execute(bv_circ.reverse_bits(), backend=QasmSimulator(), shots=8192).result().get_counts()
assert list(counts.keys()) == ['11'], "Your circuit did not produce the right answer"
print(" Your circuit produced the correct output. Please submit for evaluation.")
except AssertionError as e:
print(f'Your code has an error: {e.args[0]}')
except Exception as e:
print(f'This error occured: {e.args[0]}')
plot_histogram(counts)
###Output
_____no_output_____
###Markdown
CMSC6950 Assignment 3Due on Tuesday, May 29, 2018Solve all questions.*Any two identical submitted assignments from different students will be considered as copies of each other.*Save your assignment as an ipynb file with a filename ”userid_assignment3.ipynb”. [For userid, I mean your MUN user id/email .e.g. abc123 and not your student number]. Submit your assignment through the D2L dropbox. Name : Student Number: Question 1Define the following vectors and matrices:```vec1 = np.array([ -1., 4., -9.])mat1 = np.array([[ 1., 3., 5.], [7., -9., 2.], [4., 6., 8. ]]```1. You can multiply vectors by constants. Compute `vec2 = (np.pi/4) * vec1`2. The cosine function can be applied to a vector to yield a vector of cosines. Compute `vec2 = np.cos(vec2)`3. You can add vectors and multiply by scalars. Compute `vec3 = vec1 + 2 * vec2`4. The Euclidean norm of a matrix or a vector is available using `la.norm`. Compute `la.norm(vec3)`5. You can do row-column matrix multiplication. Compute the product of `mat1` and `vec3` and set `vec4` equal to the result.6. Compute the transpose of `mat1`.7. Compute the determinant of `mat1`.8. Find the smallest element in `vec1`.9. What function would you use to find the value of `j` so that `vec1[j]` is equal to the smallest element in `vec1`?10. What expression would you use to find the smallest element of the matrix `mat1`? Question 21. Generate and plot values (50,50) drawn from a normal distribution. Make a plot as shown in (a).2. Manipulate the array such that the lowest values are in the top-left corner, the highest in the bottom-right. Make a plot as shown in (b). Question 3Using the data available at [http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Text/hawaii-TAVG-Trend.txt](http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Text/hawaii-TAVG-Trend.txt), reproduce the following plot using Numpy, Matplotlib, and/or Scipy.[High resolution version](http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/hawaii-TAVG-Trend.pdf) Here's a nice way to ensure the data file is always available locally:
###Code
import urllib.request
urllib.request.urlretrieve('http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Text/hawaii-TAVG-Trend.txt',
'hawaii-TAVG-Trend.txt')
!ls -l 'hawaii-TAVG-Trend.txt'
###Output
-rw-------@ 1 jmunroe staff 157295 22 May 11:44 hawaii-TAVG-Trend.txt
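###Markdown
For reference, a minimal NumPy sketch of the operations requested in Question 1 (assuming `la` is `numpy.linalg`; note the `mat1` snippet in the question is missing its closing parenthesis):
```python
import numpy as np
import numpy.linalg as la

vec1 = np.array([-1., 4., -9.])
mat1 = np.array([[1., 3., 5.], [7., -9., 2.], [4., 6., 8.]])

vec2 = (np.pi / 4) * vec1    # 1. scale by a constant
vec2 = np.cos(vec2)          # 2. elementwise cosine
vec3 = vec1 + 2 * vec2       # 3. vector arithmetic
norm3 = la.norm(vec3)        # 4. Euclidean norm
vec4 = mat1 @ vec3           # 5. matrix-vector product
mat1_T = mat1.T              # 6. transpose
det1 = la.det(mat1)          # 7. determinant
smallest = vec1.min()        # 8. smallest element of vec1
j = vec1.argmin()            # 9. index of the smallest element
mat_min = mat1.min()         # 10. smallest element of mat1
```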
###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name below:
###Code
let name = ""
let rollno = ""
###Output
_____no_output_____
###Markdown
Important notes about grading:1. **Compiler errors:** All code you submit must compile. Programs that do not compile will probably receive an automatic zero. If you are having trouble getting your assignment to compile, please visit consulting hours. If you run out of time, it is better to comment out the parts that do not compile, than hand in a more complete file that does not compile.2. **Late assignments:** Please carefully review the course website's policy on late assignments, as all assignments handed in after the deadline will be considered late. Verify on moodle that you have submitted the correct version, before the deadline. Submitting the incorrect version before the deadline and realizing that you have done so after the deadline will be counted as a late submission. Mutability and ModulesIn this assignment, you will design and implement a couple of mutable data structures and operations on them. Problem 1Implement structures `IntShowable` and `FloatShowable` that satisfy the following signature. Use `string_of_int: int -> string` and `string_of_float: float -> string` for the corresponding `string_of_t` functions.
###Code
module type Showable = sig
type t
val string_of_t : t -> string
end
(* Implement modules IntShowable and FloatShowable *)
(* YOUR CODE HERE *)
raise (Failure "Not implemented")
assert (IntShowable.string_of_t 10 = "10");
assert (FloatShowable.string_of_t 0.0 = "0.")
###Output
_____no_output_____
###Markdown
Problem 2Implement a functor ```ocamlmodule MakeNode : functor (C : Showable) -> DoublyLinkedListNode with type content = C.t``` where `DoublyLinkedListNode` is the module type:
###Code
module type DoublyLinkedListNode = sig
type t
(** The type of doubly linked list node *)
type content
(** The type of content stored in the doubly linked list node *)
val create : content -> t
(** create a new doubly linked list node with [content] as the content.
The next and previous nodes are [None]. *)
val get_next : t -> t option
(** [get_next t] returns [Some t'] if [t'] is the successor node of [t].
If [t] has no successor, then return [None] *)
val get_prev : t -> t option
(** [get_prev t] returns [Some t'] if [t'] is the predecessor node of [t].
If [t] has no predecessor, then return [None] *)
val get_content : t -> content
(** [content t] returns the content [c] of node [t] *)
val set_next : t -> t option -> unit
(** [set_next t t'] updates the next node of [t] to be [t'] *)
val set_prev : t -> t option -> unit
(** [set_prev t t'] updates the prev node of [t] to be [t'] *)
val string_of_content : content -> string
(** [string_of_content c] returns the string form of content [c] *)
end
(* Implement the functor MakeNode *)
module MakeNode (C : Showable) : DoublyLinkedListNode with type content = C.t = struct
(* YOUR CODE HERE *)
raise (Failure "Not implemented")
end
module IntNode = MakeNode(IntShowable);;
let open IntNode in
let i1 = create 1 in
assert (get_content i1 = 1)
let open IntNode in
let i1,i2,i3 = create 1, create 2, create 3 in
set_next i1 @@ Some i2;
set_next i2 @@ Some i3;
set_prev i2 @@ Some i1;
set_prev i3 @@ Some i2;
let _ = assert (match get_next i1 with None -> false | Some i2 -> get_content i2 = 2) in
()
###Output
_____no_output_____
###Markdown
Problem 3Implement a functor ```ocaml module MakeList : functor (N : DoublyLinkedListNode) -> DoublyLinkedList with type node = N.t and type content = N.content```where the module type `DoublyLinkedList` is defined as follows:
###Code
module type DoublyLinkedList = sig
type t
(** The type of doubly linked list *)
type node
(** The type of a doubly linked list node *)
type content
(** The type of content stored in the [node] of doubly linked list *)
val create : unit -> t
(** Creates a new doubly linked list *)
val assign : t -> node option -> unit
(** [assign t n] makes the node [n] the head node of the list.
The original contents of the list are dropped. *)
val t_of_list : content list -> t
(** [t_of_list l] returns a new doubly linked list with the list of
elements from the list [l]. *)
val is_empty : t -> bool
(** [is_empty l] return true if [t] is empty *)
val first : t -> node option
(** [first l] return [Some n] if the list is non-empty. Otherwise, return [None] *)
val insert_first : t -> content -> node
(** [insert_first l c] inserts a new doubly linked list node [n] as the first
node in [l] with [c] as the content. Returns [n]. *)
val insert_after : node -> content -> node
(** [insert_after n c] inserts a new node [n'] with content [c] after the node [n].
Returns [n']. *)
val remove : t -> node -> unit
(** [remove l n] removes the node [n] from the list [l] *)
val iter : t -> (content -> unit) -> unit
(** [iter l f] applies [f] to each element of the list in the list order. *)
val iter_node : t -> (node -> unit) -> unit
(** [iter_node l f] applies [f] to each node of the list in the list order.
[f] may delete the current node or insert a new node after the current node.
In either case, [iter_node] is applied to the rest of the original list. *)
val string_of_t : t -> string
(** [string_of_t (t_of_list [1;2;3])] returns the string "[1->2->3]".
[string_of_t (t_of_list [])] returns the string "[]" *)
end
(* Implement the functor MakeList *)
module MakeList (N : DoublyLinkedListNode) : DoublyLinkedList with type content = N.content
and type node = N.t =
struct
(* YOUR CODE HERE *)
raise (Failure "Not implemented")
end
module L = MakeList(IntNode);;
let open L in
assert (is_empty (t_of_list []));
let l = t_of_list [1;2;3] in
assert (not @@ is_empty l);
let n = match first l with
| None -> failwith "impossible"
| Some n -> n
in
assert (1 = IntNode.get_content n);
L.(assign l (first (t_of_list [4;5;6])));
match first l with
| None -> failwith "impossible"
| Some n -> assert (IntNode.get_content n = 4)
let open L in
let l = t_of_list [1;2;3] in
let n1 = match first l with
| Some n1 -> n1
| None -> failwith "impossible"
in
let n2 = match IntNode.get_next n1 with
| Some n2 -> n2
| None -> failwith "impossible"
in
remove l n2;
let n3 = match IntNode.get_next n1 with
| Some n3 -> n3
| None -> failwith "impossible"
in
let _ = assert (IntNode.get_content n3 = 3) in
remove l n1;
match first l with
| None -> failwith "impossible"
| Some n3 -> assert (IntNode.get_content n3 = 3)
let open L in
let l = t_of_list [1;2;3] in
let s = ref 0 in
iter l (fun c -> s := !s + c);
assert (!s = 6);
iter_node l (fun n -> if IntNode.get_content n mod 2 == 0 then remove l n);
let n1 = match first l with
| None -> failwith "impossible"
| Some n -> n
in
assert (IntNode.get_content n1 = 1)
###Output
_____no_output_____
###Markdown
Problem 4Implement the functor```ocamlmodule MakeDllFunctions : functor (N : DoublyLinkedListNode) -> Dll_functions with type node = N.t and type content = N.content and type dll = MakeList(N).t```where the module type `Dll_functions` is:
###Code
module type Dll_functions = sig
type dll
(** The type of doubly linked list *)
type node
(** The type of doubly linked list node *)
type content
(** The type of doubly linked list content *)
val length : dll -> int
(** Returns the length of the list.
Use [DLL.iter] to implement it. *)
val duplicate : dll -> unit
(** Given a doubly linked list [l] = [1->2], [duplicate l] returns [1->1->2->2].
Use [DLL.iter_node] to implement it. *)
val rotate : dll -> int -> unit
(** [rotate l n] rotates the list by [n] nodes.
If the list is [1->2->3->4->5], then [rotate l 0] will not modify the list.
[rotate l 2] will modify the list to be [3->4->5->1->2].
Assume that [0 <= n < length l]. *)
val reverse : dll -> unit
(** [reverse l] reverses the list *)
end
module MakeDllFunctions (N : DoublyLinkedListNode)
: Dll_functions with type node = N.t
and type content = N.content
and type dll = MakeList(N).t = struct
(* YOUR CODE HERE *)
raise (Failure "Not implemented")
end
let module F = MakeDllFunctions (IntNode) in
let module M = MakeList(IntNode) in
let open M in
let l = t_of_list [1;2;3] in
let _ = assert (F.length l = 3) in
F.duplicate l;
let _ = assert (F.length l = 6) in
()
let module F = MakeDllFunctions (IntNode) in
let module M = MakeList(IntNode) in
let open M in
let l = t_of_list [1;2;3] in
let _ = assert (F.length l = 3) in
F.duplicate l;
let _ = assert (F.length l = 6) in
let _ = assert ("[1->1->2->2->3->3]" = string_of_t l) in
F.rotate l 3;
let _ = assert ("[2->3->3->1->1->2]" = string_of_t l) in
F.rotate l 0;
let _ = assert ("[2->3->3->1->1->2]" = string_of_t l) in
let l = t_of_list [] in
F.rotate l 0;
let _ = assert ("[]" = string_of_t l) in
()
let module F = MakeDllFunctions (IntNode) in
let module M = MakeList(IntNode) in
let open M in
let l = t_of_list [1;2;3] in
F.reverse l;
assert ("[3->2->1]" = string_of_t l)
###Output
_____no_output_____
###Markdown
Hash TableImplement a hash table using [separate chaining](https://en.wikipedia.org/wiki/Hash_tableSeparate_chaining_with_linked_lists). Use an OCaml array for the array of buckets. The documentation of OCaml arrays is found in the [OCaml manual](https://caml.inria.fr/pub/docs/manual-ocaml/libref/Array.html).Each element in the array is a doubly linked list in order to allow chaining. Use your doubly linked list that you have implemented earlier. The hash table will be implemented as a functor that accepts two module arguments. The first one defines the type of the key, size of the array and the hash function. The signature of that module is:
###Code
module type Key = sig
type t
(** The type of key *)
val num_buckets : int
(** The number of buckets i.e, the size of the buckets array *)
val hash : t -> int
(** Hashes the key to an integer between [0, num_buckets) *)
val string_of_t : t -> string
(** Returns the string representation of key *)
end
###Output
_____no_output_____
###Markdown
The second functor argument defines the type of content and its signature is `Showable` that we had defined at the beginning of this assignment. The signature of the `Hash_table` module is:
###Code
module type Hash_table = sig
type key
(** The type of key *)
type content
(** The type of value *)
type t
(** The type of the hash table *)
val create : unit -> t
(** Creates a new hash table *)
val put : t -> key -> content -> unit
(** [put h k v] associates the key [k] with value [v] in the hash table [h].
If the binding for [k] already exists, overwrite it. *)
val get : t -> key -> content option
(** [get h k] returns [Some v], where [v] is the value associated with the
key [k] in the hash table [h]. Returns [None] if the [k] is not bound in [h]. *)
val remove : t -> key -> unit
(** [remove h k] removes the association for key [k] from the hash table [h] if it exists *)
val length : t -> int
(** Return the number of key-value pairs in the hash table *)
val string_of_t : t -> string
(** Returns a string version of the hash table. Can be in any format. Only for debugging. *)
val chain_length : t -> int -> int
(** [chain_length h bucket] returns the length of the doubly linked list
at the index [bucket] in the underlying array.
Assume that [0 <= bucket < length(bucket_array)] *)
end
###Output
_____no_output_____
###Markdown
Problem 5Implement the functor ```ocamlmodule Make_hash_table : functor (K : Key) (C : Showable) -> Hash_table with type key = K.t and type content = C.t```
###Code
(* Implement the functor Make_hash_table *)
(* YOUR CODE HERE *)
raise (Failure "Not implemented")
module K : Key with type t = int = struct
type t = int
let num_buckets = 64
let hash k = k mod num_buckets
let string_of_t = string_of_int
end
module StringShowable : Showable with type t = string = struct
type t = string
let string_of_t s = s
end
module H = Make_hash_table(K)(StringShowable)
open H
;;
let h = create () in
put h 0 "zero";
put h 1 "one";
assert (get h 0 = Some "zero");
assert (get h 1 = Some "one");
put h 0 "-zero-";
assert (get h 0 = Some "-zero-")
let h = create () in
put h 0 "zero";
put h 1 "one";
assert (get h 0 = Some "zero");
assert (get h 1 = Some "one");
put h 64 "-zero-";
assert (get h 0 = Some "zero");
assert (get h 64 = Some "-zero-");
assert (length h = 3);
assert (chain_length h 0 = 2);
assert (chain_length h 1 = 1)
###Output
_____no_output_____ |
book/tutorials/sar/swesarr.ipynb | ###Markdown
SWESARR Tutorial---Introduction Objectives: This is a 30-minute tutorial where we will ... Introduce SWESARR Briefly introduce active and passive microwave remote sensing Learn how to access, filter, and visualize SWESARR data SWESARR Tutorial Quick References SWESARR SAR Data Pre-release FTP Server SWESARR Radiometer Data, SnowEx20, v1 SWESARR Blogspot What is SWESARR?
###Code
from IPython.display import Audio,Image, YouTubeVideo; id='5hVQusosGSg'; YouTubeVideo(id=id,width=600,height=300,start=210,end=238)
# courtesy of this github post
# https://gist.github.com/christopherlovell/e3e70880c0b0ad666e7b5fe311320a62
###Output
_____no_output_____
###Markdown
Airborne sensor system measuring active and passive microwave measurements Colocated measurements are taken simultaneously using an ultra-wideband antennaSWESARR gives us insights on the different ways active and passive signals are influenced by snow over large areas. Active and Passive? Microwave Remote Sensing? Passive Systems* All materials can naturally emit electromagnetic waves* What is the cause?* Material above zero Kelvin will display some vibration or movement of particles* These moving, charged particles will induce electromagnetic waves* If we're careful, we can measure these waves with a radio wave measuring tool, or "radiometer"* Radiometers see emissions from many sources, but they're usually very weak* It's important to design a radiometer that (1) minimizes side lobes and (2) allows for averaging over the main beam* For this reason, radiometers often have low spatial resolution| ✏️ | Radiometers allow us to study earth materials through incoherent averaging of naturally emitted signals ||---------------|:----------------------------------------------------------------------------------------------------------| Active Systems* While radiometers generally measure natural electromagnetic waves, radars measure man-made electromagnetic waves* Transmit your own wave, and listen for the returns* The return of this signal is dependent on the surface and volume characteristics of the material it contacts| ✏️ | Synthetic aperture radar allows for high spatial resolution through processing of coherent signals ||---------------|:----------------------------------------------------------------------------------------------------------|
###Code
%%HTML
<style>
td { font-size: 15px }
th { font-size: 15px }
</style>
###Output
_____no_output_____
###Markdown
SWESARR SensorsSWESARR Frequencies, Polarization, and Bandwidth Specification | Center-Frequency (GHz) | Band | Sensor | Bandwidth (MHz) | Polarization || ---------------------- | ---------- | ------------ | --------------- | ------------ || 9.65 | X | SAR | 200 | VH and VV || 13.6 | Ku | SAR | 200 | VH and VV || 17.25 | Ku | SAR | 200 | VH and VV || 10.65 | X | Radiometer | 200 | H || 18.7 | K | Radiometer | 200 | H || 36.5 | Ka | Radiometer | 1,000 | H | SWESARR Instrument SWESARR Spatiotemporal Coverage* Currently, there are two primary dataset coverages * **2019**: 04 November through 06 November * **2020**: 10 February through 12 February* Below: radiometer coverage for all passes made between February 10 to February 12, 2020* SWESARR flights cover many snowpit locations over the Grand Mesa area as shown by the dots in blue Reading SWESARR Data- SWESARR's SAR data is organized with a common file naming convention for finding the time, location, and type of data- [Lets look at the prerelease data on its homepage](https://glihtdata.gsfc.nasa.gov/files/radar/SWESARR/prerelease/)*** Accessing Data: SAR SAR Data Example
###Code
# Import several libraries.
# comments to the right could be useful for local installation on Windows.
from shapely import speedups # https://www.lfd.uci.edu/~gohlke/pythonlibs/
speedups.disable() # <-- handle a potential error in cartopy
# downloader library
import requests # !conda install -c anaconda requests
# raster manipulation libraries
import rasterio # https://www.lfd.uci.edu/~gohlke/pythonlibs/
from osgeo import gdal # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import cartopy.crs as ccrs # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import rioxarray as rxr # !conda install -c conda-forge rioxarray
import xarray as xr # !conda install -c conda-forge xarray dask netCDF4 bottleneck
# plotting tools
from matplotlib import pyplot # !conda install matplotlib
import datashader as ds # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import hvplot.xarray # !conda install hvplot
# append the subfolders of the current working directory to pythons path
import os
import sys
for sd in ["data", "util"]:
    sys.path.append(os.path.join(os.getcwd(), "swesarr", sd))
from helper import gdal_corners, join_files, join_sar_radiom
###Output
_____no_output_____
###Markdown
Select your data
###Code
# select files to download
# SWESARR data website
source_repo = 'https://glihtdata.gsfc.nasa.gov/files/radar/SWESARR/prerelease/'
# Example flight line
flight_line = 'GRMCT2_31801_20007_016_200211_225_XX_01/'
# SAR files within this folder
data_files = [
'GRMCT2_31801_20007_016_200211_09225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_09225VH_XX_01.tif',
'GRMCT2_31801_20007_016_200211_13225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_13225VH_XX_01.tif',
'GRMCT2_31801_20007_016_200211_17225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_17225VH_XX_01.tif'
]
# store the location of the SAR tiles as they're located on the SWESARR data server
remote_tiles = [source_repo + flight_line + d for d in data_files]
# create local output data directory
output_dir = '/tmp/swesarr/data/'
try:
os.makedirs(output_dir)
except FileExistsError:
print('output directory prepared!')
# store individual TIF files locally on our computer / server
output_paths = [output_dir + d for d in data_files]
###Output
_____no_output_____
###Markdown
Download SAR data and place into data folder
###Code
## for each file selected, store the data locally
##
## only run this block if you want to store data on the current
## server/hard drive this notebook is located.
##
################################################################
for remote_tile, output_path in zip(remote_tiles, output_paths):
# download data
r = requests.get(remote_tile)
# Store data (~= 65 MB/file)
if r.status_code == 200:
with open(output_path, 'wb') as f:
f.write(r.content)
###Output
_____no_output_____
###Markdown
Merge SAR datasets into single xarray file
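The heavy lifting here is done by the tutorial helper `join_files`. Conceptually, a merge like this can be sketched as follows (a hedged illustration of the idea, not the actual helper implementation, which may differ):

```python
def merge_tifs(paths):
    # Open each single-band GeoTIFF and stack them along a new "band" dimension
    bands = [rxr.open_rasterio(p).squeeze("band", drop=True) for p in paths]
    labels = xr.DataArray([os.path.basename(p) for p in paths], dims="band", name="band")
    return xr.concat(bands, dim=labels)
```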
###Code
da = join_files(output_paths)
da
###Output
_____no_output_____
###Markdown
Plot data with hvplot
###Code
# Set clim directly:
clim=(-20,20)
cmap='gray'
crs = ccrs.UTM(zone='12n')
tiles='EsriImagery'
da.hvplot.image(x='x',y='y',groupby='band',cmap=cmap,clim=clim,rasterize=True,
xlabel='Longitude',ylabel='Latitude',
frame_height=500, frame_width=500,
xformatter='%.1f',yformatter='%.1f', crs=crs, tiles=tiles, alpha=0.8)
###Output
_____no_output_____
###Markdown
| 🎉 | Congratulations! You now know how to download and display a SWESARR SAR dataset ! | 🎉 || :--- | :--- | :--- | Radiometer Data Example * SWESARR's radiometer data is publicly available at NSIDC* [Radiometer Data v1 Available Here](https://nsidc.org/data/SNEX20_SWESARR_TB/versions/1)
###Code
import pandas as pd # !conda install pandas
import numpy as np # !conda install numpy
import xarray as xr # !conda install -c anaconda xarray
import hvplot # !conda install hvplot
import hvplot.pandas
import holoviews as hv # !conda install -c conda-forge holoviews
from holoviews.operation.datashader import datashade
from geopy.distance import distance #!conda install -c conda-forge geopy
###Output
_____no_output_____
###Markdown
Downloading SWESARR Radiometer Data with `wget`* If you are running this on the SnowEx Hackweek server, `wget` should be configured.* If you are using this tutorial on your local machine, you'll need `wget`. * Linux Users - You should be fine. This is likely baked into your operating systems. Congratulations! You chose correctly. * Apple Users - The author of this textbox has never used a Mac. There are many command-line suggestions online. `sudo brew install wget`, `sudo port install wget`, etc. Try searching online! * Windows Users - [Check out this tutorial, page 2](https://blogs.nasa.gov/swesarr/wp-content/uploads/sites/305/2020/10/how_to_download_SWESARR_radar_data.pdf) You'll need to download binaries for `wget`, and you should really make it an environment variable! Be sure to be diligent before installing anything to your computer. Regardless, fill in your NASA Earthdata Login credentials and follow along!
###Code
!wget --quiet https://n5eil01u.ecs.nsidc.org/SNOWEX/SNEX20_SWESARR_TB.001/2020.02.11/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKKa225H_v01.csv -O {output_dir}/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKuKa225H_v03.csv
###Output
_____no_output_____
###Markdown
Select an example radiometer data file
###Code
# use the file we downloaded with wget above
excel_path = f'{output_dir}/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKuKa225H_v03.csv'
# read data
radiom = pd.read_csv(excel_path)
###Output
_____no_output_____
###Markdown
Let's examine the radiometer data file's contents
###Code
radiom.hvplot.table(width=1100)
###Output
_____no_output_____
###Markdown
Plot radiometer data with hvplot
###Code
# create several series from pandas dataframe
lon_ser = pd.Series( radiom['Longitude (deg)'].to_list() * (3) )
lat_ser = pd.Series( radiom['Latitude (deg)'].to_list() * (3) )
tb_ser = pd.Series(
radiom['TB X (K)'].to_list() + radiom['TB K (K)'].to_list() +
radiom['TB Ka (K)'].to_list(), name="Tb"
)
# get series length, create IDs for plotting
sl = len(radiom['TB X (K)'])
id_ser = pd.Series(
['X-band']*sl + ['K-band']*sl + ['Ka-band']*sl, name="ID"
)
frame = {'Longitude (deg)' : lon_ser, 'Latitude (deg)' : lat_ser,
'TB' : tb_ser, 'ID' : id_ser}
radiom_p = pd.DataFrame(frame)
del sl, lon_ser, lat_ser, tb_ser, id_ser, frame
radiom_p.hvplot.points('Longitude (deg)', 'Latitude (deg)', groupby='ID', geo=True, color='TB', alpha=1,
tiles='ESRI', height=400, width=500)
###Output
_____no_output_____
###Markdown
| 🎉 | Congratulations! You now know how to download and display a SWESARR radiometer dataset! | 🎉 || :--- | :--- | :--- | SAR and Radiometer Together * The novelty of SWESARR lies in its colocated SAR and radiometer systems* Let's try filtering the SAR dataset and plotting both datasets together* For this session, I've made the code a function in {download}`swesarr/util/helper.py `
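For intuition, here is a hedged sketch of one way the colocation could be done (not the actual `join_sar_radiom` implementation): reproject the radiometer lon/lat points into the SAR grid's UTM coordinates and sample the nearest SAR pixel.

```python
from pyproj import Transformer

# WGS84 lon/lat -> UTM zone 12N (EPSG:32612 assumed to match the SAR grid)
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32612", always_xy=True)
xs, ys = to_utm.transform(radiom['Longitude (deg)'].values, radiom['Latitude (deg)'].values)

# Nearest SAR pixel (all bands) under each radiometer footprint centre
nearest_sar = [da.sel(x=x, y=y, method="nearest").values for x, y in zip(xs, ys)]
```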
###Code
data_p, data_ser = join_sar_radiom(da, radiom)
data_p.hvplot.points('Longitude (deg)', 'Latitude (deg)', groupby='ID', geo=True, color='Measurements', alpha=1,
tiles='ESRI', height=400, width=500)
###Output
_____no_output_____
###Markdown
Exercise Exercise: Plot a time-series visualization of the filtered SAR channels from the output of the join_sar_radiom() function Plot a time-series visualization of the radiometer channels from the output of the join_sar_radiom() function Hint: the data series variable ( data_ser ) is a pandas data series. Use some of the methods shown above to read and plot the data!
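One generic pattern for the exercise (a hedged sketch using the `data_p` dataframe from the previous cell; adapt it to `data_ser` or to the channels you want):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(16, 9))
for label, grp in data_p.groupby("ID"):
    ax.plot(grp["Measurements"].to_numpy(), label=label)   # one series per channel
ax.set_xlabel("Sample index along the flight line")
ax.set_ylabel("Measurement (dB or K, depending on the channel)")
ax.legend()
plt.show()
```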
###Code
### Your Code Here #############################################################################################################
#
# Two of Many Options:
# 1.) Go the matplotlib route
# a.) Further reading below:
# https://matplotlib.org/stable/tutorials/introductory/pyplot.html
#
# 2.) Try using hvplot tools if you like
# a.) Further reading below:
# https://hvplot.holoviz.org/user_guide/Plotting.html
#
# Remember, if you don't use a library all of the time, you'll end up <search engine of your choice>-ing it. Go crazy!
#
################################################################################################################################
# configure some inline parameters to make things pretty / readable if you'd like to go with matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (16, 9) # (w, h)
###Output
_____no_output_____
###Markdown
SWESARR Tutorial---Introduction Objectives: This is a 30-minute tutorial where we will ... Introduce SWESARR Briefly introduce active and passive microwave remote sensing Learn how to access, filter, and visualize SWESARR data SWESARR Tutorial Quick References SWESARR SAR Data Pre-release FTP Server SWESARR Radiometer Data, SnowEx20, v1 SWESARR Blogspot What is SWESARR?
###Code
from IPython.display import Audio,Image, YouTubeVideo; id='5hVQusosGSg'; YouTubeVideo(id=id,width=600,height=300,start=210,end=238)
# courtesy of this github post
# https://gist.github.com/christopherlovell/e3e70880c0b0ad666e7b5fe311320a62
###Output
_____no_output_____
###Markdown
Airborne sensor system measuring active and passive microwave measurements Colocated measurements are taken simultaneously using an ultra-wideband antennaSWESARR gives us insights on the different ways active and passive signals are influenced by snow over large areas. Active and Passive? Microwave Remote Sensing? Passive Systems* All materials can naturally emit electromagnetic waves* What is the cause?* Material above zero Kelvin will display some vibration or movement of particles* These moving, charged particles will induce electromagnetic waves* If we're careful, we can measure these waves with a radio wave measuring tool, or "radiometer"* Radiometers see emissions from many sources, but they're usually very weak* It's important to design a radiometer that (1) minimizes side lobes and (2) allows for averaging over the main beam* For this reason, radiometers often have low spatial resolution| ✏️ | Radiometers allow us to study earth materials through incoherent averaging of naturally emitted signals ||---------------|:----------------------------------------------------------------------------------------------------------| Active Systems* While radiometers generally measure natural electromagnetic waves, radars measure man-made electromagnetic waves* Transmit your own wave, and listen for the returns* The return of this signal is dependent on the surface and volume characteristics of the material it contacts| ✏️ | Synthetic aperture radar allows for high spatial resolution through processing of coherent signals ||---------------|:----------------------------------------------------------------------------------------------------------|
###Code
%%HTML
<style>
td { font-size: 15px }
th { font-size: 15px }
</style>
###Output
_____no_output_____
###Markdown
SWESARR SensorsSWESARR Frequencies, Polarization, and Bandwidth Specification | Center-Frequency (GHz) | Band | Sensor | Bandwidth (MHz) | Polarization || ---------------------- | ---------- | ------------ | --------------- | ------------ || 9.65 | X | SAR | 200 | VH and VV || 13.6 | Ku | SAR | 200 | VH and VV || 17.25 | Ku | SAR | 200 | VH and VV || 10.65 | X | Radiometer | 200 | H || 18.7 | K | Radiometer | 200 | H || 36.5 | Ka | Radiometer | 1,000 | H | SWESARR Instrument SWESARR Spatiotemporal Coverage* Currently, there are two primary dataset coverages * **2019**: 04 November through 06 November * **2020**: 10 February through 12 February* Below: radiometer coverage for all passes made between February 10 to February 12, 2020* SWESARR flights cover many snowpit locations over the Grand Mesa area as shown by the dots in blue Reading SWESARR Data- SWESARR's SAR data is organized with a common file naming convention for finding the time, location, and type of data- [Lets look at the prerelease data on its homepage](https://glihtdata.gsfc.nasa.gov/files/radar/SWESARR/prerelease/)*** Accessing Data: SAR SAR Data Example
###Code
# Import several libraries.
# comments to the right could be useful for local installation on Windows.
from shapely import speedups # https://www.lfd.uci.edu/~gohlke/pythonlibs/
speedups.disable() # <-- handle a potential error in cartopy
# downloader library
import requests # !conda install -c anaconda requests
# raster manipulation libraries
import rasterio # https://www.lfd.uci.edu/~gohlke/pythonlibs/
from osgeo import gdal # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import cartopy.crs as ccrs # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import rioxarray as rxr # !conda install -c conda-forge rioxarray
import xarray as xr # !conda install -c conda-forge xarray dask netCDF4 bottleneck
# plotting tools
from matplotlib import pyplot # !conda install matplotlib
import datashader as ds # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import hvplot.xarray # !conda install hvplot
# append the subfolders of the current working directory to pythons path
import os
import sys
sys.path.append("./swesarr/util")
from helper import gdal_corners, join_files, join_sar_radiom
%%bash
# Retrieve a copy of data files used in this tutorial from Zenodo.org:
# Re-running this cell will not re-download things if they already exist
mkdir -p /tmp/tutorial-data
cd /tmp/tutorial-data
wget -q -nc -O data.zip https://zenodo.org/record/5504396/files/sar.zip
unzip -q -n data.zip
rm data.zip
###Output
_____no_output_____
###Markdown
Select your data
###Code
TUTORIAL_DATA = '/tmp/tutorial-data/sar/swesarr/'
# SWESARR data website
source_repo = 'https://glihtdata.gsfc.nasa.gov/files/radar/SWESARR/prerelease/'
# Example flight line
flight_line = 'GRMCT2_31801_20007_016_200211_225_XX_01/'
# SAR files within this folder
data_files = [
'GRMCT2_31801_20007_016_200211_09225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_09225VH_XX_01.tif',
'GRMCT2_31801_20007_016_200211_13225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_13225VH_XX_01.tif',
'GRMCT2_31801_20007_016_200211_17225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_17225VH_XX_01.tif'
]
# store the location of the SAR tiles as they're located on the SWESARR data server
remote_tiles = [source_repo + flight_line + d for d in data_files]
# store individual TIF files locally on our computer / server
output_paths = [TUTORIAL_DATA + d for d in data_files]
###Output
_____no_output_____
###Markdown
Download SAR data and place into data folder
###Code
if not os.path.exists(TUTORIAL_DATA):
## for each file selected, store the data locally
##
## only run this block if you want to store data on the current
## server/hard drive this notebook is located.
##
################################################################
for remote_tile, output_path in zip(remote_tiles, output_paths):
# download data
r = requests.get(remote_tile)
# Store data (~= 65 MB/file)
if r.status_code == 200:
with open(output_path, 'wb') as f:
f.write(r.content)
###Output
_____no_output_____
###Markdown
Merge SAR datasets into single xarray file
###Code
da = join_files(output_paths)
da
###Output
_____no_output_____
###Markdown
Plot data with hvplot
###Code
# Set clim directly:
clim=(-20,20)
cmap='gray'
crs = ccrs.UTM(zone='12n')
tiles='EsriImagery'
da.hvplot.image(x='x',y='y',groupby='band',cmap=cmap,clim=clim,rasterize=True,
xlabel='Longitude',ylabel='Latitude',
frame_height=500, frame_width=500,
xformatter='%.1f',yformatter='%.1f', crs=crs, tiles=tiles, alpha=0.8)
###Output
_____no_output_____
###Markdown
| 🎉 | Congratulations! You now know how to download and display a SWESARR SAR dataset ! | 🎉 || :--- | :--- | :--- | Radiometer Data Example * SWESARR's radiometer data is publicly available at NSIDC* [Radiometer Data v1 Available Here](https://nsidc.org/data/SNEX20_SWESARR_TB/versions/1)
###Code
import pandas as pd # !conda install pandas
import numpy as np # !conda install numpy
import xarray as xr # !conda install -c anaconda xarray
import hvplot # !conda install hvplot
import hvplot.pandas
import holoviews as hv # !conda install -c conda-forge holoviews
from holoviews.operation.datashader import datashade
from geopy.distance import distance #!conda install -c conda-forge geopy
###Output
_____no_output_____
###Markdown
Downloading SWESARR Radiometer Data with `wget`* If you are running this on the SnowEx Hackweek server, `wget` should be configured.* If you are using this tutorial on your local machine, you'll need `wget`. * Linux Users - You should be fine. This is likely baked into your operating systems. Congratulations! You chose correctly. * Apple Users - The author of this textbox has never used a Mac. There are many command-line suggestions online. `sudo brew install wget`, `sudo port install wget`, etc. Try searching online! * Windows Users - [Check out this tutorial, page 2](https://blogs.nasa.gov/swesarr/wp-content/uploads/sites/305/2020/10/how_to_download_SWESARR_radar_data.pdf) You'll need to download binaries for `wget`, and you should really make it an environment variable! Be sure to be diligent before installing anything to your computer. Regardless, fill in your NASA Earthdata Login credentials and follow along!
###Code
# To get the original data from NSIDC
#!wget --quiet https://n5eil01u.ecs.nsidc.org/SNOWEX/SNEX20_SWESARR_TB.001/2020.02.11/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKKa225H_v01.csv -O {output_dir}/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKuKa225H_v03.csv
###Output
_____no_output_____
###Markdown
Select an example radiometer data file
###Code
# use the radiometer CSV from the tutorial data bundle downloaded above
excel_path = f'{TUTORIAL_DATA}/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKuKa225H_v03.csv'
# read data
radiom = pd.read_csv(excel_path)
###Output
_____no_output_____
###Markdown
Let's examine the radiometer data file's contents
###Code
radiom.hvplot.table(width=1100)
###Output
_____no_output_____
###Markdown
Plot radiometer data with hvplot
###Code
# create several series from pandas dataframe
lon_ser = pd.Series( radiom['Longitude (deg)'].to_list() * (3) )
lat_ser = pd.Series( radiom['Latitude (deg)'].to_list() * (3) )
tb_ser = pd.Series(
radiom['TB X (K)'].to_list() + radiom['TB K (K)'].to_list() +
radiom['TB Ka (K)'].to_list(), name="Tb"
)
# get series length, create IDs for plotting
sl = len(radiom['TB X (K)'])
id_ser = pd.Series(
['X-band']*sl + ['K-band']*sl + ['Ka-band']*sl, name="ID"
)
frame = {'Longitude (deg)' : lon_ser, 'Latitude (deg)' : lat_ser,
'TB' : tb_ser, 'ID' : id_ser}
radiom_p = pd.DataFrame(frame)
del sl, lon_ser, lat_ser, tb_ser, id_ser, frame
radiom_p.hvplot.points('Longitude (deg)', 'Latitude (deg)', groupby='ID', geo=True, color='TB', alpha=1,
tiles='ESRI', height=400, width=500)
###Output
_____no_output_____
###Markdown
| 🎉 | Congratulations! You now know how to download and display a SWESARR radiometer dataset! | 🎉 || :--- | :--- | :--- | SAR and Radiometer Together * The novelty of SWESARR lies in its colocated SAR and radiometer systems* Let's try filtering the SAR dataset and plotting both datasets together* For this session, I've made the code a function in {download}`swesarr/util/helper.py `
###Code
data_p, data_ser = join_sar_radiom(da, radiom)
data_p.hvplot.points('Longitude (deg)', 'Latitude (deg)', groupby='ID', geo=True, color='Measurements', alpha=1,
tiles='ESRI', height=400, width=500)
###Output
_____no_output_____
###Markdown
Exercise Exercise: Plot a time-series visualization of the filtered SAR channels from the output of the join_sar_radiom() function Plot a time-series visualization of the radiometer channels from the output of the join_sar_radiom() function Hint: the data series variable ( data_ser ) is a pandas data series. Use some of the methods shown above to read and plot the data!
###Code
### Your Code Here #############################################################################################################
#
# Two of Many Options:
# 1.) Go the matplotlib route
# a.) Further reading below:
# https://matplotlib.org/stable/tutorials/introductory/pyplot.html
#
# 2.) Try using hvplot tools if you like
# a.) Further reading below:
# https://hvplot.holoviz.org/user_guide/Plotting.html
#
# Remember, if you don't use a library all of the time, you'll end up <search engine of your choice>-ing it. Go crazy!
#
################################################################################################################################
# configure some inline parameters to make things pretty / readable if you'd like to go with matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (16, 9) # (w, h)
###Output
_____no_output_____
###Markdown
SWESARR Tutorial---Introduction Objectives: This is a 30-minute tutorial where we will ... Introduce SWESARR Briefly introduce active and passive microwave remote sensing Learn how to access, filter, and visualize SWESARR data SWESARR Tutorial Quick References SWESARR SAR Data Pre-release FTP Server SWESARR Radiometer Data, SnowEx20, v1 SWESARR Blogspot What is SWESARR?
###Code
from IPython.display import Audio,Image, YouTubeVideo; id='5hVQusosGSg'; YouTubeVideo(id=id,width=600,height=300,start=210,end=238)
# courtesy of this github post
# https://gist.github.com/christopherlovell/e3e70880c0b0ad666e7b5fe311320a62
###Output
_____no_output_____
###Markdown
Airborne sensor system measuring active and passive microwave measurements Colocated measurements are taken simultaneously using an ultra-wideband antennaSWESARR gives us insights on the different ways active and passive signals are influenced by snow over large areas. Active and Passive? Microwave Remote Sensing? Passive Systems* All materials can naturally emit electromagnetic waves* What is the cause?* Material above zero Kelvin will display some vibration or movement of particles* These moving, charged particles will induce electromagnetic waves* If we're careful, we can measure these waves with a radio wave measuring tool, or "radiometer"* Radiometers see emissions from many sources, but they're usually very weak* It's important to design a radiometer that (1) minimizes side lobes and (2) allows for averaging over the main beam* For this reason, radiometers often have low spatial resolution| ✏️ | Radiometers allow us to study earth materials through incoherent averaging of naturally emitted signals ||---------------|:----------------------------------------------------------------------------------------------------------| Active Systems* While radiometers generally measure natural electromagnetic waves, radars measure man-made electromagnetic waves* Transmit your own wave, and listen for the returns* The return of this signal is dependent on the surface and volume characteristics of the material it contacts| ✏️ | Synthetic aperture radar allows for high spatial resolution through processing of coherent signals ||---------------|:----------------------------------------------------------------------------------------------------------|
###Code
%%HTML
<style>
td { font-size: 15px }
th { font-size: 15px }
</style>
###Output
_____no_output_____
###Markdown
SWESARR SensorsSWESARR Frequencies, Polarization, and Bandwidth Specification | Center-Frequency (GHz) | Band | Sensor | Bandwidth (MHz) | Polarization || ---------------------- | ---------- | ------------ | --------------- | ------------ || 9.65 | X | SAR | 200 | VH and VV || 13.6 | Ku | SAR | 200 | VH and VV || 17.25 | Ku | SAR | 200 | VH and VV || 10.65 | X | Radiometer | 200 | H || 18.7 | K | Radiometer | 200 | H || 36.5 | Ka | Radiometer | 1,000 | H | SWESARR Instrument SWESARR Spatiotemporal Coverage* Currently, there are two primary dataset coverages * **2019**: 04 November through 06 November * **2020**: 10 February through 12 February* Below: radiometer coverage for all passes made between February 10 to February 12, 2020* SWESARR flights cover many snowpit locations over the Grand Mesa area as shown by the dots in blue Reading SWESARR Data- SWESARR's SAR data is organized with a common file naming convention for finding the time, location, and type of data- [Lets look at the prerelease data on its homepage](https://glihtdata.gsfc.nasa.gov/files/radar/SWESARR/prerelease/)*** Accessing Data: SAR SAR Data Example
###Code
# Import several libraries.
# comments to the right could be useful for local installation on Windows.
from shapely import speedups # https://www.lfd.uci.edu/~gohlke/pythonlibs/
speedups.disable() # <-- handle a potential error in cartopy
# downloader library
import requests # !conda install -c anaconda requests
# raster manipulation libraries
import rasterio # https://www.lfd.uci.edu/~gohlke/pythonlibs/
from osgeo import gdal # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import cartopy.crs as ccrs # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import rioxarray as rxr # !conda install -c conda-forge rioxarray
import xarray as xr # !conda install -c conda-forge xarray dask netCDF4 bottleneck
# plotting tools
from matplotlib import pyplot # !conda install matplotlib
import datashader as ds # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import hvplot.xarray # !conda install hvplot
# append the subfolders of the current working directory to pythons path
import os
import sys
for sd in ["data", "util"]:
    sys.path.append(os.path.join(os.getcwd(), "swesarr", sd))
from helper import gdal_corners, join_files, join_sar_radiom
###Output
_____no_output_____
###Markdown
Select your data
###Code
# select files to download
# SWESARR data website
source_repo = 'https://glihtdata.gsfc.nasa.gov/files/radar/SWESARR/prerelease/'
# Example flight line
flight_line = 'GRMCT2_31801_20007_016_200211_225_XX_01/'
# SAR files within this folder
data_files = [
'GRMCT2_31801_20007_016_200211_09225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_09225VH_XX_01.tif',
'GRMCT2_31801_20007_016_200211_13225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_13225VH_XX_01.tif',
'GRMCT2_31801_20007_016_200211_17225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_17225VH_XX_01.tif'
]
# store the location of the SAR tiles as they're located on the SWESARR data server
remote_tiles = [source_repo + flight_line + d for d in data_files]
# create local output data directory
output_dir = '/tmp/swesarr/data/'
try:
os.makedirs(output_dir)
except FileExistsError:
print('output directory prepared!')
# store individual TIF files locally on our computer / server
output_paths = [output_dir + d for d in data_files]
###Output
_____no_output_____
###Markdown
Download SAR data and place into data folder
###Code
## for each file selected, store the data locally
##
## only run this block if you want to store data on the current
## server/hard drive this notebook is located.
##
################################################################
for remote_tile, output_path in zip(remote_tiles, output_paths):
# download data
r = requests.get(remote_tile)
# Store data (~= 65 MB/file)
if r.status_code == 200:
with open(output_path, 'wb') as f:
f.write(r.content)
###Output
_____no_output_____
###Markdown
Merge SAR datasets into single xarray file
###Code
da = join_files(output_paths)
da
###Output
_____no_output_____
###Markdown
Plot data with hvplot
###Code
# Set clim directly:
clim=(-20,20)
cmap='gray'
crs = ccrs.UTM(zone='12n')
tiles='EsriImagery'
da.hvplot.image(x='x',y='y',groupby='band',cmap=cmap,clim=clim,rasterize=True,
xlabel='Longitude',ylabel='Latitude',
frame_height=500, frame_width=500,
xformatter='%.1f',yformatter='%.1f', crs=crs, tiles=tiles, alpha=0.8)
###Output
_____no_output_____
###Markdown
| 🎉 | Congratulations! You now know how to download and display a SWESARR SAR dataset ! | 🎉 || :--- | :--- | :--- | Radiometer Data Example * SWESARR's radiometer data is publicly available at NSIDC* [Radiometer Data v1 Available Here](https://nsidc.org/data/SNEX20_SWESARR_TB/versions/1)
###Code
import pandas as pd # !conda install pandas
import numpy as np # !conda install numpy
import xarray as xr # !conda install -c anaconda xarray
import hvplot # !conda install hvplot
import hvplot.pandas
import holoviews as hv # !conda install -c conda-forge holoviews
from holoviews.operation.datashader import datashade
from geopy.distance import distance #!conda install -c conda-forge geopy
###Output
_____no_output_____
###Markdown
Downloading SWESARR Radiometer Data with `wget`* If you are running this on the SnowEx Hackweek server, `wget` should be configured.* If you are using this tutorial on your local machine, you'll need `wget`. * Linux Users - You should be fine. This is likely baked into your operating systems. Congratulations! You chose correctly. * Apple Users - The author of this textbox has never used a Mac. There are many command-line suggestions online. `sudo brew install wget`, `sudo port install wget`, etc. Try searching online! * Windows Users - [Check out this tutorial, page 2](https://blogs.nasa.gov/swesarr/wp-content/uploads/sites/305/2020/10/how_to_download_SWESARR_radar_data.pdf) You'll need to download binaries for `wget`, and you should really make it an environment variable! Be sure to be diligent before installing anything to your computer. Regardless, fill in your NASA Earthdata Login credentials and follow along!
###Code
!wget --quiet https://n5eil01u.ecs.nsidc.org/SNOWEX/SNEX20_SWESARR_TB.001/2020.02.11/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKKa225H_v01.csv -O {output_dir}/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKuKa225H_v03.csv
###Output
_____no_output_____
###Markdown
Select an example radiometer data file
###Code
# use the file we downloaded with wget above
excel_path = f'{output_dir}/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKuKa225H_v03.csv'
# read data
radiom = pd.read_csv(excel_path)
###Output
_____no_output_____
###Markdown
Let's examine the radiometer data file's contents
###Code
radiom.hvplot.table(width=1100)
###Output
_____no_output_____
###Markdown
Plot radiometer data with hvplot
###Code
# create several series from pandas dataframe
lon_ser = pd.Series( radiom['Longitude (deg)'].to_list() * (3) )
lat_ser = pd.Series( radiom['Latitude (deg)'].to_list() * (3) )
tb_ser = pd.Series(
radiom['TB X (K)'].to_list() + radiom['TB K (K)'].to_list() +
radiom['TB Ka (K)'].to_list(), name="Tb"
)
# get series length, create IDs for plotting
sl = len(radiom['TB X (K)'])
id_ser = pd.Series(
['X-band']*sl + ['K-band']*sl + ['Ka-band']*sl, name="ID"
)
frame = {'Longitude (deg)' : lon_ser, 'Latitude (deg)' : lat_ser,
'TB' : tb_ser, 'ID' : id_ser}
radiom_p = pd.DataFrame(frame)
del sl, lon_ser, lat_ser, tb_ser, id_ser, frame
radiom_p.hvplot.points('Longitude (deg)', 'Latitude (deg)', groupby='ID', geo=True, color='TB', alpha=1,
tiles='ESRI', height=400, width=500)
###Output
_____no_output_____
###Markdown
| 🎉 | Congratulations! You now know how to download and display a SWESARR radiometer dataset! | 🎉 || :--- | :--- | :--- | SAR and Radiometer Together * The novelty of SWESARR lies in its colocated SAR and radiometer systems* Let's try filtering the SAR dataset and plotting both datasets together* For this session, I've made the code a function. We can look at it together by clicking __[here](swesarr/util/helper.py)__
###Code
data_p, data_ser = join_sar_radiom(da, radiom)
data_p.hvplot.points('Longitude (deg)', 'Latitude (deg)', groupby='ID', geo=True, color='Measurements', alpha=1,
tiles='ESRI', height=400, width=500)
###Output
_____no_output_____
###Markdown
Exercise Exercise: Plot a time-series visualization of the filtered SAR channels from the output of the join_sar_radiom() function Plot a time-series visualization of the radiometer channels from the output of the join_sar_radiom() function Hint: the data series variable ( data_ser ) is a pandas data series. Use some of the methods shown above to read and plot the data!
###Code
### Your Code Here #############################################################################################################
#
# Two of Many Options:
# 1.) Go the matplotlib route
# a.) Further reading below:
# https://matplotlib.org/stable/tutorials/introductory/pyplot.html
#
# 2.) Try using hvplot tools if you like
# a.) Further reading below:
# https://hvplot.holoviz.org/user_guide/Plotting.html
#
# Remember, if you don't use a library all of the time, you'll end up <search engine of your choice>-ing it. Go crazy!
#
################################################################################################################################
# configure some inline parameters to make things pretty / readable if you'd like to go with matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (16, 9) # (w, h)
###Output
_____no_output_____
###Markdown
SWESARR Tutorial---Introduction Objectives: This is a 30-minute tutorial where we will ... Introduce SWESARR Briefly introduce active and passive microwave remote sensing Learn how to access, filter, and visualize SWESARR data SWESARR Tutorial Quick References SWESARR SAR Data Pre-release FTP Server SWESARR Radiometer Data, SnowEx20, v1 NSIDC Datasets SWESARR Blogspot What is SWESARR? Airborne sensor system measuring active and passive microwave measurements Colocated measurements are taken simultaneously using an ultra-wideband antennaSWESARR gives us insights on the different ways active and passive signals are influenced by snow over large areas. Active and Passive? Microwave Remote Sensing? Passive Systems* All materials can naturally emit electromagnetic waves* What is the cause?* Material above zero Kelvin will display some vibration or movement of particles* These moving, charged particles will induce electromagnetic waves* If we're careful, we can measure these waves with a radio wave measuring tool, or "radiometer"* Radiometers see emissions from many sources, but they're usually very weak* It's important to design a radiometer that (1) minimizes side lobes and (2) allows for averaging over the main beam* For this reason, radiometers often have low spatial resolution| ✏️ | Radiometers allow us to study earth materials through incoherent averaging of naturally emitted signals ||---------------|:----------------------------------------------------------------------------------------------------------| Active Systems* While radiometers generally measure natural electromagnetic waves, radars measure man-made electromagnetic waves* Transmit your own wave, and listen for the returns* The return of this signal is dependent on the surface and volume characteristics of the material it contacts| ✏️ | Synthetic aperture radar allows for high spatial resolution through processing of coherent signals ||---------------|:----------------------------------------------------------------------------------------------------------|
###Code
%%HTML
<style>
td { font-size: 15px }
th { font-size: 15px }
</style>
###Output
_____no_output_____
###Markdown
SWESARR SensorsSWESARR Frequencies, Polarization, and Bandwidth Specification | Center-Frequency (GHz) | Band | Sensor | Bandwidth (MHz) | Polarization || ---------------------- | ---------- | ------------ | --------------- | ------------ || 9.65 | X | SAR | 200 | VH and VV || 10.65 | X | Radiometer | 200 | H || 13.6 | Ku | SAR | 200 | VH and VV || 17.25 | Ku | SAR | 200 | VH and VV || 18.7 | K | Radiometer | 200 | H || 36.5 | Ka | Radiometer | 1,000 | H | SWESARR Coverage* Below: radiometer coverage for all passes made between February 10 to February 12, 2020* SWESARR flights cover many snowpit locations over the Grand Mesa area as shown by the dots in blue Reading SWESARR Data- SWESARR's SAR data is organized with a common file naming convention for finding the time, location, and type of data- [Lets look at the prerelease data on its homepage](https://glihtdata.gsfc.nasa.gov/files/radar/SWESARR/prerelease/)*** Accessing Data: SAR SAR Data Example
###Code
# Import several libraries.
# comments to the right could be useful for local installation on Windows.
from shapely import speedups # https://www.lfd.uci.edu/~gohlke/pythonlibs/
speedups.disable() # <-- handle a potential error in cartopy
import requests # !conda install -c anaconda requests
# raster manipulation libraries
import rasterio # https://www.lfd.uci.edu/~gohlke/pythonlibs/
from osgeo import gdal # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import cartopy.crs as ccrs # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import rioxarray as rxr # !conda install -c conda-forge rioxarray
import xarray as xr # !conda install -c conda-forge xarray dask netCDF4 bottleneck
# plotting tools
from matplotlib import pyplot # !conda install matplotlib
import datashader as ds # https://www.lfd.uci.edu/~gohlke/pythonlibs/
import hvplot.xarray # !conda install hvplot
# append the subfolders of the current working directory to pythons path
import os
import sys
swesarr_subdirs = ["data", "util"]
tmp = [sys.path.append(os.getcwd() + "/swesarr/" + sd) for sd in swesarr_subdirs]
del tmp # suppress Jupyter notebook output, delete variable
from helper import gdal_corners, join_files, join_sar_radiom
###Output
_____no_output_____
###Markdown
Select your data
###Code
# select files to download
# SWESARR data website
source_repo = 'https://glihtdata.gsfc.nasa.gov/files/radar/SWESARR/prerelease/'
# Example flight line
flight_line = 'GRMCT2_31801_20007_016_200211_225_XX_01/'
# SAR files within this folder
data_files = [
'GRMCT2_31801_20007_016_200211_09225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_09225VH_XX_01.tif',
'GRMCT2_31801_20007_016_200211_13225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_13225VH_XX_01.tif',
'GRMCT2_31801_20007_016_200211_17225VV_XX_01.tif',
'GRMCT2_31801_20007_016_200211_17225VH_XX_01.tif'
]
# store the location of the SAR tiles as they're located on the SWESARR data server
remote_tiles = [source_repo + flight_line + d for d in data_files]
# create local output data directory
output_dir = '/tmp/swesarr/data/'
try:
os.makedirs(output_dir)
except FileExistsError:
print('output directory prepared!')
# store individual TIF files locally on our computer / server
output_paths = [output_dir + d for d in data_files]
###Output
_____no_output_____
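###Markdown
The file names above follow the naming convention mentioned earlier; splitting on underscores already exposes the useful pieces. A small sketch (interpreting most fields is left open here -- only the date and the trailing polarization are confirmed by the flight information above):
###Code
# split a SWESARR SAR file name into its underscore-separated fields
fname = data_files[0]
fields = fname.replace('.tif', '').split('_')
print(fields)
# '200211' is the acquisition date (11 Feb 2020); tokens like '09225VV' end with the polarization
date_field, freq_pol = fields[4], fields[5]
print(date_field, freq_pol[-2:])
###Output
_____no_output_____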
###Markdown
Download SAR data and place into data folder
###Code
## for each file selected, store the data locally
##
## only run this block if you want to store data on the current
## server/hard drive this notebook is located.
##
################################################################
for remote_tile, output_path in zip(remote_tiles, output_paths):
# download data
r = requests.get(remote_tile)
# Store data (~= 65 MB/file)
if r.status_code == 200:
with open(output_path, 'wb') as f:
f.write(r.content)
###Output
_____no_output_____
###Markdown
Merge the SAR datasets into a single xarray DataArray
###Code
da = join_files(output_paths)
da
###Output
_____no_output_____
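###Markdown
For reference, a minimal sketch of what a helper like `join_files` might do (the actual implementation lives in `swesarr/util/helper.py`; this version assumes all tiles share the same grid): open each GeoTIFF with rioxarray and stack the rasters along a new `band` dimension labelled with the file names.
###Code
import os
import xarray as xr
import rioxarray as rxr

def join_files_sketch(paths):
    # open each single-band GeoTIFF and drop its length-1 'band' dimension
    rasters = [rxr.open_rasterio(p).squeeze('band', drop=True) for p in paths]
    names = [os.path.basename(p) for p in paths]
    # stack along a new 'band' dimension labelled with the file names
    return xr.concat(rasters, dim=xr.DataArray(names, dims='band', name='band'))
###Output
_____no_output_____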
###Markdown
Plot data with hvplot
###Code
# Set clim directly:
clim=(-20,20)
cmap='gray'
crs = ccrs.UTM(zone='12n')
# basemap tiles for hvplot ('OSM' is another option)
tiles='EsriImagery'
da.hvplot.image(x='x',y='y',groupby='band',cmap=cmap,clim=clim,rasterize=True,
xlabel='Longitude',ylabel='Latitude',
frame_height=500, frame_width=500,
xformatter='%.1f',yformatter='%.1f', crs=crs, tiles=tiles, alpha=0.8)
###Output
_____no_output_____
###Markdown
| 🎉 | Congratulations! You now know how to download and display a SWESARR SAR dataset! | 🎉 || :--- | :--- | :--- | Radiometer Data Example * SWESARR's radiometer data is publicly available at NSIDC* [Radiometer Data v1 Available Here](https://nsidc.org/data/SNEX20_SWESARR_TB/versions/1)
###Code
import pandas as pd # !conda install pandas
import numpy as np # !conda install numpy
import xarray as xr # !conda install -c anaconda xarray
import hvplot # !conda install hvplot
import hvplot.pandas
import holoviews as hv # !conda install -c conda-forge holoviews
from holoviews.operation.datashader import datashade
from geopy.distance import distance #!conda install -c conda-forge geopy
###Output
_____no_output_____
###Markdown
Downloading SWESARR Radiometer Data with `wget`* If you are running this on the SnowEx Hackweek server, `wget` should be configured.* If you are using this tutorial on your local machine, you'll need `wget`. * Linux Users - You should be fine. This is likely baked into your operating systems. Congratulations! You chose correctly. * Apple Users - The author of this textbox has never used a Mac. There are many command-line suggestions online. `sudo brew install wget`, `sudo port install wget`, etc. Try searching online! * Windows Users - [Check out this tutorial, page 2](https://blogs.nasa.gov/swesarr/wp-content/uploads/sites/305/2020/10/how_to_download_SWESARR_radar_data.pdf) You'll need to download binaries for `wget`, and you should really make it an environment variable! Be sure to be diligent before installing anything to your computer. Regardless, fill in your NASA Earthdata Login credentials and follow along!
###Code
!wget --quiet https://n5eil01u.ecs.nsidc.org/SNOWEX/SNEX20_SWESARR_TB.001/2020.02.11/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKKa225H_v01.csv -O {output_dir}/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKuKa225H_v03.csv
###Output
_____no_output_____
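###Markdown
The `wget` call above only succeeds if it can authenticate against NASA Earthdata. One common approach (an assumption here -- follow the NSIDC/Earthdata documentation for your setup) is to store the credentials in `~/.netrc` so that `wget` can pick them up:
###Code
# sketch only: write an Earthdata entry to ~/.netrc and restrict its permissions.
# Replace the placeholders with your own Earthdata username and password before running.
# !echo "machine urs.earthdata.nasa.gov login <USERNAME> password <PASSWORD>" >> ~/.netrc && chmod 0600 ~/.netrc
###Output
_____no_output_____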
###Markdown
Select an example radiometer data file
###Code
# use the file we downloaded with wget above
excel_path = f'{output_dir}/SNEX20_SWESARR_TB_GRMCT2_13801_20007_000_200211_XKuKa225H_v03.csv'
# read data
radiom = pd.read_csv(excel_path)
###Output
_____no_output_____
###Markdown
Let's examine the radiometer data file's content
###Code
radiom.hvplot.table(width=1100)
###Output
_____no_output_____
###Markdown
Plot radiometer data with hvplot
###Code
# create several series from pandas dataframe
lon_ser = pd.Series( radiom['Longitude (deg)'].to_list() * (3) )
lat_ser = pd.Series( radiom['Latitude (deg)'].to_list() * (3) )
tb_ser = pd.Series(
radiom['TB X (K)'].to_list() + radiom['TB K (K)'].to_list() +
radiom['TB Ka (K)'].to_list(), name="Tb"
)
# get series length, create IDs for plotting
sl = len(radiom['TB X (K)'])
id_ser = pd.Series(
['X-band']*sl + ['K-band']*sl + ['Ka-band']*sl, name="ID"
)
frame = {'Longitude (deg)' : lon_ser, 'Latitude (deg)' : lat_ser,
'TB' : tb_ser, 'ID' : id_ser}
radiom_p = pd.DataFrame(frame)
del sl, lon_ser, lat_ser, tb_ser, id_ser, frame
radiom_p.hvplot.points('Longitude (deg)', 'Latitude (deg)', groupby='ID', geo=True, color='TB', alpha=1,
tiles='ESRI', height=400, width=500)
###Output
_____no_output_____
###Markdown
| 🎉 | Congratulations! You now know how to download and display a SWESARR radiometer dataset! | 🎉 || :--- | :--- | :--- | SAR and Radiometer Together * The novelty of SWESARR lies in its colocated SAR and radiometer systems* Let's try filtering the SAR dataset and plotting both datasets together* For this session, I've made the code a function. We can look at it together by clicking __[here](swesarr/util/helper.py)__
###Code
data_p, data_ser = join_sar_radiom(da, radiom)
data_p.hvplot.points('Longitude (deg)', 'Latitude (deg)', groupby='ID', geo=True, color='Measurements', alpha=1,
tiles='ESRI', height=400, width=500)
###Output
_____no_output_____
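###Markdown
A rough sketch of the idea behind `join_sar_radiom` (the real implementation is in `swesarr/util/helper.py`; the details below are assumptions): sample the SAR raster near each radiometer footprint and stack everything into one long table for plotting.
###Code
import pandas as pd

def join_sar_radiom_sketch(sar_da, radiom_df):
    rows = []
    for _, rec in radiom_df.iterrows():
        # nearest-neighbour lookup in the SAR raster
        # (assumes the SAR x/y coordinates are in the same lon/lat system as the radiometer)
        pix = sar_da.sel(x=rec['Longitude (deg)'], y=rec['Latitude (deg)'], method='nearest')
        for band in sar_da['band'].values:
            rows.append({'Longitude (deg)': rec['Longitude (deg)'],
                         'Latitude (deg)': rec['Latitude (deg)'],
                         'Measurements': float(pix.sel(band=band)),
                         'ID': str(band)})
    return pd.DataFrame(rows)
###Output
_____no_output_____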
###Markdown
Exercise: Plot a time-series visualization of the filtered SAR channels from the output of the join_sar_radiom() function. Plot a time-series visualization of the radiometer channels from the output of the join_sar_radiom() function. Hint: the data series variable (data_ser) is a pandas data series. Use some of the methods shown above to read and plot the data!
###Code
### Your Code Here #############################################################################################################
#
# Two of Many Options:
# 1.) Go the matplotlib route
# a.) Further reading below:
# https://matplotlib.org/stable/tutorials/introductory/pyplot.html
#
# 2.) Try using hvplot tools if you like
# a.) Further reading below:
# https://hvplot.holoviz.org/user_guide/Plotting.html
#
# Remember, if you don't use a library all of the time, you'll end up <search engine of your choice>-ing it. Go crazy!
#
################################################################################################################################
# configure some inline parameters to make things pretty / readable if you'd like to go with matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (16, 9) # (w, h)
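# --- one possible solution sketch (commented out; column names taken from the plotting call above,
# the rest is an assumption) ---------------------------------------------------------------------
# data_p from join_sar_radiom() carries a 'Measurements' column and an 'ID' column naming each
# SAR/radiometer channel, so plotting every channel against its along-track sample index gives a
# quick time-series-style view:
# for chan, grp in data_p.groupby('ID'):
#     plt.plot(grp['Measurements'].to_numpy(), label=chan)
# plt.legend(); plt.xlabel('along-track sample'); plt.ylabel('measurement'); plt.show()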
###Output
_____no_output_____ |
Analysis/6.7_DataAnalysis_AtkMdf_PCA.ipynb | ###Markdown
Forward (attacker) data analysis - before removing the Follower feature. Load the working data
###Code
df_pos = pd.read_csv(r'C:\Users\Gk\Documents\dev\data\LinearRegression_Football_data\df_pos.csv', encoding='utf-8-sig', index_col=0)
###Output
_____no_output_____
###Markdown
Position Rounding
###Code
df_pos.position = df_pos.position.round()
df_pos.position.unique()
df_atk = df_pos[df_pos.position == 4].append(df_pos[df_pos.position == 2])
df_atk.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Check the correlations
###Code
# df_atk.corr()[df_atk.corr() > 0.7].to_csv('df_atk_corr.csv', encoding='utf-8-sig')
df_atk.corr()[df_atk.corr() > 0.6]
pd.read_csv("df_atk_corrc.csv", encoding='utf-8', index_col=0)
###Output
_____no_output_____
###Markdown
Find a suitable number of components (n_components)
###Code
df_for_pca = df_atk[['position', 'shots_total', 'shots_on', 'goals_total', 'goals_conceded', 'goals_assists', 'passes_key', \
'tackles_total', 'tackles_blocks', 'tackles_interceptions', 'duels_total', 'duels_won', 'dribbles_attempts', \
'dribbles_success', 'penalty_saved', 'games_appearences', 'substitutes_in', 'substitutes_bench']]
len(df_for_pca.columns)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
data_rescaled = scaler.fit_transform(df_for_pca)
from sklearn.decomposition import PCA
pca = PCA().fit(data_rescaled)
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (12,6)
fig, ax = plt.subplots()
xi = np.arange(1, 19, step=1)
y = np.cumsum(pca.explained_variance_ratio_)
plt.ylim(0.0,1.1)
plt.plot(xi, y, marker='o', linestyle='--', color='b')
plt.xlabel('Number of Components')
plt.xticks(np.arange(0, 19, step=1)) #change from 0-based array index to 1-based human-readable label
plt.ylabel('Cumulative variance (%)')
plt.title('The number of components needed to explain variance')
plt.axhline(y=0.95, color='r', linestyle='-')
plt.text(0.5, 0.85, '95% cut-off threshold', color = 'red', fontsize=16)
ax.grid(axis='x')
plt.show()
###Output
_____no_output_____
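###Markdown
Complementing the plot above, the same 95% threshold can be read off programmatically (a short sketch reusing the fitted `pca` object):
###Code
import numpy as np

# smallest number of components whose cumulative explained variance reaches 95%
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_components_95 = int(np.argmax(cumvar >= 0.95)) + 1
print(n_components_95, cumvar[n_components_95 - 1])
###Output
_____no_output_____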
###Markdown
Based on the results above, run the first PCA - extract 8 principal components from the full data
###Code
data = PCA(n_components=8).fit_transform(df_for_pca)
data
df_pca_1 = pd.DataFrame(data, columns = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
df_pca_1
df_pca_1.corr()[df_pca_1.corr()>0.7]
pca_cols = list(df_for_pca.columns)
npca_cols = df_atk.columns.tolist()
npca_features = [item for item in npca_cols if item not in pca_cols]
npca_features
len(npca_features)
df_ols = pd.concat([df_atk[npca_features].reset_index(drop=True), df_pca_1.reset_index(drop=True)], axis=1)
df_ols = df_ols.drop('player_name', axis=1)
df_ols
"value ~ scale(age) + \
scale(height) + \
scale(weight) + \
scale(rating) + \
scale(follower) + \
scale(passes_total) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(cards_yellowred) + \
scale(cards_red) + \
scale(penalty_won) + \
scale(penalty_commited) + \
scale(penalty_success) + \
scale(penalty_missed) + \
scale(games_lineups) + \
scale(substitutes_out) + \
scale(games_played) + \
scale(a) + \
scale(b) + \
scale(c) + \
scale(d) + \
scale(e) + \
scale(f) + \
scale(g) + \
scale(h)"
###Output
_____no_output_____
###Markdown
First PCA OLS - remove features based on result.pvalues
###Code
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_accuracy) + \
scale(penalty_won) + \
scale(games_played) + \
scale(b) + \
scale(e)"
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
pred = result.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result.rsquared, rsquared))
print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result.mse_total, mse))
print("------------------------------------------------------------------")
print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
result.pvalues.sort_values(ascending=False)
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_accuracy) + \
scale(penalty_won) + \
scale(games_played) + \
scale(b) + \
scale(follower)"
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
pred = result.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result.rsquared, rsquared))
print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result.mse_total, mse))
print("------------------------------------------------------------------")
print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
result.pvalues.sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Because `result` was overwritten by the model fitted inside the validation loop below, the printed p-values came from a fold model - the first OLS is therefore in error. Second PCA OLS - remove features based on the p-values in the summary
###Code
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
# df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_total) + \
scale(passes_accuracy) + \
scale(penalty_won) + \
scale(games_played) + \
scale(b) + \
scale(e)"
model = sm.OLS.from_formula(formula, data=df_ols)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result_p = model.fit()
pred = result_p.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result_p.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result_p.rsquared, rsquared))
# print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result.mse_total, mse))
# print("------------------------------------------------------------------")
# print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
print("모델 성능 : {}".format(scores_rm[0].mean()))
result.pvalues.sort_values(ascending=False)
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
# df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_total) + \
scale(passes_accuracy) + \
scale(penalty_won) + \
scale(games_played) + \
scale(b) + \
scale(e) + \
scale(follower)"
model = sm.OLS.from_formula(formula, data=df_ols)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result_p = model.fit()
pred = result_p.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result_p.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result_p.rsquared, rsquared))
# print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result.mse_total, mse))
# print("------------------------------------------------------------------")
# print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
print("모델 성능 : {}".format(scores_rm[0].mean()))
###Output
OLS Regression Results
==============================================================================
Dep. Variable: value R-squared: 0.433
Model: OLS Adj. R-squared: 0.415
Method: Least Squares F-statistic: 23.30
Date: Mon, 13 Jul 2020 Prob (F-statistic): 2.26e-26
Time: 23:12:28 Log-Likelihood: -1089.5
No. Observations: 253 AIC: 2197.
Df Residuals: 244 BIC: 2229.
Df Model: 8
Covariance Type: nonrobust
==========================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------------
Intercept 36.6996 1.149 31.943 0.000 34.437 38.963
scale(age) -6.4338 1.353 -4.755 0.000 -9.099 -3.769
scale(passes_total) 2.6906 1.576 1.708 0.089 -0.413 5.794
scale(passes_accuracy) 4.7643 1.489 3.199 0.002 1.831 7.698
scale(penalty_won) 4.3120 1.314 3.281 0.001 1.723 6.901
scale(games_played) 11.6297 1.485 7.830 0.000 8.704 14.555
scale(b) 5.2556 1.479 3.554 0.000 2.343 8.168
scale(e) 1.8625 1.252 1.487 0.138 -0.604 4.329
scale(follower) 6.8514 1.352 5.069 0.000 4.189 9.514
==============================================================================
Omnibus: 84.420 Durbin-Watson: 1.986
Prob(Omnibus): 0.000 Jarque-Bera (JB): 333.365
Skew: 1.336 Prob(JB): 4.08e-73
Kurtosis: 7.948 Cond. No. 2.70
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
학습 R2 = 0.46575156, 검증 R2 = 0.14268108
학습 R2 = 0.41065787, 검증 R2 = 0.60210122
학습 R2 = 0.43701074, 검증 R2 = 0.29762895
학습 R2 = 0.44088357, 검증 R2 = 0.04997222
학습 R2 = 0.37300284, 검증 R2 = 0.51245576
학습 R2 = 0.40107745, 검증 R2 = 0.57565769
학습 R2 = 0.46893711, 검증 R2 = -0.37128902
학습 R2 = 0.44948540, 검증 R2 = 0.05901814
학습 R2 = 0.45928319, 검증 R2 = 0.16517133
학습 R2 = 0.45708529, 검증 R2 = -0.27144010
모델 성능 : 0.17619572541827272
###Markdown
###Code
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(height) + \
scale(passes_accuracy) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(a) + \
scale(b) + \
scale(e)"
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
pred = result.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
# print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result.rsquared, rsquared))
# print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result.mse_total, mse))
# print("------------------------------------------------------------------")
# print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
print("모델 성능 : {}".format(scores_rm[0].mean()))
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_accuracy) + \
scale(penalty_won) + \
scale(games_played) + \
scale(b) + \
scale(follower)"
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result_p = model.fit()
pred = result_p.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result_p.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
# print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result_p.rsquared, rsquared))
# print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result_p.mse_total, mse))
# print("------------------------------------------------------------------")
# print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
print("모델 성능 : {}".format(scores_rm[0].mean()))
result.pvalues.sort_values(ascending=False)
model_full = sm.OLS.from_formula(
"value ~ scale(age) + \
scale(height) + \
scale(passes_accuracy) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(a) + \
scale(b) + \
scale(e) + \
scale(follower)", data=df_ols)
model_reduced = sm.OLS.from_formula(
"value ~ scale(age) + \
scale(height) + \
scale(passes_accuracy) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(a) + \
scale(b) + \
scale(e)", data=df_ols)
sm.stats.anova_lm(model_reduced.fit(), model_full.fit())
model_full = sm.OLS.from_formula(
"value ~ scale(age) + \
scale(height) + \
scale(passes_accuracy) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(a) + \
scale(b) + \
scale(e) + \
scale(follower)", data=df_ols)
result = model_full.fit()
sm.stats.anova_lm(result, typ=2)
###Output
_____no_output_____
###Markdown
Third PCA - domain-based principal component extraction guided by the correlations
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
df_for_pca
###Output
_____no_output_____
###Markdown
Feature groups with high correlations: 1. position, goals_total 2. shots_total, shots_on, goals_total 3. goals_conceded, penalty_saved 4. goals_assists, passes_key 5. tackles_total, tackles_blocks, tackles_interceptions 6. duels_total, duels_won 7. dribbles_attempts, dribbles_success 8. games_appearences, substitutes_in, substitutes_bench
###Code
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
data_rescaled = pd.DataFrame(scaler.fit_transform(df_for_pca))
# # 1. position, goals_total
df_to_pca = df_for_pca[['position', 'goals_total']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_pg = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['position_goalsTotal'])
df_pg
# 1_2. shots_total, shots_on, goals_total
df_to_pca = df_for_pca[['position', 'shots_total', 'shots_on', 'goals_total']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_psg = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['position_shotsOnTotal_goalsTotal'])
df_psg
# # 2. shots_total, shots_on, goals_total
df_to_pca = df_for_pca[['shots_total', 'shots_on', 'goals_total']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_sg = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['shotsOnTotal_goalsTotal'])
df_sg
# 3. goals_conceded, penalty_saved
df_to_pca = df_for_pca[['goals_conceded', 'penalty_saved']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_gpe = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['goalsConceded_penaltySaved'])
df_gpe
# 4. goals_assists, passes_key
df_to_pca = df_for_pca[['goals_assists', 'passes_key']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_gpa = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['goalsAssists_passesKey'])
df_gpa
# 5. tackles_total, tackles_blocks, tackles_interceptions
df_to_pca = df_for_pca[['tackles_total', 'tackles_blocks', 'tackles_interceptions']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_t = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['tackles'])
df_t
# 6. duels_total, duels_won
df_to_pca = df_for_pca[['duels_total', 'duels_won']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_du = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['duels'])
df_du
# 7. dribbles_attempts, dribbles_success
df_to_pca = df_for_pca[['dribbles_attempts', 'dribbles_success']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_dr = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['dribbles'])
df_dr
# 8. games_appearences, substitutes_in, substitutes_bench
df_to_pca = df_for_pca[['games_appearences', 'substitutes_in', 'substitutes_bench']]
data_rescaled = MinMaxScaler().fit_transform(df_to_pca)
df_gs = pd.DataFrame(data = PCA(n_components=1).fit_transform(data_rescaled), columns=['gamesAppearences_substitutes'])
df_gs
df_pca_2 = pd.concat([df_pg, df_sg, df_gpe, df_gpa, df_t, df_du, df_dr, df_gs], axis=1)
df_pca_2
df_pca_2.corr()[df_pca_2.corr() > 0.7]
###Output
_____no_output_____
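###Markdown
The repeated scale-then-PCA blocks above could be wrapped in a small helper; a minimal sketch (equivalent to the cells above, shown only to reduce repetition):
###Code
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def pca_component(df, cols, name):
    # scale the selected columns to [0, 1] and collapse them into one principal component
    scaled = MinMaxScaler().fit_transform(df[cols])
    return pd.DataFrame(PCA(n_components=1).fit_transform(scaled), columns=[name])

# example: rebuild the tackles component
df_t_alt = pca_component(df_for_pca, ['tackles_total', 'tackles_blocks', 'tackles_interceptions'], 'tackles')
###Output
_____no_output_____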
###Markdown
Principal components were extracted with PCA, but goals_total went into two groups, leaving those components highly correlated. Run OLS first; later, merge the two into a single component and rerun OLS
###Code
df_pca_3 = pd.concat([df_psg, df_gpe, df_gpa, df_t, df_du, df_dr, df_gs], axis=1)
df_pca_3
df_pca_3.corr()[df_pca_3.corr() > 0.7]
pca_cols = list(df_for_pca.columns)
npca_cols = df_pos.columns.tolist()
npca_features = [item for item in npca_cols if item not in pca_cols]
len(pca_cols), len(npca_cols), len(npca_features)
df_ols = pd.concat([df_atk[npca_features].reset_index(drop=True), df_pca_3.reset_index(drop=True)], axis=1)
df_ols = df_ols.drop('player_name', axis=1)
df_ols
df_ols.columns
"value ~ scale(age) + \
scale(height) + \
scale(weight) + \
scale(rating) + \
scale(follower) + \
scale(passes_total) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(cards_yellowred) + \
scale(cards_red) + \
scale(penalty_won) + \
scale(penalty_commited) + \
scale(penalty_success) + \
scale(penalty_missed) + \
scale(games_lineups) + \
scale(substitutes_out) + \
scale(games_played) + \
scale(position_shotsOnTotal_goalsTotal) + \
scale(goalsConceded_penaltySaved) + \
scale(goalsAssists_passesKey) + \
scale(tackles) + \
scale(duels) + \
scale(dribbles) + \
scale(gamesAppearences_substitutes)"
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
# df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(position_shotsOnTotal_goalsTotal) + \
scale(goalsAssists_passesKey) + \
scale(dribbles) + \
scale(gamesAppearences_substitutes)"
model = sm.OLS.from_formula(formula, data=df_ols)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result_p = model.fit()
pred = result_p.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result_p.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result_p.rsquared, rsquared))
# print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result_p.mse_total, mse))
# print("------------------------------------------------------------------")
# print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
print("모델 성능 : {}".format(scores_rm[0].mean()))
model5 = [[-0.05065965, 0.56461972, -0.21390691, -0.38217239, 0.39463493, 0.40347619, 0.54433083, -0.04739948, 0.24228798, 0.02932993],
[0.00849597, 0.63221316, 0.01726316, -0.16063964, 0.52578786, 0.57285031, -0.39533457, 0.01245289, 0.26820445, 0.14063162]]
result.pvalues.sort_values(ascending=False)
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(position_shotsOnTotal_goalsTotal) + \
scale(goalsAssists_passesKey) + \
scale(gamesAppearences_substitutes) + \
scale(follower)"
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result_p = model.fit()
pred = result_p.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result_p.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result_p.rsquared, rsquared))
# print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result_p.mse_total, mse))
# print("------------------------------------------------------------------")
# print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
print("모델 성능 : {}".format(scores_rm[0].mean()))
result.pvalues.sort_values(ascending=False)
model_full = sm.OLS.from_formula(
"value ~ scale(age) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(position_shotsOnTotal_goalsTotal) + \
scale(goalsAssists_passesKey) + \
scale(dribbles) + \
scale(gamesAppearences_substitutes) + \
scale(follower)", data=df_ols)
model_reduced = sm.OLS.from_formula(
"value ~ scale(age) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(position_shotsOnTotal_goalsTotal) + \
scale(goalsAssists_passesKey) + \
scale(dribbles) + \
scale(gamesAppearences_substitutes)", data=df_ols)
sm.stats.anova_lm(model_reduced.fit(), model_full.fit())
model_full = sm.OLS.from_formula(
"value ~ scale(age) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(fouls_committed) + \
scale(cards_yellow) + \
scale(penalty_won) + \
scale(games_played) + \
scale(position_shotsOnTotal_goalsTotal) + \
scale(goalsAssists_passesKey) + \
scale(dribbles) + \
scale(gamesAppearences_substitutes) + \
scale(follower)", data=df_ols)
result = model_full.fit()
sm.stats.anova_lm(result, typ=2)
###Output
_____no_output_____
###Markdown
As in the first PCA above, proceed with OLS using the p-values predicted inside the validation loop
###Code
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_total) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(penalty_won) + \
scale(games_played) + \
scale(shotsOnTotal_goalsTotal) + \
scale(dribbles) + \
scale(gamesAppearences_substitutes)"
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
pred = result.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
# print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result.rsquared, rsquared))
# print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result.mse_total, mse))
# print("------------------------------------------------------------------")
print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
result.pvalues.sort_values(ascending=False)
from sklearn.model_selection import train_test_split
dfX = df_ols.drop(['value'], axis=1)
dfy = df_ols['value']
df = pd.concat([dfX, dfy], axis=1)
df_train, df_test = train_test_split(df, test_size=0.3, random_state=0)
formula = "value ~ scale(age) + \
scale(passes_total) + \
scale(passes_accuracy) + \
scale(fouls_drawn) + \
scale(penalty_won) + \
scale(games_played) + \
scale(shotsOnTotal_goalsTotal) + \
scale(dribbles) + \
scale(gamesAppearences_substitutes) + \
scale(follower)"
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
print(result.summary())
##############################################################################
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
z = 10
scores_rm = np.zeros([2, z])
cv = KFold(z, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df_ols)):
df_train = df_ols.iloc[idx_train]
df_test = df_ols.iloc[idx_test]
model = sm.OLS.from_formula(formula, data=df_train)
result = model.fit()
pred = result.predict(df_test)
rsquared = r2_score(df_test.value, pred)
mse = mean_squared_error(df_test.value, pred)
# pred = result.predict(df_test)
# rss = ((df_test.value - pred) ** 2).sum()
# tss = ((df_test.value - df_test.value.mean())** 2).sum()
# rsquared = 1 - rss / tss
scores_rm[0, i] = rsquared
scores_rm[1, i] = mse
# print("학습 R2 = {:.8f}, 검증 R2 = {:.8f}".format(result.rsquared, rsquared))
# print("학습 mse = {:.8f}, 검증 R2 = {:.8f}".format(result.mse_total, mse))
# print("------------------------------------------------------------------")
print("모델 성능 : {}, 모델 mse : {}".format(scores_rm[0].mean(), scores_rm[1].mean()))
###Output
OLS Regression Results
==============================================================================
Dep. Variable: value R-squared: 0.478
Model: OLS Adj. R-squared: 0.447
Method: Least Squares F-statistic: 15.23
Date: Thu, 09 Jul 2020 Prob (F-statistic): 4.22e-19
Time: 19:44:29 Log-Likelihood: -759.78
No. Observations: 177 AIC: 1542.
Df Residuals: 166 BIC: 1576.
Df Model: 10
Covariance Type: nonrobust
=======================================================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------------------------------
Intercept 37.1497 1.374 27.042 0.000 34.437 39.862
scale(age) -7.5506 1.767 -4.273 0.000 -11.040 -4.062
scale(passes_total) 4.4843 2.215 2.025 0.045 0.111 8.857
scale(passes_accuracy) 5.7630 2.346 2.457 0.015 1.132 10.394
scale(fouls_drawn) -4.1439 1.701 -2.436 0.016 -7.502 -0.785
scale(penalty_won) 5.3687 1.733 3.098 0.002 1.947 8.791
scale(games_played) 9.1460 2.106 4.343 0.000 4.988 13.304
scale(shotsOnTotal_goalsTotal) 6.9837 2.297 3.040 0.003 2.448 11.520
scale(dribbles) 3.8450 1.766 2.177 0.031 0.357 7.333
scale(gamesAppearences_substitutes) -4.6006 2.406 -1.912 0.058 -9.351 0.150
scale(follower) 6.3238 1.744 3.627 0.000 2.881 9.767
==============================================================================
Omnibus: 65.831 Durbin-Watson: 2.131
Prob(Omnibus): 0.000 Jarque-Bera (JB): 266.263
Skew: 1.383 Prob(JB): 1.52e-58
Kurtosis: 8.334 Cond. No. 3.91
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
모델 성능 : 0.21910097970811243, 모델 mse : 374.33224076222535
|
figures/supplement/sfig6.ipynb | ###Markdown
Example manual analysis
###Code
srcsub = mre.input_handler('/Users/harangju/Developer/activity_i5_j2.txt')
print('imported trials from wildcard: ', srcsub.shape[0])
oful = mre.OutputHandler()
oful.add_ts(srcsub)
rk = mre.coefficients(srcsub)
print(rk.coefficients)
print('this guy has the following attributes: ', rk._fields)
m = mre.fit(rk)
m.mre
###Output
INFO Unbound fit to $|A| e^{-k/\tau}$
INFO Finished 4 fit(s)
INFO Finished fitting the data to f_exponential, mre = 0.70596, tau = 2.87ms, ssres = 0.15697
WARNING The obtained autocorrelationtime is small compared to the fitrange: tmin~1ms, tmax~2904ms, tau~3ms
WARNING Consider fitting with smaller 'minstep' and 'maxstep'
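###Markdown
The warning above suggests refitting over a shorter range of time steps; a minimal sketch (the `steps=(1, 400)` range is an arbitrary illustration, not a value from the original analysis):
###Code
# restrict the autocorrelation coefficients to shorter lags and refit
rk_short = mre.coefficients(srcsub, steps=(1, 400))
m_short = mre.fit(rk_short)
m_short.mre
###Output
_____no_output_____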
###Markdown
Full analysis
###Code
m = np.zeros((16,12))   # mre estimates; filled in by the fit loop below (currently commented out)
mact = np.zeros((16,12))
for i in range(0,16):
for j in range(0,12):
fname = '/Users/harangju/Developer/avalanche paper data/mr estimation/activity/activity_i' +\
str(i+1) + '_j' + str(j+1) + '.txt'
act = np.loadtxt(fname)
mact[i,j] = max(act)
# srcsub = mre.input_handler(fname)
# rk = mre.coefficients(srcsub, steps=(1,10000))
# me = mre.fit(rk)
# m[i,j] = me.mre
import scipy.io as sio
sio.savemat('/Users/harangju/Developer/mre.mat',{'mre':m,'mact':mact})
###Output
_____no_output_____ |
Image Classification using scikit-learn.ipynb | ###Markdown
Image Classification using `sklearn.svm`
###Code
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
from sklearn import svm, metrics, datasets
from sklearn.utils import Bunch
from sklearn.model_selection import GridSearchCV, train_test_split
from skimage.io import imread
from skimage.transform import resize
###Output
_____no_output_____
###Markdown
Load images from a structured directory, organised like a scikit-learn sample dataset
###Code
def load_image_files(container_path, dimension=(400, 300)):
image_dir = Path(container_path)
folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
flat_data = []
target = []
for i, direc in enumerate(folders):
for file in direc.iterdir():
img = plt.imread(file)
img_resized = resize(img, dimension, anti_aliasing=True, mode='reflect')
print(i, direc, img.shape, img_resized.shape)
flat_data.append(img_resized.flatten())
target.append(i)
flat_data = np.array(flat_data)
target = np.array(target)
return [flat_data, target]
# Bunch(data=flat_data,
# target=target,
# target_names=categories,
# images=images,
# DESCR=descr)
image_dataset = load_image_files("images/")
###Output
0 images/bulbasaur (500, 359, 3) (400, 300, 3)
0 images/bulbasaur (500, 359, 3) (400, 300, 3)
0 images/bulbasaur (500, 359, 3) (400, 300, 3)
1 images/charmander (500, 359, 3) (400, 300, 3)
1 images/charmander (500, 359, 3) (400, 300, 3)
1 images/charmander (500, 359, 3) (400, 300, 3)
2 images/squirtle (450, 324, 3) (400, 300, 3)
2 images/squirtle (450, 324, 3) (400, 300, 3)
2 images/squirtle (450, 324, 3) (400, 300, 3)
###Markdown
Split data
###Code
# X_train, X_test, y_train, y_test = train_test_split(
# image_dataset[0], image_dataset[1], test_size=0.3,random_state=109)
X_train, y_train = image_dataset[0], image_dataset[1]
###Output
_____no_output_____
###Markdown
Train data with parameter optimization
###Code
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
print("Training svm")
svc = svm.SVC()
print("Getting optimal param")
#clf = svm.SVC(gamma='auto')
clf = GridSearchCV(svc, param_grid,cv=3)
print("fitting model")
clf.fit(X_train, y_train)
###Output
Training svm
Getting optimal param
fitting model
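###Markdown
Once the grid search has finished, the chosen hyper-parameters and cross-validated score can be inspected (a short sketch using attributes `GridSearchCV` exposes after fitting):
###Code
# best hyper-parameter combination and its mean cross-validated score
print(clf.best_params_)
print(clf.best_score_)
###Output
_____no_output_____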
###Markdown
Predict
###Code
import skimage
pred_0 = resize(np.load("squirtle_hold_short.npy")[0,:,:,:], (400,300), anti_aliasing=True, mode='reflect').flatten()
pred_2 = resize(np.load("bulbasaur_hold_short1.npy")[0,:,:,:], (400,300), anti_aliasing=True, mode='reflect').flatten()
X_test = np.array([pred_0, pred_2])  # held-out frames loaded above
y_pred = clf.predict(X_train)  # predict on the training images (matches the report below)
# y_pred_2 = clf.predict(pred_2)
print(y_pred)
# 1 - charmander
# 0 - squirtle
# 2 - bulbasaur
###Output
[0 0 0 1 1 1 2 2 2]
###Markdown
Report
###Code
print("Classification report for - \n{}:\n{}\n".format(
clf, metrics.classification_report(y_train, y_pred)))
print("Confusion matrix")
conf = metrics.confusion_matrix(y_train, y_pred)
acc = np.diag(conf).sum()/conf.sum()
conf, acc
test_dataset = load_image_files("test/")
X_rot_test, y_rot_test = test_dataset[0], test_dataset[1]
print("loaded")
y_rot_pred = clf.predict(X_rot_test)
print(y_rot_pred)
print("Confusion matrix")
conf_rot = metrics.confusion_matrix(y_rot_test, y_rot_pred)
acc_rot = np.diag(conf_rot).sum()/conf_rot.sum()
conf_rot, acc_rot
X_rot_test, y_rot_test = test_dataset[0], test_dataset[1]
print("loaded")
y_rot_pred = clf.predict(X_rot_test)
print("Confusion matrix")
conf_rot = metrics.confusion_matrix(y_rot_test, y_rot_pred)
acc_rot = np.diag(conf_rot).sum()/conf_rot.sum()
conf_rot, acc_rot
###Output
loaded
Confusion matrix
###Markdown
Image Classification using `sklearn.svm`
###Code
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
from sklearn import svm, metrics, datasets
from sklearn.utils import Bunch
from sklearn.model_selection import GridSearchCV, train_test_split
from skimage.io import imread
from skimage.transform import resize
###Output
_____no_output_____
###Markdown
Load images from a structured directory, organised like a scikit-learn sample dataset
###Code
def load_image_files(container_path, dimension=(64, 64)):
"""
Load image files with categories as subfolder names
which performs like scikit-learn sample dataset
Parameters
----------
container_path : string or unicode
Path to the main folder holding one subfolder per category
dimension : tuple
size to which image are adjusted to
Returns
-------
Bunch
"""
image_dir = Path(container_path)
folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
categories = [fo.name for fo in folders]
descr = "A image classification dataset"
images = []
flat_data = []
target = []
for i, direc in enumerate(folders):
for file in direc.iterdir():
img = imread(file)
img_resized = resize(img, dimension, anti_aliasing=True, mode='reflect')
flat_data.append(img_resized.flatten())
images.append(img_resized)
target.append(i)
flat_data = np.array(flat_data)
target = np.array(target)
images = np.array(images)
return Bunch(data=flat_data,
target=target,
target_names=categories,
images=images,
DESCR=descr)
image_dataset = load_image_files("images/")
###Output
_____no_output_____
###Markdown
Split data
###Code
X_train, X_test, y_train, y_test = train_test_split(
image_dataset.data, image_dataset.target, test_size=0.3,random_state=109)
###Output
_____no_output_____
###Markdown
Train data with parameter optimization
###Code
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
svc = svm.SVC()
clf = GridSearchCV(svc, param_grid)
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict
###Code
y_pred = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Report
###Code
print("Classification report for - \n{}:\n{}\n".format(
clf, metrics.classification_report(y_test, y_pred)))
###Output
_____no_output_____
###Markdown
Image Classification using `sklearn.svm`
###Code
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
from sklearn import svm, metrics, datasets
from sklearn.utils import Bunch
from sklearn.model_selection import GridSearchCV, train_test_split
from skimage.io import imread
from skimage.transform import resize
###Output
_____no_output_____
###Markdown
Load images from a structured directory, organised like a scikit-learn sample dataset
###Code
def load_image_files(container_path, dimension=(64, 64)):
"""
Load image files with categories as subfolder names
which performs like scikit-learn sample dataset
Parameters
----------
container_path : string or unicode
Path to the main folder holding one subfolder per category
dimension : tuple
size to which image are adjusted to
Returns
-------
Bunch
"""
image_dir = Path(container_path)
folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
categories = [fo.name for fo in folders]
descr = "A image classification dataset"
images = []
flat_data = []
target = []
for i, direc in enumerate(folders):
for file in direc.iterdir():
img = imread(file)
img_resized = resize(img, dimension, anti_aliasing=True, mode='reflect')
flat_data.append(img_resized.flatten())
images.append(img_resized)
target.append(i)
flat_data = np.array(flat_data)
target = np.array(target)
images = np.array(images)
return Bunch(data=flat_data,
target=target,
target_names=categories,
images=images,
DESCR=descr)
image_dataset = load_image_files("images/")
###Output
_____no_output_____
###Markdown
Split data
###Code
X_train, X_test, y_train, y_test = train_test_split(
image_dataset.data, image_dataset.target, test_size=0.3,random_state=109)
###Output
_____no_output_____
###Markdown
Train data with parameter optimization
###Code
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
svc = svm.SVC()
clf = GridSearchCV(svc, param_grid)
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict
###Code
y_pred = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Report
###Code
print("Classification report for - \n{}:\n{}\n".format(
clf, metrics.classification_report(y_test, y_pred)))
###Output
Classification report for -
GridSearchCV(estimator=SVC(),
param_grid=[{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001],
'kernel': ['rbf']}]):
precision recall f1-score support
0 0.86 0.75 0.80 16
1 0.73 0.58 0.65 19
2 0.70 0.89 0.78 18
3 0.79 0.96 0.87 24
4 0.75 0.56 0.64 16
accuracy 0.76 93
macro avg 0.77 0.75 0.75 93
weighted avg 0.77 0.76 0.76 93
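###Markdown
A confusion matrix complements the per-class report above; a brief sketch reusing the fitted classifier and the test split:
###Code
# rows are true classes, columns are predicted classes
print(metrics.confusion_matrix(y_test, y_pred))
###Output
_____no_output_____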
###Markdown
Image Classification using `sklearn.svm`
###Code
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
from sklearn import svm, metrics, datasets
from sklearn.utils import Bunch
from sklearn.model_selection import GridSearchCV, train_test_split
from skimage.io import imread
from skimage.transform import resize
###Output
_____no_output_____
###Markdown
Load images from a structured directory, organised like a scikit-learn sample dataset
###Code
def load_image_files(container_path, dimension=(64, 64)):
"""
Load image files with categories as subfolder names
which performs like scikit-learn sample dataset
Parameters
----------
container_path : string or unicode
Path to the main folder holding one subfolder per category
dimension : tuple
size to which image are adjusted to
Returns
-------
Bunch
"""
image_dir = Path(container_path)
folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
categories = [fo.name for fo in folders]
descr = "A image classification dataset"
images = []
flat_data = []
target = []
for i, direc in enumerate(folders):
for file in direc.iterdir():
img = imread(file)
img_resized = resize(img, dimension, anti_aliasing=True, mode='reflect')
flat_data.append(img_resized.flatten())
images.append(img_resized)
target.append(i)
flat_data = np.array(flat_data)
target = np.array(target)
images = np.array(images)
return Bunch(data=flat_data,
target=target,
target_names=categories,
images=images,
DESCR=descr)
image_dataset = load_image_files("images/")
###Output
_____no_output_____
###Markdown
Split data
###Code
X_train, X_test, y_train, y_test = train_test_split(
image_dataset.data, image_dataset.target, test_size=0.3,random_state=109)
###Output
_____no_output_____
###Markdown
Train data with parameter optimization
###Code
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
svc = svm.SVC()
clf = GridSearchCV(svc, param_grid)
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict
###Code
y_pred = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Report
###Code
print("Classification report for - \n{}:\n{}\n".format(
clf, metrics.classification_report(y_test, y_pred)))
###Output
_____no_output_____
###Markdown
Image Classification with SVM, MLP, KNN SVM Reference: https://github.com/whimian/SVM-Image-Classification
###Code
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
from sklearn import svm, metrics, datasets
from sklearn.utils import Bunch
from sklearn.model_selection import GridSearchCV, train_test_split
from skimage.io import imread
from skimage.transform import resize
###Output
_____no_output_____
###Markdown
Load images from a structured directory, organised like a scikit-learn sample dataset
###Code
def load_image_files(container_path, dimension=(64, 64)):
"""
Load image files with categories as subfolder names
which performs like scikit-learn sample dataset
Parameters
----------
container_path : string or unicode
Path to the main folder holding one subfolder per category
dimension : tuple
size to which image are adjusted to
Returns
-------
Bunch
"""
image_dir = Path(container_path)
folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
categories = [fo.name for fo in folders]
descr = "A image classification dataset"
images = []
flat_data = []
target = []
for i, direc in enumerate(folders):
for file in direc.iterdir():
img = imread(file)
img_resized = resize(img, dimension, anti_aliasing=True, mode='reflect')
flat_data.append(img_resized.flatten())
images.append(img_resized)
target.append(i)
flat_data = np.array(flat_data)
target = np.array(target)
images = np.array(images)
return Bunch(data=flat_data,
target=target,
target_names=categories,
images=images,
DESCR=descr)
image_dataset = load_image_files("dataset/")
###Output
_____no_output_____
###Markdown
Split data
###Code
X_train, X_test, y_train, y_test = train_test_split(
image_dataset.data, image_dataset.target, test_size=0.3,random_state=109)
###Output
_____no_output_____
###Markdown
Train data with parameter optimization
###Code
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
svc = svm.SVC()
clf = GridSearchCV(svc, param_grid)
clf.fit(X_train, y_train)
# Best parameter set
print('Best parameters found:\n', clf.best_params_)
print('\n')
# All results
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
print('best score: %0.3f' %clf.best_score_)
###Output
best score: 0.679
###Markdown
Predict
###Code
y_pred = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Report
###Code
print("Classification report for - \n{}:\n{}\n".format(
clf, metrics.classification_report(y_test, y_pred)))
# Proportion of correct predictions
print('Training set: ',clf.score(X_train,y_train))
print('Test set: ',clf.score(X_test,y_test))
###Output
Training set:  0.8949640287769784
Test set:  0.6387959866220736
###Markdown
MLP Reference: https://www.kaggle.com/pandaqc/mlp-with-sickitlearn
###Code
# imports
# data analysis
import pandas as pd
import numpy as np
from scipy import stats, integrate
# machine learning
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import OneHotEncoder
from sklearn.utils import Bunch
from skimage.io import imread
from skimage.transform import resize
# data visualization
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from pathlib import Path
%matplotlib notebook
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def load_image_files(container_path, dimension=(64, 64)):
image_dir = Path(container_path)
folders = [directory for directory in image_dir.iterdir() if directory.is_dir()]
categories = [fo.name for fo in folders]
    descr = "An image classification dataset"
images = []
flat_data = []
target = []
for i, direc in enumerate(folders):
for file in direc.iterdir():
img = imread(file)
img_resized = resize(img, dimension, anti_aliasing=True, mode='reflect')
flat_data.append(img_resized.flatten())
images.append(img_resized)
target.append(i)
flat_data = np.array(flat_data)
target = np.array(target)
images = np.array(images)
return Bunch(data=flat_data,
target=target,
target_names=categories,
images=images,
DESCR=descr)
image_dataset = load_image_files("dataset/")
X_train, X_test, y_train, y_test = train_test_split(
image_dataset.data, image_dataset.target, test_size=0.3,random_state=109)
print('X_train shape:', X_train.shape, 'y_train shape:', y_train.shape)
print('X_val shape:', X_test.shape, 'y_val shape:', y_test.shape)
# instantiate the MLP classifier
mlp = MLPClassifier(max_iter=100,verbose=True)
parameter_space = {
'hidden_layer_sizes': [(50,50,50), (50,100,50), (100,)],
'activation': ['tanh', 'relu'],
'solver': ['sgd', 'adam','lbfgs'],
'alpha': [0.0001, 0.05],
'learning_rate': ['constant','adaptive'],
}
clf = GridSearchCV(mlp, parameter_space, n_jobs=-1, cv=3)
# fit the model with the training set
clf.fit(X_train, y_train)
# Best parameter set
print('Best parameters found:\n', clf.best_params_)
print('\n')
# All results
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
# run prediction on the validation set
y_pred = clf.predict(X_test)
print('Classification report:\n\n', classification_report(y_test, y_pred), '\n')
print('------------------------------\n')
print('Confusion matrix:\n\n', confusion_matrix(y_test, y_pred))
# instantiate the MLP classifier
mlp = MLPClassifier(max_iter=100,verbose=True,activation="relu",alpha=0.0001,hidden_layer_sizes=(100,),learning_rate="constant",solver="sgd")
parameter_space = {}
# fit the model with the training set
mlp.fit(X_train, y_train)
# Proportion of correct predictions
print('Training set: ',mlp.score(X_train,y_train))
print('Test set: ',mlp.score(X_test,y_test))
###Output
Training set:  0.8460431654676259
Test set:  0.6287625418060201
###Markdown
KNN
###Code
X_train, X_test, y_train, y_test = train_test_split(
image_dataset.data, image_dataset.target, test_size=0.3,random_state=109)
from sklearn.neighbors import KNeighborsClassifier
# Build the KNN model
knnModel = KNeighborsClassifier(n_jobs=-1)
k_range = list(range(1,10))
weight_options = ['uniform','distance']
param_gridknn = dict(n_neighbors = k_range,
weights = weight_options)
gridKNN = GridSearchCV(knnModel,
param_gridknn,
cv=5,
scoring='accuracy',
verbose=1,
n_jobs=-1)
gridKNN.fit(X_train,y_train)
print('best score is:',str(gridKNN.best_score_))
print('best params are:',str(gridKNN.best_params_))
# Proportion of correct predictions
print('Training set: ',gridKNN.score(X_train,y_train))
print('Test set: ',gridKNN.score(X_test,y_test))
y_pred = gridKNN.predict(X_test)  # use the KNN predictions (y_pred previously still held the MLP ones)
print(classification_report(y_test,y_pred))
###Output
precision recall f1-score support
0 0.64 0.76 0.69 161
1 0.64 0.50 0.56 138
micro avg 0.64 0.64 0.64 299
macro avg 0.64 0.63 0.63 299
weighted avg 0.64 0.64 0.63 299
|
notebooks/CoCo_SN2007uy_Test.ipynb | ###Markdown
Load in a templates object
###Code
# snname = "SN2007uy"
snname = "SN2013ge"
sn = pcc.classes.SNClass(snname)
phot_path = os.path.join(coco_root_path, "data/lc/", snname + ".dat")
speclist_path = os.path.join(str(coco_root_path),"lists/" + snname + ".list")
recon_filename = os.path.abspath(os.path.join(str(coco_root_path), "recon/", snname + ".dat"))
print(phot_path)
sn.load_phot(path = phot_path)
# sn.phot.plot()
sn.get_lcfit(recon_filename)
sn.load_list(path = speclist_path)
sn.load_spec()
# sn.load_mangledspec()
# sn.plot_spec()
# sn.plot_mangledspec()
# sn.plot_lc(multiplot = False, mark_spectra=True, savepng=True, outpath = "/Users/berto/projects/LSST/SN2007uy")
sn.plot_lc(multiplot = False, mark_spectra=True)
# sn.plot_lc(multiplot = True, lock_axis=True)
sn.load_mangledspec()
# sn.plot_spec()
# sn.plot_mangledspec()
# for i in zip(sn.spec, sn.mangledspec):
# print(i)
# pcc.functions.compare_spec(sn.spec[i[0]], sn.mangledspec[i[1]], normalise=True)
# pcc.plot_mangle(sn.spec["2009jf_-7.64.txt"], sn.mangledspec["SN2009jf_55114.060000.spec"])
from scipy.integrate import simps
def calc_spectrum_filter_flux(filter_name, SpecClass):
filter_object = pcc.functions.load_filter("/Users/berto/Code/CoCo/data/filters/" + filter_name + ".dat")
filter_object.resample_response(new_wavelength = SpecClass.wavelength)
filter_area = simps(filter_object.throughput, filter_object.wavelength)
transmitted_spec = filter_object.throughput * SpecClass.flux
integrated_flux = simps(transmitted_spec, SpecClass.wavelength)
return integrated_flux/filter_area
def calc_specphot(sn, filtername):
specphot = np.array([])
specepoch = np.array([])
for spec in sn.mangledspec:
specphot = np.append(specphot, calc_spectrum_filter_flux(filtername, sn.mangledspec[spec]))
specepoch = np.append(specepoch, sn.mangledspec[spec].mjd_obs)
return specepoch, specphot
def compare_phot_specphot(sn, filtername):
""""""
specepoch, specphot = calc_specphot(sn, filtername)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(specepoch, specphot, label = "specphot")
ax.scatter(sn.phot.data[filtername]["MJD"], sn.phot.data[filtername]["flux"], label = filtername)
ax.set_ylim(0, 1.05 * np.nanmax(np.append(sn.phot.data[filtername]["flux"], specphot)))
ax.legend()
# plt.show()
# compare_phot_specphot(sn, "BessellB")
# compare_phot_specphot(sn, "BessellV")
# compare_phot_specphot(sn, "SDSS_r")
# compare_phot_specphot(sn, "SDSS_i")
###Output
_____no_output_____
###Markdown
inputs:
* **`snname`**
* **`redshift`**
* **`absmag offset`**
* **`EBV MW`**
* **`EBV Host`**
* **`Rv`**
* **`MJD at Peak`**
* **`MJD to simulate`**
* **`filters to simulate`**
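A rough sketch of how these inputs feed into the simulation (a commented illustration only -- the argument order mirrors the `coco.simulate` call further down in this notebook, and the labels are simply the inputs listed above, not official pyCoCo parameter names):
###Code
# Illustrative sketch -- argument order copied from the coco.simulate() call below:
# flux, flux_err = coco.simulate(b"SN2007uy",       # snname: template supernova
#                                z_obs,             # redshift
#                                0.0,               # absmag offset
#                                0.0, 0.0, 3.1,     # EBV MW, EBV Host, Rv
#                                mjdmax,            # MJD at peak
#                                mjd_to_sim,        # MJDs to simulate
#                                filters_to_sim)    # filters to simulate
###Output
_____no_output_____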
###Code
sn.lcfit.get_fit_splines()
###Output
_____no_output_____
###Markdown
Quick check that the fitted spline follows the light-curve fit (in Bessell V). Note: the spline is sampled at the observed MJDs, so it looks slightly linear.
###Code
# plt.plot(sn.phot.data["BessellV"]["MJD"], sn.lcfit.spline["BessellV"](sn.phot.data["BessellV"]["MJD"]), label = r"$\textnormal{Spline}$")
# plt.scatter(sn.phot.data["BessellV"]["MJD"], sn.phot.data["BessellV"]["flux"], label = r"$\textnormal{Photometry}$")
# plt.plot(sn.lcfit.data["BessellV"]["MJD"], sn.lcfit.data["BessellV"]["flux"], label = r"$\textnormal{Fit}$")
# plt.legend()
mjdmax = get_mjdmax_BessellV(sn)[0]
filters_to_sim = convert_column_string_encoding(sn.phot.phot["filter"]).data
mjd_to_sim = sn.phot.phot["MJD"].data
verbose = False
# verbose = True
for i, f in enumerate(filters_to_sim):
filters_to_sim[i] = f.replace(b"SDSS", b"LSST").replace(b"BessellV", b"LSST_g")
# filters_to_sim[i] = pcc.utils.b(str(f).replace("BessellV", "LSST_g").replace("SDSS_r", "LSST_r"))
if verbose:
print(mjdmax)
print(mjd_to_sim)
print(filters_to_sim)
# tablepath = "/Users/berto/Code/verbose-enigma/testdata/info/info.dat"
# info = Table.read(tablepath, format = "ascii.commented_header")
info = pcc.functions.load_info()
z_obs = info.get_sn_info("SN2007uy")["z_obs"]
m = sfdmap.SFDMap()
print(z_obs)
reload(pccsims)
coco = pccsims.pyCoCo(pcc.utils.b(filter_path), pcc.utils.b(coco_root_path))
# flux, flux_err = coco.simulate(b"SN2009jf",
# 0.008, 0.0, 0.0, 0.0, 3.1,
# mjdmax, mjd_to_sim,
# filters_to_sim)
flux, flux_err = coco.simulate(b"SN2007uy",
z_obs, 0.0, 0.0, 0.0, 3.1,
mjdmax, mjd_to_sim,
filters_to_sim)
# flux, flux_err = coco.simulate(b"SN2009jf",
# 0.008, 0.0, 0.1, 0.1, 3.1,
# mjdmax, mjd_to_sim,
# filters_to_sim)
coco.get_fit_params()
specphot = coco.spec_photometry(b"SN2007uy",
z_obs, b"LSST_g")
# plt.scatter(specphot[0], specphot[1])
# plt.ylim(0, 1.02 *np.nanmax(specphot[1]))
p = pcc.classes.PhotometryClass()
p.load_table(pcc.utils.simulate_out_to_ap_table(mjd_to_sim, flux, flux_err, filters_to_sim))
# plt.scatter(p.data["BessellV"]["MJD"], p.data["BessellV"]["flux"], label = "Synthetic Bessell V")
plt.scatter(p.data["LSST_g"]["MJD"], p.data["LSST_g"]["flux"], label = "Synthetic LSST g")
plt.scatter(sn.phot.data["BessellV"]["MJD"], sn.phot.data["BessellV"]["flux"], label = "Real Bessell V")
plt.scatter(specphot[0] + mjdmax, specphot[1])
plt.ylim(0, 1.02 *np.nanmax(np.append(p.data["LSST_g"]["flux"], sn.phot.data["BessellB"]["flux"])))
plt.legend()
# p.plot()
# p.save(filename = "SN2007uy_sim_LSST.dat", path = "/Users/berto/projects/LSST/cadence/")
sn_fake = pcc.classes.SNClass("SN2007uy_sim")
sn_fake.load_phot(path = "/Users/berto/projects/LSST/cadence/SN2007uy_sim_LSST.dat")
sn_fake.plot_lc(multiplot = False)
from matplotlib.ticker import MultipleLocator
# filters = ["BessellV"]
filters = ["SDSS_r"]
alpha = 1.0
xminorticks = 10
pcc.utils.setup_plot_defaults()
fig = plt.figure(figsize=[8, 4])
fig.subplots_adjust(left = 0.1, bottom = 0.13, top = 0.93,
right = 0.91, hspace=0, wspace = 0)
## Label the axes
xaxis_label_string = r'$\textnormal{Time, MJD (days)}$'
yaxis_label_string = r'$\textnormal{Flux, erg s}^{-1}\textnormal{\AA}^{-1}\textnormal{cm}^{-2}$'
ax1 = fig.add_subplot(111)
axes_list = [ax1]
for filter_key in filters:
plot_label_string = r'$\rm{' + sn.phot.data_filters[filter_key].filter_name.replace('_', '\\_') + '}$'
plot_label_string_fake = r'$\rm{' + sn_fake.phot.data_filters[filter_key].filter_name.replace('_', '\\_') + ', simulated}$'
ax1.errorbar(sn.phot.data[filter_key]['MJD'], sn.phot.data[filter_key]['flux'],
yerr = sn.phot.data[filter_key]['flux_err'],
capsize = 0, fmt = 'x', color = sn.phot.data_filters[filter_key]._plot_colour,
label = plot_label_string, ecolor = pcc.hex['batman'], mec = pcc.hex["batman"],
alpha = alpha)
ax1.fill_between(sn.lcfit.data[filter_key]['MJD'], sn.lcfit.data[filter_key]['flux_upper'], sn.lcfit.data[filter_key]['flux_lower'],
color = pcc.hex["batman"],
alpha = 0.8, zorder = 0)
ax1.errorbar(sn_fake.phot.data[filter_key]['MJD'], sn_fake.phot.data[filter_key]['flux'],
yerr = sn_fake.phot.data[filter_key]['flux_err'],
# capsize = 0, fmt = 'o', color = sn_fake.phot.data_filters[filter_key]._plot_colour,
capsize = 0, fmt = 'o', color = pcc.hex['r'],
label = plot_label_string_fake, ecolor = pcc.hex['batman'], mec = pcc.hex["batman"],
alpha = alpha)
xminorLocator = MultipleLocator(xminorticks)
ax1.spines['top'].set_visible(True)
ax1.xaxis.set_minor_locator(xminorLocator)
plot_legend = ax1.legend(loc = 'upper right', scatterpoints = 1, markerfirst = False,
numpoints = 1, frameon = False, bbox_to_anchor=(1., 1.),
fontsize = 12.)
ax1.set_ylabel(yaxis_label_string)
ax1.set_xlabel(xaxis_label_string)
outpath = "/Users/berto/projects/LSST/cadence/SN2007uy_consistency_check_SDSS_r"
fig.savefig(outpath + ".png", format = 'png', dpi=500)
from matplotlib.ticker import MultipleLocator
# filters = ["BessellV"]
filters = ["LSST_g"]
alpha = 1.0
xminorticks = 10
pcc.utils.setup_plot_defaults()
fig = plt.figure(figsize=[8, 4])
fig.subplots_adjust(left = 0.1, bottom = 0.13, top = 0.93,
right = 0.91, hspace=0, wspace = 0)
## Label the axes
xaxis_label_string = r'$\textnormal{Time, MJD (days)}$'
yaxis_label_string = r'$\textnormal{Flux, erg s}^{-1}\textnormal{\AA}^{-1}\textnormal{cm}^{-2}$'
ax1 = fig.add_subplot(111)
axes_list = [ax1]
for filter_key in filters:
plot_label_string = r'$\rm{' + sn.phot.data_filters["BessellV"].filter_name.replace('_', '\\_') + '}$'
plot_label_string_fake = r'$\rm{' + sn_fake.phot.data_filters[filter_key].filter_name.replace('_', '\\_') + ', simulated}$'
ax1.errorbar(sn.phot.data["BessellV"]['MJD'], sn.phot.data["BessellV"]['flux'],
yerr = sn.phot.data["BessellV"]['flux_err'],
capsize = 0, fmt = 'x', color = sn.phot.data_filters["BessellV"]._plot_colour,
label = plot_label_string, ecolor = pcc.hex['batman'], mec = pcc.hex["batman"],
alpha = alpha)
ax1.fill_between(sn.lcfit.data["BessellV"]['MJD'], sn.lcfit.data["BessellV"]['flux_upper'], sn.lcfit.data["BessellV"]['flux_lower'],
color = pcc.hex["batman"],
alpha = 0.8, zorder = 0)
ax1.errorbar(sn_fake.phot.data[filter_key]['MJD'], sn_fake.phot.data[filter_key]['flux'],
yerr = sn_fake.phot.data[filter_key]['flux_err'],
# capsize = 0, fmt = 'o', color = sn_fake.phot.data_filters[filter_key]._plot_colour,
capsize = 0, fmt = 'o', color = pcc.hex['LSST_g'],
label = plot_label_string_fake, ecolor = pcc.hex['batman'], mec = pcc.hex["batman"],
alpha = alpha)
xminorLocator = MultipleLocator(xminorticks)
ax1.spines['top'].set_visible(True)
ax1.xaxis.set_minor_locator(xminorLocator)
plot_legend = ax1.legend(loc = 'upper right', scatterpoints = 1, markerfirst = False,
numpoints = 1, frameon = False, bbox_to_anchor=(1., 1.),
fontsize = 12.)
ax1.set_ylabel(yaxis_label_string)
ax1.set_xlabel(xaxis_label_string)
print(ax1.get_xlim())
outpath = "/Users/berto/projects/LSST/cadence/SN2007uy_consistency_check_BessellV_LSSTg"
# fig.savefig(outpath + ".png", format = 'png', dpi=500)
cadencepath = "/Users/berto/projects/LSST/cadence/LSST_DDF_2786_cadence.dat"
data = Table.read(cadencepath, format = "ascii.commented_header")
w = np.logical_or(data["filter"] == "LSST_g", data["filter"] == "LSST_r")
mjd_to_sim = data[w]["MJD"].data
filters_to_sim = convert_column_string_encoding(data[w]["filter"]).data
# mjd_to_sim
mjd_to_sim = mjd_to_sim - (mjd_to_sim[0] - 54450)
#
flux, flux_err = coco.simulate(b"SN2007uy",
z_obs, 0.0, 0.0, 0.0, 3.1,
mjdmax, mjd_to_sim,
filters_to_sim)
p = pcc.classes.PhotometryClass()
p.load_table(pcc.utils.simulate_out_to_ap_table(mjd_to_sim, flux, flux_err, filters_to_sim))
p.plot()
p.save(filename = "SN2007uy_sim_LSST_gr.dat", path = "/Users/berto/projects/LSST/cadence/")
sn_fake = pcc.classes.SNClass("SN2007uy_sim")
sn_fake.load_phot(path = "/Users/berto/projects/LSST/cadence/SN2007uy_sim_LSST_gr.dat")
sn_fake.plot_lc(multiplot = False)
from matplotlib.ticker import MultipleLocator
filters = ["BessellV", "SDSS_r"]
markers = ["x", "o"]
# filters = ["LSST_g"]
alpha = 1.0
xminorticks = 10
pcc.utils.setup_plot_defaults()
fig = plt.figure(figsize=[8, 4])
fig.subplots_adjust(left = 0.1, bottom = 0.13, top = 0.93,
right = 0.91, hspace=0, wspace = 0)
## Label the axes
xaxis_label_string = r'$\textnormal{Time, MJD (days)}$'
yaxis_label_string = r'$\textnormal{Flux, erg s}^{-1}\textnormal{\AA}^{-1}\textnormal{cm}^{-2}$'
ax1 = fig.add_subplot(111)
axes_list = [ax1]
for j, filter_key in enumerate(filters):
plot_label_string = r'$\rm{' + sn.phot.data_filters[filter_key].filter_name.replace('_', '\\_') + '}$'
ax1.errorbar(sn.phot.data[filter_key]['MJD'], sn.phot.data[filter_key]['flux'],
yerr = sn.phot.data[filter_key]['flux_err'],
capsize = 0, fmt = markers[j], color = "none",
label = plot_label_string, ecolor = pcc.hex['batman'], mec = pcc.hex["batman"],
alpha = alpha,)
ax1.fill_between(sn.lcfit.data[filter_key]['MJD'], sn.lcfit.data[filter_key]['flux_upper'], sn.lcfit.data[filter_key]['flux_lower'],
color = pcc.hex["batman"],
alpha = 0.8, zorder = 0)
fake_filters = ["LSST_g", "LSST_r"]
for j, filter_key in enumerate(fake_filters):
plot_label_string_fake = r'$\rm{' + sn_fake.phot.data_filters[filter_key].filter_name.replace('_', '\\_') + ', simulated}$'
ax1.errorbar(sn_fake.phot.data[filter_key]['MJD'], sn_fake.phot.data[filter_key]['flux'],
yerr = sn_fake.phot.data[filter_key]['flux_err'],
capsize = 0, fmt = 'o', color = sn_fake.phot.data_filters[filter_key]._plot_colour,
# capsize = 0, fmt = 'o', color = pcc.hex['LSST_g'],
label = plot_label_string_fake, ecolor = pcc.hex['batman'], mec = pcc.hex["batman"],
alpha = alpha)
xminorLocator = MultipleLocator(xminorticks)
ax1.spines['top'].set_visible(True)
ax1.xaxis.set_minor_locator(xminorLocator)
plot_legend = ax1.legend(loc = 'upper right', scatterpoints = 1, markerfirst = False,
numpoints = 1, frameon = False, bbox_to_anchor=(1., 1.),
fontsize = 12.)
ax1.set_ylabel(yaxis_label_string)
ax1.set_xlabel(xaxis_label_string)
ax1.set_xlim(ax1.get_xlim()[0], 54643.724999999999 )
outpath = "/Users/berto/projects/LSST/cadence/SN2007uy_cadence_check_LSSTr_LSSTg"
fig.savefig(outpath + ".png", format = 'png', dpi=500)
# flux
# pccsims.__file__
# p.plot(["Bessellv"], legend=True)
sn.plot_lc(["BessellV"], multiplot = False)
plt.scatter(p.data["BessellV"]["MJD"], p.data["BessellV"]["flux"], label = "Synthetic Bessell V")
p.plot(["BessellB"])
sn.plot_lc(multiplot=False)
sn.load_mangledspec()
sn.plot_mangledspec()
sn.plot_spec()
mjdmax = get_mjdmax_BessellV(sn)[0]
filters_to_sim = convert_column_string_encoding(sn.phot.data["BessellB"]["filter"]).data
mjd_to_sim = sn.phot.data["BessellB"]["MJD"].data
flux, flux_err = coco.simulate(b"SN2009jf",
z_obs, -0.0, 0.2, 0.3, 3.1,
mjdmax, mjd_to_sim,
filters_to_sim)
plt.scatter(mjd_to_sim,sn.phot.data["BessellB"]["flux"])
plt.plot(sn.lcfit.data["BessellB"]["MJD"], sn.lcfit.data["BessellB"]["flux"])
plt.ylim(0, np.nanmax(sn.phot.data["BessellB"]["flux"])*1.1)
p = pcc.classes.PhotometryClass()
p.load_table(pcc.utils.simulate_out_to_ap_table(mjd_to_sim, flux, flux_err, filters_to_sim))
p.plot()
# s = pcc.SpectrumClass()
# s.load("SN2009jf_55106.120000.spec", directory="/Users/berto/Code/CoCo/spectra/")
# s.plot()
# s = pcc.SpectrumClass()
# s.load("SN2009jf_55108.130000.spec", directory="/Users/berto/Code/CoCo/spectra/")
# s.plot()
# s = pcc.SpectrumClass()
# s.load("SN2009jf_55114.060000.spec", directory="/Users/berto/Code/CoCo/spectra/")
# s.plot()
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table
def load_coords(filename = "sncoordinates.list"):
"""
"""
path = os.path.abspath(os.path.join(pcc.__path__[0], os.path.pardir, filename))
coordtable = Table.read(path, format = 'ascii.commented_header')
return coordtable
# %timeit load_coords()
cootable = load_coords()
%%timeit
snname = "SN2009jf"
w = np.where(cootable["snname"] == snname)
c = SkyCoord(cootable["RA"][w], cootable["Dec"][w], frame='icrs')
c.ra.deg[0], c.dec.deg[0]
import sfdmap
m = sfdmap.SFDMap()
m.ebv(c.ra.deg[0], c.dec.deg[0], unit = 'degree')
###Output
_____no_output_____ |
Solar Power Generation Predict.ipynb | ###Markdown
Research Topic : Predict Solar Power Generation
Author : Sanjoy Biswas
Institute : Sylhet Engineering College
Import Necessary Libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import os
import re
import pickle
###Output
_____no_output_____
###Markdown
Import Dataset
###Code
plant1_GenData = pd.read_csv('Plant_1_Generation_Data.csv')
plant2_GenData = pd.read_csv('Plant_2_Generation_Data.csv')
plant1_WeatherSensorData = pd.read_csv('Plant_1_Weather_Sensor_Data.csv')
plant2_WeatherSensorData = pd.read_csv('Plant_2_Weather_Sensor_Data.csv')
plant1_GenData.head()
plant1_WeatherSensorData.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis - EDA
###Code
import matplotlib.pyplot as plt
plant1_GenData.groupby('DATE_TIME')['DATE_TIME'].count()
plant1_WeatherSensorData.groupby('DATE_TIME')['DATE_TIME'].count()
dates = []
for date in plant1_GenData['DATE_TIME']:
    # rebuild the 'dd-mm-yyyy hh:mm' timestamps as 'yyyy-mm-dd hh:mm:ss'
    new_date = "2020-"
    new_date += date[3:6]   # month and separator, e.g. '05-'
    new_date += date[0:2]   # day
    new_date += date[10:]   # time, e.g. ' 23:00'
    new_date += ":00"       # append seconds
    dates.append(new_date)
plant1_GenData['DATE_TIME'] = dates
plant1_GenData.groupby('DATE_TIME')['DATE_TIME'].count()
###Output
_____no_output_____
###Markdown
Finding out which data points are missing from plant 1's Generator Data
###Code
missing = [date for date in list(plant1_WeatherSensorData['DATE_TIME']) if date not in list(plant1_GenData['DATE_TIME'])]
print(missing)
columns = ['DC_POWER', 'AC_POWER', 'DAILY_YIELD', 'TOTAL_YIELD']
means = []
for column in columns:
means.append(plant1_GenData[column].mean())
print(means)
for date in missing:
plant1_GenData = plant1_GenData.append({'DATE_TIME':date, 'PLANT_ID':'4135001', 'SOURCE_KEY':'1BY6WEcLGh8j5v7', 'DC_POWER':means[0], 'AC_POWER':means[1], 'DAILY_YIELD':means[2], 'TOTAL_YIELD':means[3]},ignore_index=True)
print([date for date in list(plant1_GenData['DATE_TIME']) if date not in list(plant1_WeatherSensorData['DATE_TIME'])])
columnsWSD = ['AMBIENT_TEMPERATURE', 'MODULE_TEMPERATURE', 'IRRADIATION']
meansWSD = []
for column in columnsWSD:
meansWSD.append(plant1_WeatherSensorData[column].mean())
print(meansWSD)
plant1_WeatherSensorData = plant1_WeatherSensorData.append({'DATE_TIME':'2020-06-03 14:00:00', 'PLANT_ID':'4135001', 'SOURCE_KEY':'HmiyD2TTLFNqkNe', columnsWSD[0]:meansWSD[0], columnsWSD[1]:meansWSD[1], columnsWSD[2]:meansWSD[2]},ignore_index=True)
plant1 = plant1_WeatherSensorData.copy()
combining = {'DC_POWER':[], 'AC_POWER':[], 'DAILY_YIELD':[], 'TOTAL_YIELD':[]}
dcPower = {}
acPower = {}
dailyYield = {}
totalYield = {}
for i in range(len(plant1_GenData['DATE_TIME'])):
entry = plant1_GenData.iloc[i]
date = entry['DATE_TIME']
if date in dcPower:
dcPower[date]['total'] += entry['DC_POWER']
dcPower[date]['num'] += 1
acPower[date]['total'] += entry['AC_POWER']
acPower[date]['num'] += 1
dailyYield[date]['total'] += entry['DAILY_YIELD']
dailyYield[date]['num'] += 1
totalYield[date]['total'] += entry['TOTAL_YIELD']
totalYield[date]['num'] += 1
else:
dcPower[date] = {'total':entry['DC_POWER'], 'num':1}
acPower[date] = {'total':entry['AC_POWER'], 'num':1}
dailyYield[date] = {'total':entry['DAILY_YIELD'], 'num':1}
totalYield[date] = {'total':entry['TOTAL_YIELD'], 'num':1}
for key in dcPower.keys():
dcPower[key]['mean'] = dcPower[key]['total']/dcPower[key]['num']
for key in acPower.keys():
acPower[key]['mean'] = acPower[key]['total']/acPower[key]['num']
for key in dailyYield.keys():
dailyYield[key]['mean'] = dailyYield[key]['total']/dailyYield[key]['num']
for key in totalYield.keys():
totalYield[key]['mean'] = totalYield[key]['total']/totalYield[key]['num']
for i in range(len(plant1['DATE_TIME'])):
entry = plant1.iloc[i]
date = entry['DATE_TIME']
combining['DC_POWER'].append(dcPower[date]['mean'])
combining['AC_POWER'].append(acPower[date]['mean'])
combining['DAILY_YIELD'].append(dailyYield[date]['mean'])
combining['TOTAL_YIELD'].append(totalYield[date]['mean'])
for key in combining.keys():
plant1[key] = combining[key]
plant1.head()
###Output
_____no_output_____
###Markdown
Data Point Analysis Using Visualization
###Code
plant1.plot(kind='scatter',x='AMBIENT_TEMPERATURE',y='MODULE_TEMPERATURE',color='red')
plant1.plot(kind='scatter',x='MODULE_TEMPERATURE',y='DC_POWER',color='red')
plant1.plot(kind='scatter',x='MODULE_TEMPERATURE',y='AC_POWER',color='red')
plant1.plot(kind='scatter',x='DC_POWER',y='IRRADIATION',color='red')
plant1.plot(kind='scatter',x='AC_POWER',y='IRRADIATION',color='red')
### Let us now process Plant 2's data in a similar fashion
plant2 = plant2_WeatherSensorData.copy()
dcPower2 = {}
acPower2 = {}
dailyYield2 = {}
totalYield2 = {}
for i in range(len(plant2_GenData['DATE_TIME'])):
entry = plant2_GenData.iloc[i]
date = entry['DATE_TIME']
if date in dcPower2:
dcPower2[date]['total'] += entry['DC_POWER']
dcPower2[date]['num'] += 1
acPower2[date]['total'] += entry['AC_POWER']
acPower2[date]['num'] += 1
dailyYield2[date]['total'] += entry['DAILY_YIELD']
dailyYield2[date]['num'] += 1
totalYield2[date]['total'] += entry['TOTAL_YIELD']
totalYield2[date]['num'] += 1
else:
dcPower2[date] = {'total':entry['DC_POWER'], 'num':1}
acPower2[date] = {'total':entry['AC_POWER'], 'num':1}
dailyYield2[date] = {'total':entry['DAILY_YIELD'], 'num':1}
totalYield2[date] = {'total':entry['TOTAL_YIELD'], 'num':1}
for key in dcPower2.keys():
dcPower2[key]['mean'] = dcPower2[key]['total']/dcPower2[key]['num']
for key in acPower2.keys():
acPower2[key]['mean'] = acPower2[key]['total']/acPower2[key]['num']
for key in dailyYield2.keys():
dailyYield2[key]['mean'] = dailyYield2[key]['total']/dailyYield2[key]['num']
for key in totalYield2.keys():
totalYield2[key]['mean'] = totalYield2[key]['total']/totalYield2[key]['num']
combining2 = {'DC_POWER':[], 'AC_POWER':[], 'DAILY_YIELD':[], 'TOTAL_YIELD':[]}
for i in range(len(plant2['DATE_TIME'])):
entry = plant2.iloc[i]
date = entry['DATE_TIME']
combining2['DC_POWER'].append(dcPower2[date]['mean'])
combining2['AC_POWER'].append(acPower2[date]['mean'])
combining2['DAILY_YIELD'].append(dailyYield2[date]['mean'])
combining2['TOTAL_YIELD'].append(totalYield2[date]['mean'])
for key in combining2.keys():
plant2[key] = combining2[key]
plant2.head()
plant2.plot(kind='scatter',x='AMBIENT_TEMPERATURE',y='MODULE_TEMPERATURE',color='red')
plant2.plot(kind='scatter',x='MODULE_TEMPERATURE',y='DC_POWER',color='red')
plant2.plot(kind='scatter',x='MODULE_TEMPERATURE',y='AC_POWER',color='red')
plant2.plot(kind='scatter',x='DC_POWER',y='IRRADIATION',color='red')
plant2.plot(kind='scatter',x='AC_POWER',y='IRRADIATION',color='red')
### Plant 2 has a weaker correlation between ambient temperature and module temperature ; module temperature and dc power ;
### module temperature and ac power ; dc power and irradiation ; ac power and irradiation.
### The conclusions that we can derive from these plots are that the Ambient temperature raises the Module temperature
### which in turn affects the DC and AC power generated by the solar power generator, the increase in DC and AC power
### being generated means an increase in Irradiation
### Now that we have formed our hypothesis, we can create a linear regression model and train it using our Plant2 Data
### in order to predict DC and AC power generation, and irradiation levels which identify the need for panel
### cleaning/maintenance. If the DC or AC power generation does not fit our trained model then we can identify
### faulty or suboptimally performing equipment.
### First we will train a model on the ambient temperature in order to predict dc power.
### We will run two iterations: 1) We use plant 1 as training data and plant 2 as test 2) vice versa.
### My hypothesis is that we will obtain better results by using plant 2 as training data since there are
### more abnormalities in plant 2's data than plant 1 hence the model will have much less variance with
### the tradeoff of slight higher bias
X_train, y_train = plant1[['AMBIENT_TEMPERATURE']], plant1['DC_POWER']
###Output
_____no_output_____
###Markdown
Let's start off with a basic linear regression model
###Code
from sklearn.linear_model import LinearRegression
linear_model = LinearRegression()
linear_model.fit(X_train, y_train)
X_test, y_test = plant2[['AMBIENT_TEMPERATURE']], plant2['DC_POWER']
y_pred = linear_model.predict(X_test)
y_pred
### Now that we have our predictions from our basic linear regression model let's use the Mean Absolute Error
### to see how well our model did
###Output
_____no_output_____
###Markdown
Error Analysis
###Code
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_test, y_pred)
### Let's compare with a DecisionTreeRegressor and XGBoostRegressor
###Output
_____no_output_____
###Markdown
Let's Decision Tree Regressor
###Code
from sklearn.tree import DecisionTreeRegressor
decisiontree_model = DecisionTreeRegressor()
decisiontree_model.fit(X_train, y_train)
y_pred = decisiontree_model.predict(X_test)
mean_absolute_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Let's XGBoost Regressor
###Code
from xgboost import XGBRegressor
xgboost_model = XGBRegressor()
xgboost_model.fit(X_train, y_train)
y_pred = xgboost_model.predict(X_test)
mean_absolute_error(y_test, y_pred)
### The DecisionTreeRegressor and XGBoostRegressor are both close. Now let us swap the training and test data
### and compare results.
X_train, y_train = plant2[['AMBIENT_TEMPERATURE']], plant2['DC_POWER']
X_test, y_test = plant1[['AMBIENT_TEMPERATURE']], plant1['DC_POWER']
xgboost_model = XGBRegressor()
xgboost_model.fit(X_train, y_train)
y_pred = xgboost_model.predict(X_test)
mean_absolute_error(y_test, y_pred)
decisiontree_model = DecisionTreeRegressor()
decisiontree_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict Model
###Code
y_pred = decisiontree_model.predict(X_test)
mean_absolute_error(y_test, y_pred)
### Notice the drastic performance increase in both models when we opt for the more variable data of plant 2
### as training data compared to that of plant 1.
### So now we can successfully predict the dc power based on ambient temperature for plant 1 or 2. We will not predict
### the ac power for brevity, note that it is the exact same process but replacing 'DC_POWER' with 'AC_POWER'
### Now we can predict irradiation levels which identifies the need for panel cleaning/maintenance.
X_train, y_train = plant2[['AMBIENT_TEMPERATURE']], plant2['IRRADIATION']
X_test, y_test = plant1[['AMBIENT_TEMPERATURE']], plant1['IRRADIATION']
decisiontree_model = DecisionTreeRegressor()
decisiontree_model.fit(X_train, y_train)
y_pred = decisiontree_model.predict(X_test)
mean_absolute_error(y_test, y_pred)
xgboost_model = XGBRegressor()
xgboost_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict Model
###Code
y_pred = xgboost_model.predict(X_test)
mean_absolute_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Generate Pickle For Deploy
###Code
import pickle
pickle.dump(decisiontree_model,open('model_solar_generation.pkl','wb'))
model = pickle.load(open('model_solar_generation.pkl','rb'))  # reload the model that was just saved
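# Illustrative usage of the reloaded model (a sketch -- the temperature value is arbitrary):
# model.predict(pd.DataFrame({'AMBIENT_TEMPERATURE': [25.0]}))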
### In this case, the XGBoostRegressor performs better than the DecisionTreeRegressor. We are successfully able
### to identify the need for panel cleaning/maintenance by predicting the irradiation based on ambient temperature.
###Output
_____no_output_____ |
Oil Spill Classification using CNN.ipynb | ###Markdown
Importing Necessary Libraries
###Code
import numpy as np # linear algebra
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
from PIL import Image
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
from keras.utils import np_utils
import cv2
oilspill = os.listdir(r"C:\Users\kenny\Desktop\DataScience Related Affair\Oil spill-data\Spill_Data\OilSpill")
print(oilspill[:10]) # the output is a list of the .jpg file names
nospill = os.listdir(r"C:\Users\kenny\Desktop\DataScience Related Affair\Oil spill-data\Spill_Data\NoSpill")
print('\n')
print(nospill[:10])
###Output
['Oilspill_001.jpg', 'Oilspill_002.jpg', 'Oilspill_003.jpg', 'Oilspill_004.jpg', 'Oilspill_005.jpg', 'Oilspill_006.jpg', 'Oilspill_007.jpg', 'Oilspill_008.jpg', 'Oilspill_009.jpg', 'Oilspill_010.jpg']
['NoSpill_001.jpg', 'NoSpill_002.jpg', 'NoSpill_003.jpg', 'NoSpill_004.jpg', 'NoSpill_005.jpg', 'NoSpill_006.jpg', 'NoSpill_007.jpg', 'NoSpill_008.jpg', 'NoSpill_009.jpg', 'NoSpill_010.jpg']
###Markdown
Data Preprocessing
###Code
data = []
labels = []
for img in oilspill:
try:
img_read = plt.imread(r"C:\Users\kenny\Desktop\DataScience Related Affair\Oil spill-data\Spill_Data\OilSpill" + "/" + img)
img_resize = cv2.resize(img_read, (50, 50))
img_array = img_to_array(img_resize)
        img_array = img_array/255  # fix: normalize in place (the original assigned to a misspelled 'img_aray' and was never used)
data.append(img_array)
labels.append(1)
except:
None
for img in nospill:
try:
img_read = plt.imread(r"C:\Users\kenny\Desktop\DataScience Related Affair\Oil spill-data\Spill_Data\NoSpill" + "/" + img)
img_resize = cv2.resize(img_read, (50, 50))
img_array = img_to_array(img_resize)
img_array= img_array/255
data.append(img_array)
labels.append(0)
except:
None
image_data = np.array(data)
labels = np.array(labels)
idx = np.arange(image_data.shape[0])
np.random.shuffle(idx)
image_data = image_data[idx]
labels = labels[idx]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(image_data, labels, test_size = 0.2, random_state = 42)
y_train = np_utils.to_categorical(y_train, 2)
y_test = np_utils.to_categorical(y_test, 2)
print(f'Shape of training image : {x_train.shape}')
print(f'Shape of testing image : {x_test.shape}')
print(f'Shape of training labels : {y_train.shape}')
print(f'Shape of testing labels : {y_test.shape}')
###Output
Shape of training image : (268, 50, 50, 3)
Shape of testing image : (68, 50, 50, 3)
Shape of training labels : (268, 2)
Shape of testing labels : (68, 2)
###Markdown
Architecture of the CNN model
###Code
import keras
from keras.layers import Dense, Conv2D
from keras.layers import Flatten
from keras.layers import MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Activation
from keras.layers import BatchNormalization
from keras.layers import Dropout
from keras.models import Sequential
from keras import backend as K
from keras import optimizers
inputShape= (50,50,3)
model=Sequential()
model.add(Conv2D(32, (3,3), activation = 'relu', input_shape = inputShape))
model.add(MaxPooling2D(2,2))
model.add(BatchNormalization(axis =-1))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3,3), activation = 'relu'))
model.add(MaxPooling2D(2,2))
model.add(BatchNormalization(axis = -1))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3,3), activation = 'relu'))
model.add(MaxPooling2D(2,2))
model.add(BatchNormalization(axis = -1))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(512, activation = 'relu'))
model.add(BatchNormalization(axis = -1))
model.add(Dropout(0.5))
model.add(Dense(2, activation = 'softmax'))
model.summary()
#compile the model
model.compile(loss = 'binary_crossentropy', optimizer = 'Adam', metrics = ['accuracy'])
H = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10)
print(H.history.keys())
# summarize history for accuracy
plt.plot(H.history['acc'])
plt.plot(H.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train','test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(H.history['loss'])
plt.plot(H.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','test'], loc='upper right')
plt.show()
# make predictions on the test set
preds = model.predict(x_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test.argmax(axis=1), preds.argmax(axis=1)))
from sklearn.metrics import classification_report
print(classification_report(y_test.argmax(axis=1), preds.argmax(axis=1)))
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues):
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if not title:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)
# Only use the labels that appear in the data
classes = classes[unique_labels(y_true, y_pred)]
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
# ... and label them with the respective list entries
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
return ax
class_names=np.array((0,1))
plot_confusion_matrix(y_test.argmax(axis=1), preds.argmax(axis=1), classes=class_names, title='Confusion Matrix')
###Output
Confusion matrix, without normalization
[[24 0]
[ 0 44]]
|
Day_1_Intro/Short_Python_Intro/01_Jupyter-Intro.ipynb | ###Markdown
Jupyter Lab - Web IDE
* By Janis Keuper
* Copyright: Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/
###Code
print("hello")
###Output
_____no_output_____
###Markdown
Live Demo
* Files
* Notebooks
* Terminal
* Kernels
* Keyboard Shortcuts
* Save and Export Cells
* Code vs Markup
* Execute Cells

Markup Cells

Jupyter Markup

Headings
```
# Heading 1
## Heading 2
### Heading 2.1
### Heading 2.2
```

Lists
* a
* bullet point list

1. numbered
2. list
3. is possible

Text Formatting
* *Italic*
* **bold**
* ***bold and italic***

Tables

| This | is    |
|------|-------|
| a    | table |

LaTeX Formulas

You can write formulas in text, like $e^{i\pi} + 1 = 0$. Or block equations
$$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$

Links

Add links, like one to more information on LaTeX: [link](https://www.overleaf.com/learn/latex/Free_online_introduction_to_LaTeX_(part_1))

HTML

or simply write HTML code: HTML test

Source Code

Python:
```python
print("Hello World")
```
Java Script:
```javascript
console.log("Hello World")
```
###Code
print ("this is python code")
###Output
_____no_output_____
###Markdown
Autocompletion
###Code
a = "This is a STRING"
# press tab to see the available methods
a.l
###Output
_____no_output_____
###Markdown
API Documentation and Help
###Code
# add '?' in front of call
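# e.g. uncomment to open the docstring of str.lower in the help pane:
# ?a.lower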
###Output
_____no_output_____
###Markdown
System Calls
Call OS cmd-line tools via the "!" operator:
###Code
!ls -lha
###Output
_____no_output_____
###Markdown
Magic-Commands
Jupyter has built-in, so-called "magic commands" (full list [here](https://ipython.readthedocs.io/en/stable/interactive/magics.html))
###Code
#single runtime
%time for i in range(1000): pass
#runtime test
%timeit a=34*56
# load the autoreload extension
%load_ext autoreload
# Set extension to reload modules every time before executing code
%autoreload 2
a=5
f="test"
# Outputs a list of all interactive variables in your environment
%who_ls
# Reduces the output to interactive variables of type "function"
%who_ls function
###Output
_____no_output_____ |
GradientDescent_LinearRegression/GradientDescent_LR_1variable.ipynb | ###Markdown
This notebook shows a simplified application of the gradient descent algorithm, to fit a regression model with one parameter (i.e. finding the slope of the regression line to get the best estimation).
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
from IPython.display import display
%matplotlib inline
###Output
_____no_output_____
###Markdown
The data consists of a sample of 20 observations. For each individual, we have the age, the weight and the systolic blood pressure. The "age" column is removed from the dataframe for simplicity.
###Code
df=pd.read_csv("systolic_blood_press.csv")
del df['age']
display(df.head())
display(df.describe())
m = len(df) # number of observations
###Output
_____no_output_____
###Markdown
A scatter plot of weight against systolic pressure shows a strong correlation, which can be modeled with a regression line. In this example, the intercept is deliberately set to zero to keep things simple. The call to sm.OLS finds the regression line slope that makes the SSR (sum of squared residuals) as low as possible.
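For a regression line through the origin, the slope that minimizes the SSR has a well-known closed form (with $x_i$ the weights and $y_i$ the systolic pressures):
$$\hat{\beta}=\frac{\sum_{i} x_i\,y_i}{\sum_{i} x_i^{2}}$$
which is the value sm.OLS computes here.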
###Code
# Fit regression model
plt.scatter(x=df.weight, y=df.systolic_press)
model=sm.OLS(df.systolic_press, df.weight) #no intercept
res=model.fit()
slope=res.params['weight']
# Plot the regression line
abline_values = [slope*i for i in df.weight]
plt.plot(df.weight, abline_values, color="red")
plt.xlabel("Weight")
plt.ylabel("Systolic Pressure")
# Values found by stats_model
print("Slope from statsmodels OLS:", slope)
print("SSR from statsmodels", res.ssr)
###Output
Slope from statsmodels OLS: 1.70699987852
SSR from statsmodels 842.399725998
###Markdown
The better the line fits, the lower the SSR, and vice versa. This can be expressed as a cost function, which takes the parameter to estimate as input (here the slope of the regression line) and outputs the corresponding error. This function has a minimum: its x-coordinate is the best parameter value and its y-coordinate the corresponding minimum error.
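With $m$ observations and a candidate slope $\beta$, the cost implemented below is the mean of the squared residuals,
$$J(\beta)=\frac{1}{m}\sum_{i=1}^{m}\left(y_i-\beta\,x_i\right)^{2},$$
which is later multiplied by $m$ to compare it with the SSR reported by statsmodels.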
###Code
def cost_function(coefficient):
error_squared=0
# iterate through the sample and sum the squares of the distance between each point to the regression line
for row in df.itertuples():
index, systolic_press, weight = row
estimated_y=coefficient*weight
error_squared += np.square(systolic_press-estimated_y)
return error_squared/len(df)
# Visualize the cost function
cost_x = np.arange(slope-0.5, slope+0.55, 0.05)
cost_y = [cost_function(i) for i in cost_x]
plt.plot(cost_x, cost_y)
plt.xlabel("variable to find")
plt.ylabel("SSR")
print("SSR returned by the cost function:", cost_function(slope)*m)
print("SSR from statsmodels:", res.ssr)
###Output
SSR returned by the cost function: 842.399725998
SSR from statsmodels: 842.399725998
###Markdown
The whole point of gradient descent is to find this minimum. Because the cost function is convex, it has a unique minimum, which is both local and global, so its derivative can be used to locate it. Gradient descent starts from an initial guess and improves it at each iteration, so that it tends towards the value minimizing the cost function. While approaching the minimum, the slope tends to zero and the gradients become smaller and smaller (convergence).
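Concretely, each iteration applies the update rule
$$\beta \leftarrow \beta - \alpha\,\frac{dJ}{d\beta},\qquad \frac{dJ}{d\beta}=\frac{2}{m}\sum_{i=1}^{m}\left(\beta\,x_i - y_i\right)x_i,$$
where $\alpha$ is the learning rate; the implementation below drops the constant factor 2, which only rescales $\alpha$.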
###Code
def gradient_descent_iter(min_x):
# if alpha is too big, the algorithm will not converge and "jump" above the minimum
alpha = 0.0001
epsilon = 0.00001
max_iteration = 100 #in case of no convergence (alpha too big)
iter = 0
while True:
iter += 1
# at each gradient, it iterates through the sample (sum(..), not efficient on large samples)
derivative = sum([(min_x*df.weight[i] - df.systolic_press[i])*df.weight[i] for i in range(m)]) / m
min_x = min_x - (alpha*derivative)
if (abs(derivative) < epsilon) or (iter > max_iteration):
return min_x
min_x = gradient_descent_iter(0)
print("Found by gradient descent:", min_x)
print("From Statsmodels:", slope)
###Output
Found by gradient descent: 1.70699987834
From Statsmodels: 1.70699987852
|
09_Time_Series/Investor_Flow_of_Funds_US/Exercises.ipynb | ###Markdown
Investor - Flow of Funds - US
Introduction: Special thanks to: https://github.com/rgrp for sharing the dataset.
Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
data= pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
data.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
data['Date'] = pd.to_datetime(data.Date)
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
data.set_index('Date', inplace = True)
data
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
data.index
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = data.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly.loc[~(monthly==0).all(axis=1)]
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yearly = monthly.resample('AS-JAN').sum()
yearly
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US
Introduction: Special thanks to: https://github.com/rgrp for sharing the dataset.
Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
df = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
df
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
df['Date'] = pd.to_datetime(df['Date'])
#weekly
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df.set_index('Date', inplace=True)
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
#already did
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly.loc[(monthly!=0).any(axis=1)]
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yearly = monthly.resample('Y').sum()
yearly
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US
Introduction: Special thanks to: https://github.com/rgrp for sharing the dataset.
Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import datetime
import ssl
try:
_create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
# Legacy Python that doesn't verify HTTPS certificates by default
pass
else:
# Handle target environment that doesn't support HTTPS verification
ssl._create_default_https_context = _create_unverified_https_context
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
df = pd.read_csv("https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv")
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
df
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df.set_index('Date', inplace=True)
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = df.index.astype('datetime64[ns]')
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
df = df.resample('M').sum()
(df!=0).any(1)
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
df.loc[(df!=0).any(1)]
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
df.resample('Y').sum()
###Output
_____no_output_____
###Markdown
BONUS: Create your own question and answer it.
###Code
df.index[0]
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US
Introduction: Special thanks to: https://github.com/rgrp for sharing the dataset.
Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = 'https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv'
investor = pd.read_csv(url)
investor.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
# weekly
investor.shape
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
investor.set_index('Date',inplace=True)
investor.head()
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
type(investor.index)
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
investor.index = pd.to_datetime(investor.index)
type(investor.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = investor.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly.dropna()
monthly
#monthly.dropna()
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yearly = monthly.resample('AS-JAN').sum()
yearly
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US
Introduction: Special thanks to: https://github.com/rgrp for sharing the dataset.
Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
df = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
#weekly
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df = df.set_index(df.Date, drop=True)
del df['Date']
df
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
df.index = df.index.to_period('M')
monthly = df.groupby(df.index).sum()
monthly.head()
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly.dropna()
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
# df.index = df.index.to_period('Y')
df.groupby(df.index).sum()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US
Introduction: Special thanks to: https://github.com/rgrp for sharing the dataset.
Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = 'https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv'
df = pd.read_csv(url)
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
# weekly
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df = df.set_index('Date')
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
import numpy as np
monthly = df.resample('M').sum()
monthly = monthly.replace(0, np.nan)
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly.dropna()
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
monthly.resample('YS').sum()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US
Introduction: Special thanks to: https://github.com/rgrp for sharing the dataset.
Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
called = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
called.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
# weekly data
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
called = called.set_index('Date')
called.head()
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
called.index
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
called.index = pd.to_datetime(called.index)
called.index
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = called.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly.dropna()
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
year = monthly.resample('AS-JAN').sum()
year
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US
Introduction: Special thanks to: https://github.com/rgrp for sharing the dataset.
Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called * called what?
###Code
url = "https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv"
funds = pd.read_csv(url, index_col="Date", parse_dates=True)
funds.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
# returns None right now because no frequency is set (and the spacing is not perfectly regular), but the frequency should be weekly
print(funds.index.inferred_freq)
###Output
None
###Markdown
Step 5. Set the column Date as the index.
###Code
# already done in import
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
# most likely object dtype by default; I already converted it to datetime above via parse_dates in read_csv
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
funds = funds.resample('BM').sum()
funds
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
# drop all all-zero rows; ~ is the Boolean NOT operator applied to the row mask
funds_non_zero = funds.loc[~(funds==0).all(axis=1)]
funds_non_zero
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
funds_non_zero.resample('BY').sum()
###Output
_____no_output_____
###Markdown
BONUS: Create your own question and answer it.
###Code
# print the monthly average domestic equity for 2014
funds.loc['2014'][['Domestic Equity']].resample('BM').mean()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
weekly = pd.read_csv('../../data/weekly.csv', sep=',')
weekly
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset? Step 5. Set the column Date as the index.
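The dates themselves give the answer; a small hedged check (assuming the `weekly` frame loaded above still has its `Date` column):

```python
# Most gaps between consecutive dates should be 7 days, i.e. weekly data.
gaps = pd.to_datetime(weekly['Date']).diff().value_counts()
print(gaps)
```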
###Code
weekly = weekly.set_index('Date')
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
weekly.index.dtype
weekly.index
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
weekly.index = pd.to_datetime(weekly.index)
type(weekly.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = weekly.resample('1M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
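Note that depending on the pandas version, `resample('M').sum()` may fill empty months with 0 rather than NaN, in which case `dropna()` alone leaves those rows in place; a hedged variant that covers both cases:

```python
# Treat all-zero months as missing, then drop rows that are entirely missing.
monthly = monthly.replace(0, np.nan).dropna(how='all')
```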
###Code
monthly = monthly.dropna()
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yearly = monthly.resample('1AS').sum()
yearly
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
pd.__version__
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = 'https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv'
wek = pd.read_csv(url)
print(wek.shape)
wek.head()
###Output
(44, 9)
###Markdown
Step 4. What is the frequency of the dataset?
###Code
wek.Date
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
if 'Date' in wek:
wek = wek.set_index('Date')
wek
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
wek.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
wek.index = pd.to_datetime(wek.index)
print(wek.index.dtype)
wek
###Output
datetime64[ns]
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
mth = wek.resample('M').sum()
mth
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
mth = mth[mth['Total Equity']!=0]
#mth = mth.drop(mth[mth['Total Equity']==0].index)
mth
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yrl = wek.resample('AS').sum()
yrl
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
import datetime
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
df = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 44 entries, 0 to 43
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 44 non-null object
1 Total Equity 44 non-null int64
2 Domestic Equity 44 non-null int64
3 World Equity 44 non-null int64
4 Hybrid 44 non-null int64
5 Total Bond 44 non-null int64
6 Taxable Bond 44 non-null int64
7 Municipal Bond 44 non-null int64
8 Total 44 non-null int64
dtypes: int64(8), object(1)
memory usage: 3.2+ KB
###Markdown
Step 5. Set the column Date as the index.
###Code
df = df.set_index(df['Date'])
df = df.drop('Date', axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
df.index
type(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly[monthly != 0]
monthly = monthly.dropna()
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
monthly.resample('A-JAN').sum()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = "https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv"
weekly = pd.read_csv(url)
weekly.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
weekly.value_counts().to_dict()
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
weekly.set_index("Date", inplace=True)
weekly.head()
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
weekly.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
weekly.index = pd.to_datetime(weekly.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = weekly.resample("M").sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly[monthly.any(axis=1)]
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
monthly.resample("Y").sum()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
df = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
print('Weekly')
###Output
Weekly
###Markdown
Step 5. Set the column Date as the index.
###Code
df.set_index('Date', inplace = True)
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly.dropna(inplace=True)
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yearly = monthly.resample('Y').sum()
###Output
_____no_output_____
###Markdown
BONUS: Create your own question and answer it.
###Code
import seaborn as sns
sns.scatterplot(x = 'Total Equity', y = 'Domestic Equity', data = df)
sns.despine()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = "https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv"
df = pd.read_csv(url, sep=',')
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
# weekly data
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df = df.set_index('Date')
df.head()
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 44 entries, 2012-12-05 to 2015-04-08
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Total Equity 44 non-null int64
1 Domestic Equity 44 non-null int64
2 World Equity 44 non-null int64
3 Hybrid 44 non-null int64
4 Total Bond 44 non-null int64
5 Taxable Bond 44 non-null int64
6 Municipal Bond 44 non-null int64
7 Total 44 non-null int64
dtypes: int64(8)
memory usage: 4.3 KB
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
import numpy as np
monthly = monthly.replace(0, np.nan)
monthly = monthly.dropna()
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
df.resample('Y').sum()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
df = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
from datetime import datetime
dates = pd.DataFrame()
dates['Dates'] = df.Date
dates.head()
dates.Dates = pd.to_datetime(dates.Dates)
dates.head()
d0 = dates.Dates[0]
d1 = dates.Dates[1]
delta = d1 - d0
print(delta)
###Output
7 days 00:00:00
###Markdown
Step 5. Set the column Date as the index.
###Code
df = df.set_index('Date')
df.head()
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
type(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly.dropna()
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yearly = df.resample('y').sum()
yearly
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
import datetime as dt
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called df
###Code
df = pd.read_csv('https://raw.githubusercontent.com' +
'/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
len(df)
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df = df.set_index('Date')
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
type(df.index)
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly[(monthly != 0).all(axis=1)]
monthly.head()
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yearly = monthly.resample('Y').sum()
yearly.head()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = 'https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv'
df = pd.read_csv(url)
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
df = df.set_index('Date')
df.head()
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df.index
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index? Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
type(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
monthly.head()
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly.loc[(monthly!=0).any(axis=1)]
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
yearly = monthly.resample('Y').sum()
yearly.head()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = 'https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv'
df = pd.read_csv(url)
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
# weekly data
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df = df.set_index('Date')
df.head()
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
type(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly.dropna()
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
year = monthly.resample('AS-JAN').sum()
year
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = 'https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv'
df = pd.read_csv(url)
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
_ = pd.to_datetime(df['Date'])
pd.Series([(_.iat[i+1]-_.iat[i]).days for i in range(df.shape[0]-1)]).value_counts()
# frequency seems to be weekly but a few exceptions
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df.set_index(df['Date'], drop=True, inplace=True)
df.drop(columns=['Date'], inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index.dtype, type(df.index)
# -> type 'object'
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
df.index.dtype, type(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
# df.resample('M').agg(np.sum)
monthly = df.resample('M').sum()
monthly
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly[monthly.sum(axis=1)!=0]
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
monthly.resample('AS-JAN').sum()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
df = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
#weekly
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df.set_index('Date',inplace=True,drop=True)
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
df.index.dtype
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
monthly = df.resample('M').sum()
monthly.loc[~monthly.isin([0]).any(axis=1), :]
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly.loc[~monthly.isin([0]).any(axis=1), :]
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
monthly.resample('Y').sum()
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
df = pd.read_csv('https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv')
df.head()
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset? Step 5. Set the column Date as the index.
###Code
date_index = pd.DataFrame(df).head()
date_index
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
type(df.index)
# date_index.set_index('Date')  # Why won't it let me set the index this way?
# Because set_index returns a new DataFrame by default: assign the result
# (date_index = date_index.set_index('Date')) or pass inplace=True. Also note
# that date_index above is only df.head(), i.e. a 5-row copy of the data.
###Output
_____no_output_____
###Markdown
Investor - Flow of Funds - US Introduction:Special thanks to: https://github.com/rgrp for sharing the dataset. Step 1. Import the necessary libraries
###Code
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (13, 8) #default figure size
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv). Step 3. Assign it to a variable called
###Code
url = "https://raw.githubusercontent.com/datasets/investor-flow-of-funds-us/master/data/weekly.csv"
df = pd.read_csv(url)
###Output
_____no_output_____
###Markdown
Step 4. What is the frequency of the dataset?
###Code
df.shape
df.info()
df.head()
#weekly?
###Output
_____no_output_____
###Markdown
Step 5. Set the column Date as the index.
###Code
df = df.set_index('Date')
###Output
_____no_output_____
###Markdown
Step 6. What is the type of the index?
###Code
df.index
###Output
_____no_output_____
###Markdown
Step 7. Set the index to a DatetimeIndex type
###Code
df.index = pd.to_datetime(df.index)
###Output
_____no_output_____
###Markdown
Step 8. Change the frequency to monthly, sum the values and assign it to monthly.
###Code
df.head()
monthly = df.to_period(freq='M').groupby(pd.Grouper(freq='M')).sum()
###Output
_____no_output_____
###Markdown
Step 9. You will notice that it filled the dataFrame with months that don't have any data with NaN. Let's drop these rows.
###Code
monthly = monthly[(monthly.T != 0).any()]
monthly
###Output
_____no_output_____
###Markdown
Step 10. Good, now we have the monthly data. Now change the frequency to year.
###Code
annually = df.to_period(freq='Y').groupby(pd.Grouper(freq='Y')).sum()
annually
###Output
_____no_output_____
###Markdown
BONUS: Create your own question and answer it.
###Code
sns.pairplot(monthly)
plt.show()
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/time_series_prediction/2_feature_engineering.ipynb | ###Markdown
Label and feature engineering

Learning objectives:
1. Learn how to use BigQuery to build time-series features and labels for forecasting
2. Learn how to visualize and explore features.
3. Learn effective scaling and normalizing techniques to improve our modeling results

Now that we have explored the data, let's start building our features, so we can build a model.

Feature Engineering Using the `price_history` table, we can look at the past performance of a given stock to try to predict its future stock price. In this notebook we will be focused on cleaning and creating features from this table. There are typically two different approaches to creating features with time-series data. **One approach** is to aggregate the time-series into "static" features, such as "min_price_over_past_month" or "exp_moving_avg_past_30_days". Using this approach, we can use a deep neural network or any more "traditional" ML model. Notice we have essentially removed all sequential information after aggregating. This assumption can work well in practice. A **second approach** is to preserve the ordered nature of the data and use a sequential model, such as a recurrent neural network. This approach has the nice benefit that it typically requires less feature engineering, although training sequential models typically takes longer. In this notebook, we will build features and also create rolling windows of the ordered time-series data.

Label Engineering We are trying to predict if the stock will go up or down. In order to do this we will need to "engineer" our label by looking into the future and using that as the label. We will be using the [`LAG`](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operatorslag) function in BigQuery to do this. Visually this looks like: Import libraries; setup
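As a rough, purely illustrative sketch of the label-engineering idea (the real work below is done in BigQuery; the toy frame and values here are made up):

```python
import pandas as pd

# Toy stand-in for price_history with made-up values.
prices = pd.DataFrame({
    'symbol': ['ABC'] * 4,
    'date': pd.date_range('2018-01-01', periods=4, freq='D'),
    'close': [10.0, 10.2, 10.1, 10.5],
})

# "Look into the future": tomorrow's close for each symbol, one row ahead.
prices['tomorrow_close'] = prices.groupby('symbol')['close'].shift(-1)

# Raw future change; the notebook later normalizes this against the S&P 500
# before bucketing it into up/stay/down labels.
prices['future_change'] = (prices['tomorrow_close'] - prices['close']) / prices['close']
print(prices)
```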
###Code
PROJECT = 'asl-testing-217717' # Replace with your project ID.
import pandas as pd
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
# Allow you to easily have Python variables in SQL query.
from IPython.core.magic import register_cell_magic
from IPython import get_ipython
@register_cell_magic('with_globals')
def with_globals(line, cell):
contents = cell.format(**globals())
if 'print' in line:
print(contents)
get_ipython().run_cell(contents)
def create_dataset():
dataset = bigquery.Dataset(bq.dataset("stock_market"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created")
except:
print("Dataset already exists")
create_dataset()
###Output
Dataset already exists
###Markdown
Create time-series features and determine label based on market movement Summary of base tables
###Code
%%with_globals
%%bigquery --project {PROJECT}
SELECT count(*) as cnt
FROM `asl-testing-217717.stock_src.price_history`
%%with_globals
%%bigquery --project {PROJECT}
SELECT count(*) as cnt
FROM `asl-testing-217717.stock_src.snp500`
###Output
_____no_output_____
###Markdown
Label engineering Ultimately, we need to end up with a single label for each day. The label takes on 3 values: {`down`, `stay`, `up`}, where `down` and `up` indicate that the normalized price (more on this below) went down 1% or more and up 1% or more, respectively. `stay` indicates the stock remained within 1%. The steps are:
1. Compare close price and open price
2. Compute price features using analytics functions
3. Compute normalized price change (%)
4. Join with S&P 500 table
5. Create labels (`up`, `down`, `stay`)

Compare close price and open price For each row, get the close price of yesterday and the open price of tomorrow using the [`LAG`](https://cloud.google.com/bigquery/docs/reference/legacy-sqllag) function. We will determine tomorrow's close - today's close. Shift to get tomorrow's close price. Learning objective 1
###Code
%%with_globals print
%%bigquery --project {PROJECT} df
CREATE OR REPLACE TABLE `{PROJECT}.stock_market.price_history_delta`
AS
(
WITH shifted_price AS
(
SELECT *,
(LAG(close, 1) OVER (PARTITION BY symbol order by Date DESC)) AS tomorrow_close
FROM `asl-testing-217717.stock_src.price_history`
WHERE Close > 0
)
SELECT a.*,
(tomorrow_close - Close) AS tomo_close_m_close
FROM shifted_price a
)
%%with_globals
%%bigquery --project {PROJECT}
SELECT *
FROM stock_market.price_history_delta
ORDER by Date
LIMIT 100
###Output
_____no_output_____
###Markdown
The stock market is going up on average. Learning objective 2
###Code
%%with_globals print
%%bigquery --project {PROJECT}
SELECT
AVG(close) AS avg_close,
AVG(tomorrow_close) AS avg_tomo_close,
AVG(tomorrow_close) - AVG(close) AS avg_change,
COUNT(*) cnt
FROM
stock_market.price_history_delta
###Output
_____no_output_____
###Markdown
Add time series features Compute price features using analytics functions In addition, we will also build time-series features using the min, max, mean, and std (can you think of any other functions to use?). To do this, let's use [analytic functions]() in BigQuery (also known as window functions). ```An analytic function is a function that computes aggregate values over a group of rows. Unlike aggregate functions, which return a single aggregate value for a group of rows, analytic functions return a single value for each row by computing the function over a group of input rows.``` Using the `AVG` analytic function, we can compute the average close price of a given symbol over the past week (5 business days):```python (AVG(close) OVER (PARTITION BY symbol ORDER BY Date ROWS BETWEEN 5 PRECEDING AND 1 PRECEDING))/close AS close_avg_prior_5_days``` Learning objective 1
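For intuition only, here is a small pandas analogue of that window computation (a hedged sketch with made-up values, not part of the BigQuery pipeline below):

```python
import pandas as pd

# Toy close-price series for one symbol (made-up values).
close = pd.Series([10.0, 10.2, 10.1, 10.5, 10.4, 10.8, 11.0])

# Mean of the previous 5 closes, excluding today, scaled by today's close:
# the pandas analogue of (AVG(close) OVER (... ROWS BETWEEN 5 PRECEDING AND 1 PRECEDING))/close.
close_avg_prior_5_days = close.shift(1).rolling(window=5, min_periods=5).mean() / close
print(close_avg_prior_5_days)
```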
###Code
def get_window_fxn(agg_fxn, n_days):
"""Generate a time-series feature.
E.g., Compute the average of the price over the past 5 days."""
SCALE_VALUE = 'close'
sql = '''
({agg_fxn}(close) OVER (PARTITION BY symbol
ORDER BY Date
ROWS BETWEEN {n_days} PRECEDING AND 1 PRECEDING))/{scale}
AS close_{agg_fxn}_prior_{n_days}_days'''.format(
agg_fxn=agg_fxn, n_days=n_days, scale=SCALE_VALUE)
return sql
WEEK = 5
MONTH = 20
YEAR = 52*5
agg_funcs = ('MIN', 'MAX', 'AVG', 'STDDEV')
lookbacks = (WEEK, MONTH, YEAR)
sqls = []
for fxn in agg_funcs:
for lookback in lookbacks:
sqls.append(get_window_fxn(fxn, lookback))
time_series_features_sql = ','.join(sqls) # SQL string.
def preview_query():
print(time_series_features_sql[0:10000])
preview_query()
%%with_globals print
%%bigquery --project {PROJECT}
CREATE OR REPLACE TABLE stock_market.price_features_delta
AS
SELECT *
FROM
(SELECT *,
{time_series_features_sql},
-- Also get the raw time-series values; will be useful for the RNN model.
(ARRAY_AGG(close) OVER (PARTITION BY symbol
ORDER BY Date
ROWS BETWEEN 260 PRECEDING AND 1 PRECEDING))
AS close_values_prior_260,
ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY Date) AS days_on_market
FROM stock_market.price_history_delta)
WHERE days_on_market > {YEAR}
%%bigquery --project {PROJECT}
SELECT *
FROM stock_market.price_features_delta
ORDER BY symbol, Date
LIMIT 10
###Output
_____no_output_____
###Markdown
Compute percentage change, then self join with prices from the S&P index. We will also compute the price change of the S&P index, GSPC. We do this so we can compute the normalized percentage change. Compute normalized price change (%) Before we can create our labels we need to normalize the price change using the S&P 500 index. The normalization using the S&P index fund helps ensure that the future price of a stock is not due to larger market effects. Normalization helps us isolate the factors contributing to the performance of a stock. Let's use a normalization scheme that subtracts the scaled difference in the S&P 500 index over the same time period. In Python:
```python
# Example calculation.
scaled_change = (50.59 - 50.69)/50.69
scaled_s_p = (939.38 - 930.09)/930.09
normalized_change = scaled_change - scaled_s_p
# normalized_change is approximately -1.2%
```
###Code
scaled_change = (50.59 - 50.69)/50.69
scaled_s_p = (939.38-930.09)/930.09
normalized_change = scaled_change - scaled_s_p
print('''
scaled change: {:2.3f}
scaled_s_p: {:2.3f}
normalized_change: {:2.3f}
'''.format(scaled_change, scaled_s_p, normalized_change))
###Output
scaled change: -0.002
scaled_s_p: 0.010
normalized_change: -0.012
###Markdown
Compute normalized price change (shown above). Let's join the scaled price change ((tomorrow_close - close)/close) with the [gspc](https://en.wikipedia.org/wiki/S%26P_500_Index) symbol (the symbol for the S&P index). Then we can normalize using the scheme described above. Learning objective 3
###Code
snp500_index = 'gspc'
%%with_globals print
%%bigquery --project {PROJECT}
CREATE OR REPLACE TABLE stock_market.price_features_norm_per_change
AS
WITH
all_percent_changes AS
(
SELECT *, (tomo_close_m_close/Close) AS scaled_change
FROM `{PROJECT}.stock_market.price_features_delta`
),
s_p_changes AS
(SELECT
scaled_change AS s_p_scaled_change,
date
FROM all_percent_changes
WHERE symbol="{snp500_index}")
SELECT all_percent_changes.*,
s_p_scaled_change,
(scaled_change - s_p_scaled_change)
AS normalized_change
FROM
all_percent_changes LEFT JOIN s_p_changes
--# Add S&P change to all rows
ON all_percent_changes.date = s_p_changes.date
###Output
_____no_output_____
###Markdown
Verify results
###Code
%%with_globals print
%%bigquery --project {PROJECT} df
SELECT *
FROM stock_market.price_features_norm_per_change
LIMIT 10
df.head()
###Output
_____no_output_____
###Markdown
Join with S&P 500 table and Create labels: {`up`, `down`, `stay`} Join the table with the list of S&P 500. This will allow us to limit our analysis to S&P 500 companies only.Finally we can create labels. The following SQL statement should do:```sql CASE WHEN normalized_change < -0.01 THEN 'DOWN' WHEN normalized_change > 0.01 THEN 'UP' ELSE 'STAY' END ``` Learning objective 1
###Code
down_thresh = -0.01
up_thresh = 0.01
%%with_globals print
%%bigquery --project {PROJECT} df
CREATE OR REPLACE TABLE stock_market.percent_change_sp500
AS
SELECT *,
CASE WHEN normalized_change < {down_thresh} THEN 'DOWN'
WHEN normalized_change > {up_thresh} THEN 'UP'
ELSE 'STAY'
END AS direction
FROM stock_market.price_features_norm_per_change features
INNER JOIN `asl-testing-217717.stock_src.snp500`
USING (symbol)
%%with_globals print
%%bigquery --project {PROJECT}
SELECT direction, COUNT(*) as cnt
FROM stock_market.percent_change_sp500
GROUP BY direction
%%with_globals print
%%bigquery --project {PROJECT} df
SELECT *
FROM stock_market.percent_change_sp500
LIMIT 20
df.columns
###Output
_____no_output_____
###Markdown
Feature exploration Now that we have created features describing the recent movements of each company's stock price, let's visualize them. This will help us understand the data better and possibly spot errors we may have made during our calculations. As a reminder, we calculated the scaled prices 1 week, 1 month, and 1 year before the date that we are predicting for. Let's write a re-usable function for aggregating our features. Learning objective 2
###Code
def get_aggregate_stats(field, round_digit=2):
"""Run SELECT ... GROUP BY field, rounding to nearest digit."""
df = bq.query('''
SELECT {field}, COUNT(*) as cnt
FROM
(SELECT ROUND({field}, {round_digit}) AS {field}
FROM stock_market.percent_change_sp500) rounded_field
GROUP BY {field}
ORDER BY {field}'''.format(field=field,
round_digit=round_digit,
PROJECT=PROJECT)).to_dataframe()
return df.dropna()
field = 'close_AVG_prior_260_days'
CLIP_MIN, CLIP_MAX = 0.1, 4.
df = get_aggregate_stats(field)
values = df[field].clip(CLIP_MIN, CLIP_MAX)
counts = 100*df['cnt']/df['cnt'].sum() # Percentage.
ax = values.hist(weights=counts, bins=30, figsize=(10,5))
ax.set(xlabel=field, ylabel="%");
field = 'normalized_change'
CLIP_MIN, CLIP_MAX = -0.1, 0.1
df = get_aggregate_stats(field, round_digit=3)
values = df[field].clip(CLIP_MIN, CLIP_MAX)
counts = 100*df['cnt']/df['cnt'].sum() # Percentage.
ax = values.hist(weights=counts, bins=50, figsize=(10,5))
ax.set(xlabel=field, ylabel="%");
###Output
_____no_output_____
###Markdown
Let's look at results by day-of-week, month, etc.
###Code
VALID_GROUPBY_KEYS = ('DAYOFWEEK', 'DAY', 'DAYOFYEAR',
'WEEK', 'MONTH', 'QUARTER', 'YEAR')
DOW_MAPPING = {1: 'Sun', 2: 'Mon', 3: 'Tues', 4: 'Wed',
               5: 'Thur', 6: 'Fri', 7: 'Sat'}
def groupby_datetime(groupby_key, field):
if groupby_key not in VALID_GROUPBY_KEYS:
raise Exception('Please use a valid groupby_key.')
sql = '''
SELECT {groupby_key}, AVG({field}) as avg_{field}
FROM
(SELECT {field},
EXTRACT({groupby_key} FROM date) AS {groupby_key}
FROM stock_market.percent_change_sp500) foo
GROUP BY {groupby_key}
ORDER BY {groupby_key} DESC'''.format(groupby_key=groupby_key,
field=field,
PROJECT=PROJECT)
print(sql)
df = bq.query(sql).to_dataframe()
if groupby_key == 'DAYOFWEEK':
df.DAYOFWEEK = df.DAYOFWEEK.map(DOW_MAPPING)
return df.set_index(groupby_key).dropna()
field = 'normalized_change'
df = groupby_datetime('DAYOFWEEK', field)
ax = df.plot(kind='barh', color='orange', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'close'
df = groupby_datetime('DAYOFWEEK', field)
ax = df.plot(kind='barh', color='orange', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'normalized_change'
df = groupby_datetime('MONTH', field)
ax = df.plot(kind='barh', color='blue', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'normalized_change'
df = groupby_datetime('QUARTER', field)
ax = df.plot(kind='barh', color='green', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'close'
df = groupby_datetime('YEAR', field)
ax = df.plot(kind='line', color='purple', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'normalized_change'
df = groupby_datetime('YEAR', field)
ax = df.plot(kind='line', color='purple', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
###Output
SELECT YEAR, AVG(normalized_change) as avg_normalized_change
FROM
(SELECT normalized_change,
EXTRACT(YEAR FROM date) AS YEAR
FROM stock_market.percent_change_sp500) foo
GROUP BY YEAR
ORDER BY YEAR DESC
###Markdown
Whoa that's a big spike in 2010. What could be causing this? Let's investigate some individual stocks.`SUN`
###Code
sql = '''
SELECT date, close
FROM stock_market.percent_change_sp500
WHERE symbol='SUN'
ORDER BY date
'''
df = bq.query(sql).to_dataframe()
df.date = pd.to_datetime(df.date)
df.plot('date', 'close')
# Zoom in
df.iloc[-500:].plot('date', 'close')
# Zoom in more.
df.iloc[-500:-450].plot('date', 'close')
###Output
_____no_output_____
###Markdown
Let's run the same query from earlier, except we will exclude `SUN`.
###Code
field = 'normalized_change'
%%with_globals
%%bigquery --project {PROJECT} df
SELECT symbol, YEAR, AVG({field}) avg_{field},
AVG(close) avg_close, COUNT(*) n_days
FROM (SELECT EXTRACT(YEAR FROM date) AS YEAR, *
FROM stock_market.percent_change_sp500)
WHERE symbol != 'SUN'
GROUP BY symbol, YEAR
ORDER BY avg_{field} DESC
###Output
_____no_output_____
###Markdown
Without the `SUN` stock, our data looks much better.
###Code
df.groupby('YEAR')['avg_' + field].mean().plot()
###Output
_____no_output_____ |
alter_approach.ipynb | ###Markdown
Amazon SageMaker IP Insights Algorithm - Alternative Approach

Hi there! Welcome to this notebook and repository. This is a code example for the [blog](https://data-centric-mind.medium.com/ip-insights-model-add-some-11d993c0d860) series. Hope you enjoy this notebook with your coffee (or tea)!

------- Introduction

In the previous [blogs](https://data-centric-mind.medium.com/ip-insights-model-de-simplify-part-i-6e8067227ceb), we explained how the IP Insights model works and how the data were simulated. However, it may also occur to you: hey, we are just trying to separate two very different distributions. The normal traffic was drawn from a beta distribution and the IPs, or to be more accurate, the ASNs, are highly repetitive. The malicious logins, however, are all randomly generated. Hmm, looks like we just need a model that can separate beta and random distributions.

I know this is a much simplified problem abstracted by the AWS researchers. It doesn't make sense to argue here, as it's like a 🐓 and 🥚 question. However, the point is: whenever we are presented with a problem, sometimes even with solutions or directions, we should still take a step back and ask ourselves: have we overcomplicated the problem? Did we use the model for the sake of using it? Is there an easier approach to take instead of using embeddings? In this notebook, we want to explore the possibility of creating a simple benchmark model to detect malicious login events with a non-parametric approach.

Contents
-------
1. [ObtainASN](ObtainASN)
2. [CreateTestingData](CreateTestingData)
3. [RandomnessTest](RandomnessTest)
4. [AddingSpice](AddingSpice)
5. [Summary](Summary)

______ Tips for AWS free tier users:
1. Check the doc [here](https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc&awsf.Free%20Tier%20Types=*all&awsf.Free%20Tier%20Categories=*all) to make sure you only use the services (especially instance types) covered by the AWS free tier.
2. Don't repeat the data generation process, as S3 charges by the number of reads/writes.
3. You can start with a much smaller set of users by setting NUM_USERS = 100.
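As a quick, purely illustrative sanity check of that intuition (made-up distribution parameters; nothing here comes from the real logs), separating a beta-distributed sample from a uniformly random one is easy, for example with a two-sample KS test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_like = rng.beta(a=2.0, b=8.0, size=1000)   # skewed, "patterned" traffic (made-up parameters)
random_like = rng.uniform(0.0, 1.0, size=1000)    # uniformly random "malicious" traffic

# A two-sample KS test separates the two populations very easily.
print(stats.ks_2samp(normal_like, random_like))
```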
###Code
from os import path
import generate_data_asn
from generate_data_asn import generate_dataset
import importlib
importlib.reload(generate_data_asn)
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
data_generation_file = "generate_data_asn.py" # Synthetic data generation module
log_file = "ipinsights_web_traffic_asn_added.log"
if not path.exists(data_generation_file):
print("file couldn't find")
# We simulate traffic for 10,000 users. This should yield about 3 million log lines (~700 MB).
NUM_USERS = 10000
generate_dataset(NUM_USERS, log_file)
###Output
Starting User Activity Simulation
Loaded ASN List: 827696 ASNs.
###Markdown
I have modified the generate_data.py script to include the ASN in each log line as the first field. Let's take a look at the log sample we created.
###Code
!wc -l $log_file
!head -3 $log_file
import pandas as pd
df_raw = pd.read_csv(
log_file,
sep=" ",
na_values="-",
header=None,
names=[
"asn",
"ip_address",
"rcf_id",
"user",
"timestamp",
"time_zone",
"request",
"status",
"size",
"referer",
"user_agent",
],
)
df_raw.head()
###Output
_____no_output_____
###Markdown
Let's do a quick summary of the ASNs for each user. At least half of the users have up to 3 ASNs, but there are some extreme values, which are likely the travellers we have defined.
###Code
df_raw.groupby(['user']).agg({'asn':'nunique'}).value_counts().describe()
df_raw["timestamp"] = pd.to_datetime(df_raw["timestamp"], format="[%d/%b/%Y:%H:%M:%S")
df_raw["timestamp"].describe(datetime_is_numeric=True)
# Check if they are all in the same timezone
num_time_zones = len(df_raw["time_zone"].unique())
num_time_zones
import pytz
from datetime import datetime
time_partition = (
datetime(2018, 11, 11, tzinfo=pytz.FixedOffset(0))
if num_time_zones > 1
else datetime(2018, 11, 11)
)
## create model training and testing data
df = df_raw[["user", "ip_address", "timestamp", "asn"]]
train_df = df[df["timestamp"] <= time_partition]
test_df = df[df["timestamp"] > time_partition]
# Shuffle train data
train_df = train_df.sample(frac=1)
train_df.shape
test_df.shape
###Output
_____no_output_____
###Markdown
CreateTestingData Next, let's create a sample with simulated bad traffic added, so that we can use this data set to test whether our approach works.
###Code
import numpy as np
from generate_data_asn import draw_ip

def create_test_case(train_df, test_df, num_samples, attack_freq):
"""Creates a test case from provided train and test data frames.
This generates test case for accounts that are both in training and testing data sets.
:param train_df: (panda.DataFrame with columns ['user', 'ip_address']) training DataFrame
:param test_df: (panda.DataFrame with columns ['user', 'ip_address']) testing DataFrame
:param num_samples: (int) number of test samples to use
:param attack_freq: (float) the ratio of negative_samples:positive_samples to generate for test case
:return: DataFrame with both good and bad traffic, with labels
"""
# Get all possible accounts. The IP Insights model can only make predictions on users it has seen in training
# Therefore, filter the test dataset for unseen accounts, as their results will not mean anything.
valid_accounts = set(train_df["user"])
valid_test_df = test_df[test_df["user"].isin(valid_accounts)]
good_traffic = valid_test_df.sample(num_samples, replace=False)
good_traffic = good_traffic[["user", "ip_address", "asn"]]
good_traffic["label"] = 0
# Generate malicious traffic
num_bad_traffic = int(num_samples * attack_freq)
bad_traffic_accounts = np.random.choice(
list(valid_accounts), size=num_bad_traffic, replace=True
)
# bad_traffic_ips = [draw_ip() for i in range(num_bad_traffic)]
# bad_traffic = pd.DataFrame({"user": bad_traffic_accounts, "ip_address": bad_traffic_ips})
# bad_traffic["label"] = 1
bad_traffic_ips = [draw_ip() for i in range(num_bad_traffic)]
bad_traffic = pd.DataFrame({"user": bad_traffic_accounts, "ip_address": [t[1] for t in bad_traffic_ips], "asn": [t[0] for t in bad_traffic_ips]})
bad_traffic["label"] = 1
# All traffic labels are: 0 for good traffic; 1 for bad traffic.
    all_traffic = pd.concat([good_traffic, bad_traffic])
return all_traffic
NUM_SAMPLES = 100000
test_case = create_test_case(train_df, test_df, num_samples=NUM_SAMPLES, attack_freq=1)
test_case.head()
test_case['label'].value_counts()
###Output
_____no_output_____
###Markdown
Tada! We have created a balanced data set with 200,000 login entries. Label 0 means a normal login and label 1 means a malicious login. Next, let's look at the key feature here: how many times each user logs in from the same ASN, i.e., the count of logins grouped by user and ASN. --------Quick note: I am a big fan of this book - , where some principles of how to present and visualize your data are summarized in detail. Feel free to take a look at it. In my day to day, I also follow the same principles to tell the stories about my data. The sample code for the visualization style can be found in this repo.
###Code
freq_count_good = test_case[test_case['label'] == 0].groupby('user').agg({'asn': 'count', 'label':'max'}).reset_index()
freq_count_bad = test_case[test_case['label'] == 1].groupby('user').agg({'asn': 'count', 'label':'max'}).reset_index()
good_freq = test_case[test_case['label'] == 0].groupby('user').agg({'asn': 'count'})['asn'].to_list()
bad_freq = test_case[test_case['label'] == 1].groupby('user').agg({'asn': 'count'})['asn'].to_list()
test_case[test_case['label'] == 1].groupby('user').agg({'asn': 'count'}).describe()
freq_count_good.head()
import numpy as np
import matplotlib
from matplotlib import transforms, pyplot as plt
import seaborn as sns
%matplotlib inline
# plt.ioff()
fig, ax1 = plt.subplots(figsize=(8.2, 6.09), dpi=150);
_=fig.subplots_adjust(left=0.104, right=0.768, top=0.751, bottom=0.187);
_=sns.distplot(freq_count_good['asn'], hist = False, kde = True, label = 'good', ax = ax1);
_=sns.distplot(freq_count_bad['asn'], hist = False, kde = True,label = 'bad', ax = ax1);
_=ax1.set_xlim([0, 80]);
_=ax1.set_ylim([0, 0.15]);
# _=plt.setp(ax1);
_=plt.xticks(rotation=45);
_=ax1.legend(loc='upper center', bbox_to_anchor=(0.5, -0.2),
fancybox=True, shadow=True, ncol=5);
_=ax1.spines['top'].set_visible(False);
_=ax1.spines['right'].set_visible(False);
# pass
###Output
_____no_output_____
###Markdown
The distributions actually meet our expectations: the good one follows a beta shape and the bad one looks quite normal, with a bell shape. RandomnessTest First, let's try to run a randomness test on the two groups before we further test each user's events. Here we use the popular run test approach. To learn about it, please check out this [blog](https://data-centric-mind.medium.com/randomness-test-run-test-8333a8b956a1).
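For intuition, here is a hand-rolled sketch of what the run test computes (a simplified illustration, not the statsmodels implementation used below): mark each value as above or below the median, count the runs of identical marks, and compare that count with what randomness would predict.

```python
import numpy as np

def runs_test_sketch(x):
    x = np.asarray(x, dtype=float)
    marks = x > np.median(x)                      # above / below the median
    runs = 1 + np.sum(marks[1:] != marks[:-1])    # number of runs of identical marks
    n1, n2 = marks.sum(), (~marks).sum()
    # Expected runs and variance under the hypothesis of a random ordering.
    exp_runs = 2 * n1 * n2 / (n1 + n2) + 1
    var_runs = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - exp_runs) / np.sqrt(var_runs)  # approximate z-statistic

print(runs_test_sketch([1, 5, 2, 6, 1, 7, 2, 8, 1, 9]))  # strongly alternating -> large z
```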
###Code
from statsmodels.sandbox.stats.runs import runstest_1samp
runstest_1samp(freq_count_good['asn'], cutoff = 'median')
runstest_1samp(freq_count_bad['asn'], cutoff = 'median')
###Output
_____no_output_____
###Markdown
The first returned value is the z-statistic and the second value in the tuple is the p-value. We can determine whether the data is randomly distributed based on the p-value and a chosen significance level. The null hypothesis is that the data is randomly distributed. If the p-value is less than the chosen significance level, we can reject the null and we have reason to believe the data is not randomly generated. Well, it looks like the first test, on the good counts, can be considered a rejection of the null hypothesis (random distribution) at a significance level of 0.105, while for the second test, on the bad samples, we do not have enough evidence to reject the hypothesis of a random distribution. Alright, it seems we can separate the good and bad traffic based on the number of logins each user has on each ASN.

________ AddingSpice

Making sense so far? OK, now let's think about how to make this more practical. I will skip most of the detailed demo for this session, as otherwise the notebook will get endlessly long. However, I will provide the key thoughts and functions; you are highly encouraged to try it out yourself.

1. Data streams. In reality, you receive one login & IP at a time instead of a sequence of logins. How can you test the randomness in this case? A possible solution is to model the n most recent historical login events together with the current new login and run the randomness test. You can use the two functions provided below to complete the test.
###Code
def check_random(df, asn_col = 'asn'):
""" Run randomness test on asn_col of the provided dataframe df.
"""
# when sample size is small, remember to use correction on the data.
s = [x for x in df[asn_col].tolist() if x is not None]
v = runstest_1samp(s, cutoff = 'median' ,correction=True)
return (v)
def get_user_p(df, label, group_col = 'user'):
""" Function return the z and p value for randomness test for each group of observations in the df
:para df: dataframe that contains all the observations
:para label: str, the column which used as labels we want to seperate
:return dataframe with randomtest zscore and pvalue added.
"""
dfg = df[df['label'] == label].groupby(group_col).apply(lambda x : check_random(x))
df_random0 = pd.DataFrame(dfg)
df_random0.columns = ['run_test']
df_random0['zscore'] = df_random0['run_test'].apply(lambda x: round(x[0], 3))
df_random0['pvalue'] = df_random0['run_test'].apply(lambda x: round(x[1],3))
return df_random0
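# Possible usage (a hedged sketch, assuming the `test_case` frame built earlier):
#   df_random_good = get_user_p(test_case, label=0)
#   df_random_bad = get_user_p(test_case, label=1)
# Users whose ASN sequence looks random (high p-value) are candidates for the
# malicious group; note the run test only makes sense for users with enough events.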
###Output
_____no_output_____
###Markdown
ASNlookup 2. ASN lookup. When your data and IPs are not sampled from a simulation, you need a different approach to obtain the ASN info. You can use the code provided below to do an ASN lookup.
###Code
# install the package and needed data
!apt-get install python-pip python-dev -y build-essential
!apt-get update && apt-get install -y build-essential
!python -m pip install pyasn
!pyasn_util_download.py --latest
!pyasn_util_convert.py --single rib.20220123.1200.bz2 20220123.dat
import pyasn
asndb = pyasn.pyasn('20220123.dat')
df['asn'] = ''
for idx, row in df.iterrows():
df.at[idx, 'asn'] = asndb.lookup(row['ip_address'])[0]
###Output
_____no_output_____
###Markdown
3. Add noise. Remember what we did in the very first [notebook](https://github.com/avoca-dorable/aws_ipinsights/blob/main/ipinsights-v1-add-noise.ipynb)? We added noise to our data because you are unlikely to have a perfect dataset in reality. Therefore, we can use a similar approach here: add some noisy logins to our good sample and see if our approach still works. The modified function is provided below.
###Code
import numpy as np
from generate_data_asn import draw_ip
def add_noise(train_df, user_perc, noise_per_account):
"""
This is a modified function compared to the original one.
    Injects randomly generated (noisy) logins for a subset of the training accounts.
    :param train_df: (pandas.DataFrame with columns ['user', 'ip_address']) training DataFrame
    :param user_perc: (float, [0,1]) fraction of users that receive noisy IPs
    :param noise_per_account: (int) number of random logins added to each selected account
    :return: DataFrame of the injected noisy (bad) traffic, with a label column
"""
# Get all possible accounts. The IP Insights model can only make predictions on users it has seen in training
# Therefore, filter the test dataset for unseen accounts, as their results will not mean anything.
valid_accounts = set(train_df["user"])
# Generate malicious traffic
num_bad_account = int(len(valid_accounts) * user_perc )
bad_traffic_accounts = np.random.choice(
list(valid_accounts), size=num_bad_account, replace=False
)
bad_traffic_ips = [draw_ip() for i in range(num_bad_account * noise_per_account)]
bad_traffic = pd.DataFrame({"user": list(bad_traffic_accounts) * noise_per_account, "ip_address": [t[1] for t in bad_traffic_ips], "asn": [t[0] for t in bad_traffic_ips]})
bad_traffic["label"] = 1
# All traffic labels are: 0 for good traffic; 1 for bad traffic.
return bad_traffic
noise_df = add_noise(train_df, user_perc = 0.005, noise_per_account = 20)
noise_df.head()
noise_train = pd.concat([noise_df[['user', 'ip_address']], train_df], ignore_index = True)
###Output
_____no_output_____ |
Machine Learning/Kmeans_Clustering/Kmeans_Clustering_Algorithm.ipynb | ###Markdown
CLUSTERING - KMEANS ALGO. (PYTHON). Submitted By : Tanmay Pandey * From the given ‘Iris’ dataset, predict the optimum number of clusters and represent it visually.
###Code
# importing the reqired libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import seaborn as sns
# Importing the Dataset.
url = "iris.csv"
data = pd.read_csv(url)
###Output
_____no_output_____
###Markdown
Analysing Dataset
###Code
# Checking the shape of data set
data.shape
###Output
_____no_output_____
###Markdown
* Here we have 150 rows and 6 columns
###Code
# Displying first 10 records of data.
data.head(10)
# Describing the dataset, i.e. computing basic statistical summaries of the data.
data.describe()
# Getting information about data
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Id 150 non-null int64
1 SepalLengthCm 150 non-null float64
2 SepalWidthCm 150 non-null float64
3 PetalLengthCm 150 non-null float64
4 PetalWidthCm 150 non-null float64
5 Species 150 non-null object
dtypes: float64(4), int64(1), object(1)
memory usage: 7.2+ KB
###Markdown
* Here we have 150 records with the respective data types, and no values are null.
###Code
# Checking unique species of iris and their count
data['Species'].value_counts()
###Output
_____no_output_____
###Markdown
* So we have 50 records of each species.
###Code
# checking the variation of species with given parameters
sns.pairplot(data, hue='Species')
###Output
_____no_output_____
###Markdown
* We can see that the characteristics of Iris setosa are different from those of virginica and versicolor.
###Code
# Dropping the not required columns from data
sp = data["Species"].values
data.drop("Id", inplace=True, axis=1)
data.drop("Species", inplace=True, axis=1)
data.head(5)
# checking correlation between variables
data.corr()
###Output
_____no_output_____
###Markdown
* Here we can see that petal length and sepal length are correlated, petal width and sepal length are correlated, and petal length and petal width are correlated.
###Code
# Finding best no of clusters for clustering by elbow method
x = data.iloc[:,[0,1,2]].values
R = range(1,10)
Sum_of_Squared_Distance = []
for k in R:
km = KMeans(n_clusters = k)
km = km.fit(x)
Sum_of_Squared_Distance.append(km.inertia_)
# Visualizing the Optimum Clusters
plt.plot(R, Sum_of_Squared_Distance, 'go--', color="green")
plt.title("Optimum Clusters By Elbow Method")
plt.xlabel("No of Clusters")
plt.ylabel("Sum Of Squared Distance")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
* Here we get the optimum number of clusters as 3, since the drop in the sum of squared distances after 3 is minimal. Training And Testing The Model
###Code
kmeans = KMeans(n_clusters = 3, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
predictions = kmeans.fit_predict(x)
# Checking The Values
predictions
# Visualising the clusters predicted by our model
plt.figure(figsize=(10,7))
plt.scatter(x[predictions == 0, 0], x[predictions == 0, 1],
s = 100, c = 'red', label = 'Iris-setosa')
plt.scatter(x[predictions == 1, 0], x[predictions == 1, 1],
s = 100, c = 'blue', label = 'Iris-versicolour')
plt.scatter(x[predictions == 2, 0], x[predictions == 2, 1],
s = 100, c = 'green', label = 'Iris-virginica')
# Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1],
s = 100, c = 'yellow', label = 'Centroids')
plt.grid()
plt.legend()
plt.show()
Species = ["Iris-setosa","Iris-versicolor" ,"Iris-virginica"]
Pred_Species = []
for i in predictions:
Pred_Species.append(Species[i])
sns.countplot(Pred_Species)
plt.xlabel("Species")
plt.ylabel("Predicted")
plt.show()
###Output
C:\Users\Tanmay\anaconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
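###Markdown
As a quick sanity check (a sketch added for illustration; it reuses the `sp` array of true species saved earlier and the `Pred_Species` list built above), a cross-tabulation shows how the predicted clusters line up with the actual species labels. Note that K-Means cluster indices are arbitrary, so the cluster-to-species mapping assumed above may need to be permuted.
###Code
# compare predicted cluster labels with the true species (hypothetical check, not part of the original submission)
comparison = pd.crosstab(pd.Series(sp, name="Actual"), pd.Series(Pred_Species, name="Predicted"))
comparison
###Output
_____no_output_____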
|
docs/examples/tutorials.ipynb | ###Markdown
Reading and visualising data
###Code
import numpy as np
import tools21cm as t2c
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Different simulation codes write their output in different formats. The same is true for observations, which will differ based on the observation facility and research group. One has to define a function that is specific to that case.In order to manipulate and analyse data with tools21cm, we want the data to be read in as a numpy array. Reading dataHere we read the ionisation fraction data cube produced with the [C2Ray](https://github.com/garrelt/C2-Ray3Dm) code. For the density field, we will consider the gridded density field created by an N-body code, [CubeP3M](https://github.com/jharno/cubep3m), which was used by the [C2Ray](https://github.com/garrelt/C2-Ray3Dm) code as input.We provide a few simulation outputs for testing: https://doi.org/10.5281/zenodo.3953639
###Code
path_to_datafiles = './data/'
z = 7.059
t2c.set_sim_constants(244) # This line is only useful while working with C2Ray simulations.
x_file = t2c.XfracFile(path_to_datafiles+'xfrac3d_7.059.bin')
d_file = t2c.DensityFile(path_to_datafiles+'7.059n_all.dat')
xfrac = x_file.xi
dens = d_file.cgs_density
###Output
_____no_output_____
###Markdown
The above function `set_sim_constants` is useful only for `C2Ray` simulation outputs. This function takes as its only parameter the box side in cMpc/h and sets simulation constants.See [here](https://tools21cm.readthedocs.io/contents.html#module-t2c.read_files) for more data reading functions. Visualising the data You can of course plot the data you read using your favorite plotting software. For example, if you have `matplotlib` installed:
###Code
import matplotlib.pyplot as plt
box_dims = 244/0.7 # Length of the volume along each direction in Mpc.
dx, dy = box_dims/xfrac.shape[1], box_dims/xfrac.shape[2]
y, x = np.mgrid[slice(dy/2,box_dims,dy),
slice(dx/2,box_dims,dx)]
plt.rcParams['figure.figsize'] = [16, 6]
plt.suptitle('$z={0:.1f},~x_v=${1:.2f}'.format(z,xfrac.mean()))
plt.subplot(121)
plt.title('Density contrast slice')
plt.pcolormesh(x, y, dens[0]/dens.mean()-1)
plt.xlabel('$x$ (Mpc)')
plt.ylabel('$y$ (Mpc)')
plt.colorbar()
plt.subplot(122)
plt.title('Ionisation fraction slice')
plt.pcolormesh(x, y, xfrac[0])
plt.xlabel('$x$ (Mpc)')
plt.ylabel('$y$ (Mpc)')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
21 cm brightness temperature We can construct the 21 cm brightness temperature from the density field and ionisation fraction field using `calc_dt`. Due to the absence of zero baseline, the mean signal will be subtracted from each frequency channel. One can use `subtract_mean_signal` to add this effect.
###Code
dT = t2c.calc_dt(xfrac, dens, z)
print('Mean of first channel: {0:.4f}'.format(dT[0].mean()))
dT_subtracted = t2c.subtract_mean_signal(dT, 0)
print('Mean of first channel: {0:.4f}'.format(dT_subtracted[0].mean()))
plt.rcParams['figure.figsize'] = [6, 5]
plt.title('21 cm signal')
plt.pcolormesh(x, y, dT_subtracted[0,:,:])
plt.xlabel('$x$ (Mpc)')
plt.ylabel('$y$ (Mpc)')
plt.colorbar(label='mK')
plt.show()
###Output
_____no_output_____
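###Markdown
As a quick check of the mean-signal subtraction above (a small sketch added for illustration; it assumes `dT_subtracted` is a numpy array with the frequency channels along axis 0, as used in the cell above), the mean of every channel should now be close to zero.
###Code
# largest absolute channel mean after subtraction; should be ~0
print(np.abs(dT_subtracted.mean(axis=(1, 2))).max())
###Output
_____no_output_____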
###Markdown
21 cm power spectrum One of the most interesting metrics to analyse this field is the power spectrum. Here we estimate the spherically averaged power spectrum using the `power_spectrum_1d` function. The function needs the length of the `input_array` in Mpc (or Mpc/h) through the `box_dims` parameter. This is used to calculate the wavenumbers (k). The unit of the output k values will be 1/Mpc (or h/Mpc). If the `input_array` has unequal lengths in each direction, then one can provide `box_dims` with a list containing the lengths in each direction.
###Code
box_dims = 244/0.7 # Length of the volume along each direction in Mpc.
ps, ks = t2c.power_spectrum_1d(dT_subtracted, kbins=15, box_dims=box_dims)
plt.rcParams['figure.figsize'] = [7, 5]
plt.title('Spherically averaged power spectrum.')
plt.loglog(ks, ps*ks**3/2/np.pi**2)
plt.xlabel('k (Mpc$^{-1}$)')
plt.ylabel('P(k) k$^{3}$/$(2\pi^2)$')
plt.show()
###Output
_____no_output_____
###Markdown
Redshift-space distortions The 21 cm signal will be modified while mapping from real space to redshift space due to peculiar velocities ([Mao et al. 2012](https://ui.adsabs.harvard.edu/abs/2012MNRAS.422..926M/abstract)). The `VelocityFile` function is used to read the velocity files produced by `CubeP3M`. We need the velocities in km/s as a numpy array of shape `(3,nGridx,nGridy,nGridyz)`, where the first axis represent the velocity component along x, y and z spatial direction. The `get_kms_from_density` attribute gives such a numpy array.
###Code
v_file = t2c.VelocityFile(path_to_datafiles+'7.059v_all.dat')
kms = v_file.get_kms_from_density(d_file)
###Output
_____no_output_____
###Markdown
The `get_distorted_dt` function will distort the signal.
###Code
dT_rsd = t2c.get_distorted_dt(dT, kms, z,
los_axis=0,
velocity_axis=0,
num_particles=20)
###Output
_____no_output_____
###Markdown
Spherically averaged power spectrum of the 21 cm signal with RSD.
###Code
ps_rsd, ks_rsd = t2c.power_spectrum_1d(dT_rsd, kbins=15, box_dims=box_dims)
plt.rcParams['figure.figsize'] = [7, 5]
plt.title('Spherically averaged power spectrum.')
plt.loglog(ks, ps*ks**3/2/np.pi**2, label='no RSD')
plt.loglog(ks_rsd, ps_rsd*ks_rsd**3/2/np.pi**2, linestyle='--', label='RSD')
plt.xlabel('k (Mpc$^{-1}$)')
plt.ylabel('P(k) k$^{3}$/$(2\pi^2)$')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We see in the above figure that the spherically averaged power spectrum has changed after RSD is implemented.However, a better marker of RSD in 21 cm signal is the power spectrum's $\mu (\equiv k_\parallel/k)$ dependence ([Jensen et al. 2013](https://academic.oup.com/mnras/article/435/1/460/1123792)). The power spectrum of 21 cm signal with RSD will have the following dependence ([Barkana & Loeb 2005](https://iopscience.iop.org/article/10.1086/430599)),$P(k,\mu) = P_0 + \mu^2P_2 +\mu^4P_4$.We can calculate $P(k,\mu)$ using `power_spectrum_mu` function.
###Code
Pk, mubins, kbins, nmode = t2c.power_spectrum_mu(
dT,
los_axis=0,
mubins=8,
kbins=15,
box_dims=box_dims,
exclude_zero_modes=True,
return_n_modes=True,
absolute_mus=False,
)
Pk_rsd, mubins_rsd, kbins_rsd, nmode_rsd = t2c.power_spectrum_mu(
dT_rsd,
los_axis=0,
mubins=8,
kbins=15,
box_dims=box_dims,
exclude_zero_modes=True,
return_n_modes=True,
absolute_mus=False,
)
plt.rcParams['figure.figsize'] = [7, 5]
ii = 8
plt.title('$k={0:.2f}$'.format(kbins[ii]))
plt.plot(mubins, Pk[:,ii], label='no RSD')
plt.plot(mubins_rsd, Pk_rsd[:,ii], linestyle='--', label='RSD')
plt.xlabel('$\mu$')
plt.ylabel('$P(k,\mu)$')
plt.legend()
plt.show()
###Output
_____no_output_____
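###Markdown
As a rough consistency check of the Barkana & Loeb form above, one can fit $P(k,\mu)$ at a fixed $k$ as a quadratic polynomial in $\mu^2$ with plain numpy. This is a sketch added for illustration; it reuses the `Pk_rsd`, `mubins_rsd` and `kbins_rsd` arrays computed in the previous cell.
###Code
ii = 8  # same k-bin as in the plot above
x = np.asarray(mubins_rsd)**2
y = np.asarray(Pk_rsd)[:, ii]
good = np.isfinite(x) & np.isfinite(y)  # guard against empty mu-bins
p4, p2, p0 = np.polyfit(x[good], y[good], 2)  # P(k,mu) ~ P0 + P2*mu^2 + P4*mu^4
print('k={0:.2f}: P0={1:.3g}, P2={2:.3g}, P4={3:.3g}'.format(kbins_rsd[ii], p0, p2, p4))
###Output
_____no_output_____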
###Markdown
Bubble size distribution The bubble (HII regions) size distribution is an interesting probe of the reionization process ([Giri et al. 2018](https://ui.adsabs.harvard.edu/abs/2018MNRAS.473.2949G/abstract)).`Tools21cm` contains three methods to determine the size distribution: the friends-of-friends, spherical average and mean free path approaches.In this tutorial, we will take the ionisation fraction field and assume all the pixels with value $>0.5$ to be ionised.
###Code
xHII = xfrac>0.5
boxsize = 244/0.7 # in Mpc
###Output
_____no_output_____
###Markdown
Mean free path (e.g. [Mesinger & Furlanetto 2007](https://iopscience.iop.org/article/10.1086/521806/meta))
###Code
r_mfp, dn_mfp = t2c.mfp(xHII, boxsize=boxsize, iterations=1000000)
plt.rcParams['figure.figsize'] = [7, 5]
plt.semilogx(r_mfp, dn_mfp)
plt.xlabel('$R$ (Mpc)')
plt.ylabel('$R\mathrm{d}P/\mathrm{d}R$')
plt.title('Mean free path method')
plt.show()
###Output
_____no_output_____
###Markdown
Spherical average (e.g. [Zahn et al. 2007](https://ui.adsabs.harvard.edu/abs/2007ApJ...654...12Z/abstract))
###Code
r_spa, dn_spa = t2c.spa(xHII, boxsize=boxsize, nscales=20)
plt.rcParams['figure.figsize'] = [7, 5]
plt.semilogx(r_spa, dn_spa)
plt.xlabel('$R$ (Mpc)')
plt.ylabel('$R\mathrm{d}P/\mathrm{d}R$')
plt.title('Spherical Average method')
plt.show()
###Output
_____no_output_____
###Markdown
Friends of friends (e.g. [Iliev et al. 2006](https://ui.adsabs.harvard.edu/abs/2006MNRAS.369.1625I/abstract))
###Code
labelled_map, volumes = t2c.fof(xHII)
fof_dist = t2c.plot_fof_sizes(volumes, bins=30, boxsize=boxsize)
plt.rcParams['figure.figsize'] = [7, 5]
plt.step(fof_dist[0], fof_dist[1])
plt.xscale('log')
plt.yscale('log')
plt.ylim(fof_dist[2],1)
plt.xlabel('$V$ (Mpc$^3$)')
plt.ylabel('$V^2\mathrm{d}P/\mathrm{d}V$')
plt.title('Friends of friends method')
plt.show()
###Output
_____no_output_____ |
Courses/Intermediate_ML_kaggle.ipynb | ###Markdown
Intermediate Machine Learning - Kaggle CourseThis notebook is just for **reference's sake.****The course and all content here is provided by Alexis Cook [here](https://www.kaggle.com/learn/intermediate-machine-learning)****Note that there will be errors in each code block because I haven't plugged in any data; just the syntax is written down.** 1. Intro What is covered in this course?- handling missing values- categorical variables- ML pipelines- cross validation- XGBoost- Data leakage 2. Missing valuesThere are 2 approaches to dealing with missing values -**1. Drop columns with missing values**This approach is risky, as a potentially useful column with just a few missing values could be dropped.
###Code
cols_with_missing = [col for col in X_train.columns if X_train[col].isnull().any()]
reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_valid = X_valid.drop(cols_with_missing, axis=1)
###Output
_____no_output_____
###Markdown
**2. A better option : imputation**Imputation fills missing values with some number, for example the mean value. The values won't be exact, but this yields better results than just dropping the column.
###Code
from sklearn.impute import SimpleImputer
myimputer = SimpleImputer()
imputed_X_train = pd.DataFrame(myimputer.fit_transform(X_train))
# imputation removes column names; put them back
imputed_X_train.columns = X_train.columns
###Output
_____no_output_____
###Markdown
**3. An extension to imputation**Imputation is a standard approach, but the imputed values could be higher or lower than the actual values. So adding an additional column indicating which rows were originally missing (True/False) can be useful. **Side note -** How do you use only the numerical predictors in a dataset?
###Code
# like this
X = X_full.select_dtypes(exclude=['object'])
###Output
_____no_output_____
###Markdown
**Full code -**
###Code
# Make copy to avoid changing original data (when imputing)
X_train_plus = X_train.copy()
X_valid_plus = X_valid.copy()
# Make new columns indicating what will be imputed
for col in cols_with_missing:
X_train_plus[col + '_was_missing'] = X_train_plus[col].isnull()
X_valid_plus[col + '_was_missing'] = X_valid_plus[col].isnull()
# Imputation
my_imputer = SimpleImputer()
imputed_X_train_plus = pd.DataFrame(my_imputer.fit_transform(X_train_plus))
imputed_X_valid_plus = pd.DataFrame(my_imputer.transform(X_valid_plus))
# Imputation removed column names; put them back
imputed_X_train_plus.columns = X_train_plus.columns
imputed_X_valid_plus.columns = X_valid_plus.columns
print("MAE from Approach 3 (An Extension to Imputation):")
print(score_dataset(imputed_X_train_plus, imputed_X_valid_plus, y_train, y_valid))
###Output
_____no_output_____
###Markdown
**Judging which method to apply**If there are only a few missing values in the data, it is not advisable to drop the complete column. Instead, impute the missing values. Use score_dataset() to compare the effects of different missing-value handling approaches.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid):
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_valid)
return mean_absolute_error(y_valid, preds)
###Output
_____no_output_____
###Markdown
**Side note**: In one of the examples, we found that imputation performed worse than dropping despite a low number of missing values. What could be the reason?- We see that there are some fields like GarageYrBlt, and taking the mean of these might not be the best idea.- There are other criteria such as the median or the minimum; however, it is not clear which criterion would be best.- After cross-checking with the MAE score, the median did produce a better result.
###Code
myimputer = SimpleImputer(strategy='median')
myimputer
###Output
_____no_output_____
###Markdown
3. Categorical variablesCategorical data needs to be preprocessed before it is plugged into the model. There are 3 approaches. We will use score_dataset() to test the quality of each approach.
###Code
# Get list of categorical variables
s = (X_train.dtypes == 'object')
object_cols = list(s[s].index)
###Output
_____no_output_____
###Markdown
**1. Drop categorical variables**This appoach only works if the variables do not contain any useful information.
###Code
drop_X_train = X_train.select_dtypes(exclude=['object'])
drop_X_valid = X_valid.select_dtypes(exclude=['object'])
print("MAE from Approach 1 (Drop categorical variables):")
print(score_dataset(drop_X_train, drop_X_valid, y_train, y_valid))
###Output
_____no_output_____
###Markdown
**2. Label encoding**- Assigns each unique value to a different integer- This works well with ordinal data (data which have ranking or order)- eg: "Never" (0) < "Rarely" (1) < "Most days" (2) < "Every day" (3).- works well with tree-based models (decision tree,random forest)
###Code
from sklearn.preprocessing import LabelEncoder
label_X_train = X_train.copy()
label_X_valid = X_valid.copy()
label_encoder = LabelEncoder()
for col in object_cols:
    label_X_train[col] = label_encoder.fit_transform(X_train[col])
    label_X_valid[col] = label_encoder.transform(X_valid[col])
print("MAE from Approach 2 (Label Encoding):")
print(score_dataset(label_X_train, label_X_valid, y_train, y_valid))
###Output
_____no_output_____
###Markdown
**3. One hot encoding**- Creates new column for each type of value in the original data.- For example a column containing "red","yellow","green" is split up into 3 columns .- each column will have two values 1 or 0, for presence of the color .- Good for vairiables without ranking (nominal variables)- does not perform well if categorical variable takes on a large number of values Some parameters - - We set handle_unknown='ignore' to avoid errors when the validation data contains classes that aren't represented in the training data, and- setting sparse=False ensures that the encoded columns are returned as a numpy array (instead of a sparse matrix)
###Code
from sklearn.preprocessing import OneHotEncoder
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[object_cols]))
OH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[object_cols]))
# One-hot encoding removed the index; put it back
OH_cols_train.index = X_train.index
OH_cols_valid.index = X_valid.index
# Remove categorical columns (will replace with one-hot encoding)
num_X_train = X_train.drop(object_cols, axis=1)
num_X_valid = X_valid.drop(object_cols, axis=1)
# Add one-hot encoded columns to numerical features
OH_X_train = pd.concat([num_X_train, OH_cols_train], axis=1)
OH_X_valid = pd.concat([num_X_valid, OH_cols_valid], axis=1)
print("MAE from Approach 3 (One-Hot Encoding):")
print(score_dataset(OH_X_train, OH_X_valid, y_train, y_valid))
###Output
_____no_output_____
###Markdown
**Best approach?**Dropping the column performs the worst. Of the remaining two methods, one-hot encoding usually performs best, but it depends on the case. **Side note**Sometimes the categorical columns in the validation data contain values that are not present in the training data. In that case you can divide the columns into good_label_cols and bad_label_cols, and drop the bad label columns.
###Code
# All categorical columns
object_cols = [col for col in X_train.columns if X_train[col].dtype == "object"]
# Columns that can be safely label encoded
good_label_cols = [col for col in object_cols if
set(X_train[col]) == set(X_valid[col])]
# Problematic columns that will be dropped from the dataset
bad_label_cols = list(set(object_cols)-set(good_label_cols))
print('Categorical columns that will be label encoded:', good_label_cols)
print('\nCategorical columns that will be dropped from the dataset:', bad_label_cols)
###Output
_____no_output_____
###Markdown
**cardinality of categorical variable**cardinality is the number of unique labels for each column
###Code
# Get number of unique entries in each column with categorical data
object_nunique = list(map(lambda col: X_train[col].nunique(), object_cols))
d = dict(zip(object_cols, object_nunique))
# Print number of unique entries by column, in ascending order
sorted(d.items(), key=lambda x: x[1])
###Output
_____no_output_____
###Markdown
We can make use of this information to figure out which columns can be one-hot encoded.For high-cardinality columns we do not use one-hot encoding. We will keep this threshold at 10.
###Code
# Columns that will be one-hot encoded
low_cardinality_cols = [col for col in object_cols if X_train[col].nunique() < 10]
###Output
_____no_output_____
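###Markdown
A minimal sketch of how this could be used (added for illustration, following the pattern of the earlier cells): one-hot encode only the low-cardinality columns and simply drop the high-cardinality ones.
###Code
from sklearn.preprocessing import OneHotEncoder
# columns with too many unique values to one-hot encode
high_cardinality_cols = list(set(object_cols) - set(low_cardinality_cols))
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[low_cardinality_cols]), index=X_train.index)
OH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[low_cardinality_cols]), index=X_valid.index)
# drop every original categorical column (both low and high cardinality) and add the encoded ones
OH_X_train = pd.concat([X_train.drop(object_cols, axis=1), OH_cols_train], axis=1)
OH_X_valid = pd.concat([X_valid.drop(object_cols, axis=1), OH_cols_valid], axis=1)
###Output
_____no_output_____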
###Markdown
4. Pipelines"Pipelines are a simple way to keep your data preprocessing and modeling code organized. Specifically, a pipeline bundles preprocessing and modeling steps so you can use the whole bundle as if it were a single step."Pros - - cleaner code- fewer bugs- easy to productionise- more options for model validation **1. Defining preprocessing steps**Just like we have Pipeline for bundling all the steps, we have ColumnTransformer to bundle together the preprocessing steps.
###Code
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
categorical_cols = [cname for cname in X_train_full.columns if
X_train_full[cname].nunique() < 10 and
X_train_full[cname].dtype == "object"]
# Select numerical columns
numerical_cols = [cname for cname in X_train_full.columns if
X_train_full[cname].dtype in ['int64', 'float64']]
#preprocessing for numerical data
numerical_transformer = SimpleImputer(strategy='median')
#preprocessing for categorical data
categorical_transformer = Pipeline(steps=[
('imputer',SimpleImputer(strategy='most_frequent')),
('Onehot',OneHotEncoder(handle_unknown='ignore'))
])
#Bundle preprocessing for numerical and categorical
preprocessor = ColumnTransformer(
transformers=[
('num',numerical_transformer,numerical_cols),
        ('cat', categorical_transformer, categorical_cols)
])
###Output
_____no_output_____
###Markdown
**2.Define the model**
###Code
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=200, random_state=0)
###Output
_____no_output_____
###Markdown
**3.Create and evaluate the pipeline**Use pipeline to bundle preprocessing and model steps- With the pipeline, we preprocess the training data and fit the model in a single line of code- With the pipeline, we supply the unprocessed features in X_valid to the predict() command, and the pipeline automatically preprocesses the features before generating predictions
###Code
from sklearn.metrics import mean_absolute_error
#bundle preprocessing and modelliing code in a pipeline
my_pipeline = Pipeline(steps=[
('preprocessor',preprocessor),
('model',model)
])
my_pipeline.fit(X_train,y_train)
preds = my_pipeline.predict(X_valid)
score = mean_absolute_error(y_valid,preds)
print('MAE:',score)
###Output
_____no_output_____
###Markdown
- Pipelines are valuable for cleaning up machine learning code and avoiding errors, and are especially useful for workflows with sophisticated data preprocessing.- Also you can experiment with model parameters, numerical and categorical transformers to get the least MAE score **3. Generate test predictions**
###Code
# Preprocessing of test data, fit model
preds_test = my_pipeline.predict(X_test) # Your code here
# Save test predictions to file
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False)
###Output
_____no_output_____
###Markdown
5. Cross validation  What is cross-validation?- In cross-validation, we run our modeling process on different subsets of the data to get multiple measures of model quality.- For example, we could begin by dividing the data into 5 pieces, each 20% of the full dataset. In this case, we say that we have broken the data into 5 "folds" - In Experiment 1, we use the first fold as a validation (or holdout) set and everything else as training data. This gives us a measure of model quality based on a 20% holdout set.- In Experiment 2, we hold out data from the second fold (and use everything except the second fold for training the model). The holdout set is then used to get a second estimate of model quality.and so on Cross-validation gives a more accurate measure of model quality.However it can take long time to runtradeoff?- For small datasets(less than 2 min to run), where extra computational burden isn't a big deal, you should run cross-validation.- For larger datasets, a single validation set is sufficient. Your code will run faster, and you may have enough data that there's little need to re-use some of it for holdout.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[('preprocessor',SimpleImputer()),
('model',RandomForestRegressor(n_estimators=50,random_state=0))
])
from sklearn.model_selection import cross_val_score
scores = -1*cross_val_score(my_pipeline, X,y,cv=5,scoring='neg_mean_absolute_error')
print('MAE scores:\n',scores)
###Output
_____no_output_____
###Markdown
**We can create a function to test out different n_estimators value to find the best**
###Code
def get_score(n_estimators):
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators, random_state=0))
])
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=3,
scoring='neg_mean_absolute_error')
return scores.mean()
# test for different n value
results = {}
n = [50,100,150,200,250,300,350,400]
for i in n:
results[i] = get_score(i)
# plot MAE against n_estimators to pick the best value
import matplotlib.pyplot as plt
plt.plot(list(results.keys()), list(results.values()))
plt.xlabel("n_estimators")
plt.ylabel("Mean Absolute Error")
plt.show()
###Output
_____no_output_____
###Markdown
Using CV, the best n_estimators value was found to be 200, although upon submission the best was still 250 (which was found during the pipeline stage). It has been suggested to make use of GridSearchCV() to find the best parameters. 6. XGBoostGradient boosting wins many Kaggle competitions and achieves state-of-the-art results.It is an ensemble method which goes through cycles and iteratively adds models into an ensemble.  Steps - - first we add a naive model and make predictions- then we calculate the loss using a loss function (e.g. mean squared loss)- then we use the loss function to fit and tune another model that reduces the loss- then we add the new model to the ensemble and make predictions- repeat the process XGBoost - Extreme Gradient Boosting - an implementation of gradient boosting with several additional features focused on performance and speed.
###Code
from xgboost import XGBRegressor
my_model = XGBRegressor()
my_model.fit(X_train, y_train)
from sklearn.metrics import mean_absolute_error
predictions = my_model.predict(X_valid)
print("Mean Absolute Error: " + str(mean_absolute_error(predictions, y_valid)))
###Output
_____no_output_____
###Markdown
**parameter tuning**n_estimators - number of cycles/number of models- too low causes underfitting- too high causes overfitting- usual value range - (100-1000) early_stopping_rounds * **important** *early_stopping_rounds offers a way to automatically find the ideal value for n_estimators- When using early_stopping_rounds, you also need to set aside some data for calculating the validation scores - this is done by setting the eval_set parameter.- Setting early_stopping_rounds=5 is a reasonable choice. In this case, we stop after 5 straight rounds of deteriorating validation scores.- If you later want to fit a model with all of your data, set n_estimators to whatever value you found to be optimal when run with early stopping. learning rate- instead of getting predictions by simply adding up the predictions from each component model, we can multiply the predictions from each model by a small number (known as the learning rate) before adding them in.- This means each tree we add to the ensemble helps us less. So, we can set a higher value for n_estimators without overfitting. If we use early stopping, the appropriate number of trees will be determined automatically.- In general, a small learning rate and large number of estimators will yield more accurate XGBoost models, though it will also take the model longer to train since it does more iterations through the cycle. As default, XGBoost sets learning_rate=0.1. n_jobs- On larger datasets where runtime is a consideration, you can use parallelism to build your models faster. It's common to set the parameter n_jobs equal to the number of cores on your machine. On smaller datasets, this won't help.- The resulting model won't be any better, so micro-optimizing for fitting time is typically nothing but a distraction. But, it's useful in large datasets where you would otherwise spend a long time waiting during the fit command. **Code-**
###Code
my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05, n_jobs=4)
my_model.fit(X_train, y_train,
early_stopping_rounds=5,
eval_set=[(X_valid, y_valid)],
verbose=False)
###Output
_____no_output_____
###Markdown
XGBoost is the leading software library for working with standard tabular data (the type of data you store in Pandas DataFrames, as opposed to more exotic types of data like images and videos). With careful parameter tuning, you can train highly accurate models. 7. Data leakage- Data leakage (or leakage) happens when your training data contains information about the target, but similar data will not be available when the model is used for prediction. This leads to high performance on the training set (and possibly even the validation data), but the model will perform poorly in production.- In other words, leakage causes a model to look accurate until you start making decisions with the model, and then the model becomes very inaccurate. There are 2 types - target leakage and train-test contamination. **target leakage**Target leakage occurs when your predictors include data that will not be available at the time you make predictions.- It is important to think about target leakage in terms of the timing or chronological order in which data becomes available, not merely whether a feature helps make good predictions.- **Think of it like this - if you do not have access to that feature when making a new prediction, then that feature shouldn't be there in the first place.**Example- The target variable is got_pneumonia (True/False) and there is a column named took_antibiotic_medicine (True/False).- Antibiotics are taken after the patient is diagnosed with pneumonia.- So if this model is deployed in the real world, the took_antibiotic_medicine field will not yet be available while doctors predict whether a patient has pneumonia, as it comes after the diagnosis is made.- To prevent this type of data leakage, any variable updated (or created) after the target value is realized should be excluded. **train-test contamination**- This occurs if validation data is corrupted, even in subtle ways, before splitting.- For example, imagine you run preprocessing (like fitting an imputer for missing values) before calling train_test_split().- If your validation is based on a simple train-test split, exclude the validation data from any type of fitting, including the fitting of preprocessing steps.- This is easier if you use scikit-learn pipelines.- When using cross-validation, it's even more critical that you do your preprocessing inside the pipeline! **Example** - credit card acceptance
###Code
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
# Since there is no preprocessing, we don't need a pipeline (used anyway as best practice!)
my_pipeline = make_pipeline(RandomForestClassifier(n_estimators=100))
cv_scores = cross_val_score(my_pipeline, X, y,
cv=5,
scoring='accuracy')
print("Cross-validation accuracy: %f" % cv_scores.mean())
"""
output -
Cross-validation accuracy: 0.979525
"""
#data details -
"""
card: 1 if credit card application accepted, 0 if not
reports: Number of major derogatory reports
age: Age n years plus twelfths of a year
income: Yearly income (divided by 10,000)
share: Ratio of monthly credit card expenditure to yearly income
expenditure: Average monthly credit card expenditure
owner: 1 if owns home, 0 if rents
selfempl: 1 if self-employed, 0 if not
dependents: 1 + number of dependents
months: Months living at current address
majorcards: Number of major credit cards held
active: Number of active credit accounts
"""
###Output
_____no_output_____
###Markdown
A few variables look suspicious. For example, does expenditure mean expenditure on this card or on cards used before applying?At this point, basic data comparisons can be very helpful:
###Code
expenditures_cardholders = X.expenditure[y]
expenditures_noncardholders = X.expenditure[~y]
print('Fraction of those who did not receive a card and had no expenditures: %.2f' \
%((expenditures_noncardholders == 0).mean()))
print('Fraction of those who received a card and had no expenditures: %.2f' \
%(( expenditures_cardholders == 0).mean()))
"""
output -
Fraction of those who did not receive a card and had no expenditures: 1.00
Fraction of those who received a card and had no expenditures: 0.02
"""
###Output
_____no_output_____
###Markdown
- As shown above, everyone who did not receive a card had no expenditures, while only 2% of those who received a card had no expenditures. It's not surprising that our model appeared to have a high accuracy. But this also seems to be a case of target leakage, where expenditures probably means expenditures on the card they applied for.- Since share is partially determined by expenditure, it should be excluded too. The variables active and majorcards are a little less clear, but from the description, they sound concerning. In most situations, it's better to be safe than sorry if you can't track down the people who created the data to find out more.- We would run a model without target leakage as follows:
###Code
#Drop leaky predictors from dataset
potential_leaks = ['expenditure', 'share', 'active', 'majorcards']
X2 = X.drop(potential_leaks, axis=1)
# Evaluate the model with leaky predictors removed
cv_scores = cross_val_score(my_pipeline, X2, y,
cv=5,
scoring='accuracy')
print("Cross-val accuracy: %f" % cv_scores.mean())
"""
output -
Cross-val accuracy: 0.827139
"""
###Output
_____no_output_____ |
docs/notebooks/sdba.ipynb | ###Markdown
Statistical Downscaling and Bias-Adjustment`xclim` provides tools and utilities to ease the bias-adjustment process through its `xclim.sdba` module. Adjustment algorithms all conform to the `train` - `adjust` scheme, formalized within `Adjustment` classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method would be applied by first estimating the adjustment factors between the historical simulation and the observation series, and then applying these factors to `sim`, which could be a future simulation.A very simple "Quantile Mapping" approach is available through the "Empirical Quantile Mapping" object.
###Code
import numpy as np
import xarray as xr
import cftime
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = (11, 5)
# Create toy data to explore bias adjustment, here fake temperature timeseries
t = xr.cftime_range('2000-01-01', '2030-12-31', freq='D', calendar='noleap')
ref = xr.DataArray((-20 * np.cos(2 * np.pi * t.dayofyear / 365) + 2 * np.random.random_sample((t.size,)) + 273.15
+ 0.1 * (t - t[0]).days / 365), # "warming" of 1K per decade,
dims=('time',), coords={'time': t}, attrs={'units': 'K'})
sim = xr.DataArray((-18 * np.cos(2 * np.pi * t.dayofyear / 365) + 2 * np.random.random_sample((t.size,)) + 273.15
+ 0.11 * (t - t[0]).days / 365), # "warming" of 1.1K per decade
dims=('time',), coords={'time': t}, attrs={'units': 'K'})
ref = ref.sel(time=slice(None, '2015-01-01'))
hist = sim.sel(time=slice(None, '2015-01-01'))
ref.plot(label='Reference')
sim.plot(label='Model')
plt.legend()
from xclim import sdba
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time', kind='+')
QM.train(ref, hist)
scen = QM.adjust(sim, extrapolation='constant', interp='nearest')
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved, so this is not surprising. A more complex example could have a bias distribution varying strongly across months. To perform the adjustment with different factors for each month, one can pass `group='time.month'`. Moreover, to reduce the risk of sharp changes in the adjustment at the interface of the months, `interp='linear'` can be passed to `adjust` and the adjustment factors will be interpolated linearly. Ex: the factors for the 1st of May will be the average of those for April and those for May.
###Code
QM_mo = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time.month', kind='+')
QM_mo.train(ref, hist)
scen = QM_mo.adjust(sim, extrapolation='constant', interp='linear')
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object.
###Code
QM_mo.ds
QM_mo.ds.af.plot()
###Output
_____no_output_____
###Markdown
GroupingFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. Units of `window` are the sampling frequency of the main grouping dimension (usually `time`). For more complex grouping, or simply for clarity, one can pass a `xclim.sdba.base.Grouper` directly.Example here with another, simpler, adjustment method. Here we want `sim` to be scaled so that its mean fits the one of `ref`. Scaling factors are to be computed separately for each day of the year, but including 15 days on either side of the day. This means that the factor for the 1st of May is computed including all values from the 16th of April to the 15th of May (of all years).
###Code
group = sdba.Grouper('time.dayofyear', window=31)
QM_doy = sdba.Scaling(group=group, kind='+')
QM_doy.train(ref, hist)
scen = QM_doy.adjust(sim)
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
sim
QM_doy.ds.af.plot()
###Output
_____no_output_____
###Markdown
Modular approachThe `sdba` module adopts a modular approach instead of implementing published and named methods directly.A generic bias adjustment process is laid out as follows:- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)- creating the adjustment object `Adj = Adjustment(**kwargs)` (from `xclim.sdba.adjustment`)- training `Adj.train(obs, sim)`- adjustment `scen = Adj.adjust(sim, **kwargs)`- post-processing on `scen` (for example: re-trending)The train-adjust approach allows one to inspect the trained adjustment object. The training information is stored in the underlying `Adj.ds` dataset and often has an `af` variable with the adjustment factors. Its layout and the other available variables vary between the different algorithms; refer to their part of the API docs.For heavy processing, this separation allows the computation and writing to disk of the training dataset before performing the adjustment(s). See the [advanced notebook](sdba-advanced.ipynb).Parameters needed by the training and the adjustment are saved to the `Adj.ds` dataset as an `adj_params` attribute. Other parameters, those only needed by the adjustment, are passed in the `adjust` call and written to the history attribute in the output scenario dataarray. First example : pr and frequency adaptationThe next example generates fake precipitation data and adjusts the `sim` timeseries, but also adds a step where the dry-day frequency of `hist` is adapted so that it fits that of `ref`. This ensures well-behaved adjustment factors for the smaller quantiles. Note also that we are passing `kind='*'` to use the multiplicative mode. Adjustment factors will be multiplied/divided instead of being added/subtracted.
###Code
vals = np.random.randint(0, 1000, size=(t.size,)) / 100
vals_ref = (4 ** np.where(vals < 9, vals/ 100, vals)) / 3e6
vals_sim = (1 + 0.1 * np.random.random_sample((t.size,))) * (4 ** np.where(vals < 9.5, vals/ 100, vals)) / 3e6
pr_ref = xr.DataArray(vals_ref, coords={"time": t}, dims=("time",), attrs={'units': 'mm/day'})
pr_ref = pr_ref.sel(time=slice('2000', '2015'))
pr_sim = xr.DataArray(vals_sim, coords={"time": t}, dims=("time",), attrs={'units': 'mm/day'})
pr_hist = pr_sim.sel(time=slice('2000', '2015'))
pr_ref.plot(alpha=0.9, label='Reference')
pr_sim.plot(alpha=0.7, label='Model')
plt.legend()
# 1st try without adapt_freq
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, kind='*', group='time')
QM.train(pr_ref, pr_hist)
scen = QM.adjust(pr_sim)
pr_ref.sel(time='2010').plot(alpha=0.9, label='Reference')
pr_hist.sel(time='2010').plot(alpha=0.7, label='Model - biased')
scen.sel(time='2010').plot(alpha=0.6, label='Model - adjusted')
plt.legend()
###Output
_____no_output_____
###Markdown
In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more "dry days" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10.1007/s10584-011-0224-4).Here we have our first encounter with a processing function requiring a _Dataset_ instead of individual DataArrays, like the adjustment methods. This is due to a powerful but complex optimization within xclim where most functions acting on groups are wrapped with xarray's [`map_blocks`](http://xarray.pydata.org/en/stable/generated/xarray.map_blocks.htmlxarray.map_blocks). It is not necessary to understand the way this works to use xclim, but be aware that most functions in `sdba.processing` will require Dataset inputs and specific variable names, which are explicited in their docstrings. Also, their signature might look strange, trust the docstring.The adjustment methods use the same optimization, but it is hidden under-the-hood. More is said about this in the [advanced notebook](sdba-advanced.ipynb).
###Code
# 2nd try with adapt_freq
ds_ad = sdba.processing.adapt_freq(xr.Dataset(dict(sim=pr_hist, ref=pr_ref, thresh=0.05)), group='time')
QM_ad = sdba.EmpiricalQuantileMapping(nquantiles=15, kind='*', group='time')
QM_ad.train(pr_ref, ds_ad.sim_ad)
scen_ad = QM_ad.adjust(pr_sim)
pr_ref.sel(time='2010').plot(alpha=0.9, label='Reference')
pr_sim.sel(time='2010').plot(alpha=0.7, label='Model - biased')
scen_ad.sel(time='2010').plot(alpha=0.6, label='Model - adjusted')
plt.legend()
###Output
_____no_output_____
###Markdown
Second example: tas and detrendingThe next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. When `sim` (or `sim_scl`) is detrended, its values are now anomalies, so we need to normalize `ref` and `hist` so we can compare similar values.This process is detailed here to show how the sdba module should be used in custom adjustment processes, but this specific method also exists as `sdba.DetrendedQuantileMapping` and is based on [Cannon et al. 2015](https://doi.org/10.1175/JCLI-D-14-00754.1). However, `DetrendedQuantileMapping` normalizes over a `time.dayofyear` group, regardless of what is passed in the `group` argument. As done here, it is anyway recommended to use `dayofyear` groups when normalizing, especially for variables with strong seasonal variations.
###Code
doy_win31 = sdba.Grouper('time.dayofyear', window=15)
Sca = sdba.Scaling(group=doy_win31, kind='+')
Sca.train(ref, hist)
sim_scl = Sca.adjust(sim)
detrender = sdba.detrending.PolyDetrend(degree=1, group='time.dayofyear', kind='+')
sim_fit = detrender.fit(sim_scl)
sim_detrended = sim_fit.detrend(sim_scl)
ref_n = sdba.processing.normalize(ref.rename('data').to_dataset(), group=doy_win31, kind='+').data
hist_n = sdba.processing.normalize(hist.rename('data').to_dataset(), group=doy_win31, kind='+').data
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time.month', kind='+')
QM.train(ref_n, hist_n)
scen_detrended = QM.adjust(sim_detrended, extrapolation='constant', interp='nearest')
scen = sim_fit.retrend(scen_detrended)
ref.groupby('time.dayofyear').mean().plot(label='Reference')
sim.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
Third example : Multi-method protocol - Hnilica et al. 2017In [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple : use principal components to define coordinates on the reference and on the simulation and then transform the simulation data from the latter to the former. Spatial correlation can thus be conserved by taking different points as the dimensions of the transform space. The method was demonstrated in the article by bias-adjusting precipitation over different drainage basins.The same method could be used for multivariate adjustment. The principle would be the same, concatening the different variables into a single dataset along a new dimension.Here we show how the modularity of `xclim.sdba` can be used to construct a quite complex adjustment protocol involving two adjustment methods : quantile mapping and principal components. Evidently, as this example uses only 2 years of data, it is not complete. It is meant to show how the adjustment functions and how the API can be used.
###Code
# We are using xarray's "air_temperature" dataset
ds = xr.tutorial.open_dataset("air_temperature")
# To get an exagerated example we select different points
# here "lon" will be our dimension of two "spatially correlated" points
reft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars(["lon", "lat"])
simt = ds.air.isel(lat=18, lon=[17, 35]).drop_vars(["lon", "lat"])
# Principal Components Adj, no grouping and use "lon" as the space dimensions
PCA = sdba.PrincipalComponents(group="time", crd_dims=['lon'])
PCA.train(reft, simt)
scen1 = PCA.adjust(simt)
# QM, no grouping, 20 quantiles and additive adjustment
EQM = sdba.EmpiricalQuantileMapping(group='time', nquantiles=50, kind='+')
EQM.train(reft, scen1)
scen2 = EQM.adjust(scen1)
# some Analysis figures
fig = plt.figure(figsize=(12, 16))
gs = plt.matplotlib.gridspec.GridSpec(3, 2, fig)
axPCA = plt.subplot(gs[0, :])
axPCA.scatter(reft.isel(lon=0), reft.isel(lon=1), s=20, label='Reference')
axPCA.scatter(simt.isel(lon=0), simt.isel(lon=1), s=10, label='Simulation')
axPCA.scatter(scen2.isel(lon=0), scen2.isel(lon=1), s=3, label='Adjusted - PCA+EQM')
axPCA.set_xlabel('Point 1')
axPCA.set_ylabel('Point 2')
axPCA.set_title('PC-space')
axPCA.legend()
refQ = reft.quantile(EQM.ds.quantiles, dim='time')
simQ = simt.quantile(EQM.ds.quantiles, dim='time')
scen1Q = scen1.quantile(EQM.ds.quantiles, dim='time')
scen2Q = scen2.quantile(EQM.ds.quantiles, dim='time')
for i in range(2):
if i == 0:
axQM = plt.subplot(gs[1, 0])
else:
axQM = plt.subplot(gs[1, 1], sharey=axQM)
axQM.plot(refQ.isel(lon=i), simQ.isel(lon=i), label='No adj')
axQM.plot(refQ.isel(lon=i), scen1Q.isel(lon=i), label='PCA')
axQM.plot(refQ.isel(lon=i), scen2Q.isel(lon=i), label='PCA+EQM')
axQM.plot(refQ.isel(lon=i), refQ.isel(lon=i), color='k', linestyle=':', label='Ideal')
axQM.set_title(f'QQ plot - Point {i + 1}')
axQM.set_xlabel('Reference')
axQM.set_ylabel('Model')
axQM.legend()
axT = plt.subplot(gs[2, :])
reft.isel(lon=0).plot(ax=axT, label='Reference')
simt.isel(lon=0).plot(ax=axT, label='Unadjusted sim')
#scen1.isel(lon=0).plot(ax=axT, label='PCA only')
scen2.isel(lon=0).plot(ax=axT, label='PCA+EQM')
axT.legend()
axT.set_title('Timeseries - Point 1')
###Output
_____no_output_____
###Markdown
Fourth example : Multivariate bias-adjustment with multiple steps - Cannon 2018This section replicates the "MBCn" algorithm described by [Cannon (2018)](https://doi.org/10.1007/s00382-017-3580-6). The method relies on some univariate algorithm, an adaptation of the N-pdf transform of [Pitié et al. (2005)](https://ieeexplore.ieee.org/document/1544887/) and a final reordering step.In the following, we use the AHCCD and CanESM2 data as reference and simulation and we correct both `pr` and `tasmax` together.
###Code
from xclim.testing import open_dataset
from xclim.core.units import convert_units_to
dref = open_dataset('sdba/ahccd_1950-2013.nc', chunks={'location': 1}, drop_variables=['lat', 'lon']).sel(time=slice('1981', '2010'))
dref = dref.assign(
tasmax=convert_units_to(dref.tasmax, 'K'),
pr=convert_units_to(dref.pr, 'kg m-2 s-1')
)
dsim = open_dataset('sdba/CanESM2_1950-2100.nc', chunks={'location': 1}, drop_variables=['lat', 'lon'])
dhist = dsim.sel(time=slice('1981', '2010'))
dsim = dsim.sel(time=slice('2041', '2070'))
dref
###Output
_____no_output_____
###Markdown
Perform an initial univariate adjustment.
###Code
# additive for tasmax
QDMtx = sdba.QuantileDeltaMapping(nquantiles=20, kind='+', group='time')
QDMtx.train(dref.tasmax, dhist.tasmax)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_tx = QDMtx.adjust(dhist.tasmax)
scens_tx = QDMtx.adjust(dsim.tasmax)
# remove == 0 values in pr:
dref['pr'] = sdba.processing.jitter_under_thresh(dref.pr, 1e-5)
dhist['pr'] = sdba.processing.jitter_under_thresh(dhist.pr, 1e-5)
dsim['pr'] = sdba.processing.jitter_under_thresh(dsim.pr, 1e-5)
# multiplicative for pr
QDMpr = sdba.QuantileDeltaMapping(nquantiles=20, kind='*', group='time')
QDMpr.train(dref.pr, dhist.pr)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_pr = QDMpr.adjust(dhist.pr)
scens_pr = QDMpr.adjust(dsim.pr)
scenh = xr.Dataset(dict(tasmax=scenh_tx, pr=scenh_pr))
scens = xr.Dataset(dict(tasmax=scens_tx, pr=scens_pr))
###Output
_____no_output_____
###Markdown
Stack the variables to multivariate arrays and standardize themThe standardization process ensures the mean and standard deviation of each column (variable) are 0 and 1, respectively.`hist` and `sim` are standardized together so the two series are coherent. We keep the mean and standard deviation to be reused when we build the result.
###Code
# Stack the variables (tasmax and pr)
ref = sdba.base.stack_variables(dref)
scenh = sdba.base.stack_variables(scenh)
scens = sdba.base.stack_variables(scens)
# Standardize
ref, _, _ = sdba.processing.standardize(ref)
allsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), 'time'))
hist = allsim.sel(time=scenh.time)
sim = allsim.sel(time=scens.time)
###Output
_____no_output_____
###Markdown
Perform the N-dimensional probability density function transformThe NpdfTransform will iteratively randomly rotate our arrays in the "variables" space and apply the univariate adjustment before rotating it back. In Cannon (2018) and Pitié et al. (2005), it can be seen that the source array's joint distribution converges toward the target's joint distribution when a large number of iterations is done.
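To make the rotation idea concrete, here is a hedged toy sketch of a single iteration with plain NumPy: draw a random orthogonal rotation, quantile-map each rotated coordinate of the simulation onto the reference, then rotate back. It is not xclim's implementation and all names are illustrative.
###Code
import numpy as np
rng = np.random.default_rng(0)
toy_ref = rng.normal(size=(1000, 2))  # stand-in for the standardized reference
toy_sim = rng.normal(loc=0.5, scale=1.3, size=(1000, 2))  # stand-in for the standardized simulation
# 1. Random orthogonal rotation (QR decomposition of a random matrix)
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)))
ref_rot, sim_rot = toy_ref @ Q, toy_sim @ Q
# 2. Univariate quantile mapping of each rotated coordinate of sim onto ref
quantiles = np.linspace(0.01, 0.99, 20)
for k in range(2):
    sim_q = np.quantile(sim_rot[:, k], quantiles)
    ref_q = np.quantile(ref_rot[:, k], quantiles)
    sim_rot[:, k] = np.interp(sim_rot[:, k], sim_q, ref_q)  # constant extrapolation outside the quantiles
# 3. Rotate back; repeating steps 1-3 many times brings sim's joint distribution toward ref's
toy_sim = sim_rot @ Q.T
###Output
_____no_output_____
###Markdown
In xclim, this whole iterative procedure is handled by `NpdfTransform`, used below on the standardized data.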
###Code
from xclim import set_options
NpdfT = sdba.adjustment.NpdfTransform(
base=sdba.QuantileDeltaMapping, # Use QDM as the univariate adjustment.
base_kws={'nquantiles': 20, 'group': 'time'},
n_iter=20, # perform 20 iterations
n_escore=1000, # only send 1000 points to the escore metric (it is really slow)
)
# See the advanced notebook for details on how this option works
with set_options(sdba_extra_output=True):
hist, sim, extra = NpdfT.train_adjust(ref, hist, sim)
###Output
_____no_output_____
###Markdown
Restoring the trendThe NpdfT has given us new "hist" and "sim" arrays with a correct rank structure. However, the trend is lost in this process. We reorder the result of the initial adjustment according to the rank structure of the NpdfT outputs to get our final bias-adjusted series.`sdba.processing.reordering` is one of those functions that need a dataset as input, instead of taking multiple arrays. The call sequence looks a bit clumsy: 'sim' is the argument to reorder and 'ref' the argument that provides the order.
###Code
scenh = sdba.processing.reordering(scenh, hist, group='time')
scens = sdba.processing.reordering(scens, sim, group='time')
scenh = sdba.base.unstack_variables(scenh, 'variables')
scens = sdba.base.unstack_variables(scens, 'variables')
###Output
_____no_output_____
###Markdown
There we are!Let's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call.
###Code
from dask import compute
from dask.diagnostics import ProgressBar
tasks = [
scenh.isel(location=2).to_netcdf('mbcn_scen_hist_loc2.nc', compute=False),
scens.isel(location=2).to_netcdf('mbcn_scen_sim_loc2.nc', compute=False),
extra.escores.isel(location=2).to_dataset().to_netcdf('mbcn_escores_loc2.nc', compute=False)
]
with ProgressBar():
compute(tasks)
###Output
_____no_output_____
###Markdown
Let's compare the series and look at the distance scores to see how well the Npdf transform has converged.
###Code
scenh = xr.open_dataset('mbcn_scen_hist_loc2.nc')
fig, ax = plt.subplots()
dref.isel(location=2).tasmax.plot(ax=ax, label='Reference')
scenh.tasmax.plot(ax=ax, label='Adjusted', alpha=0.65)
dhist.isel(location=2).tasmax.plot(ax=ax, label='Simulated')
ax.legend()
escores = xr.open_dataarray('mbcn_escores_loc2.nc')
diff_escore = escores.differentiate('iterations')
diff_escore.plot()
plt.title('Difference of the subsequent e-scores.')
plt.ylabel('E-scores difference')
assert all(diff_escore < 0.2) # this is for testing, please ignore
###Output
_____no_output_____
###Markdown
Statistical Downscaling and Bias-Adjustment`xclim` provides tools and utilities to ease the bias-adjustment process through its `xclim.sdba` module. Almost all adjustment algorithms conform to the `train` - `adjust` scheme, formalized within `TrainAdjust` classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method would be applied by first estimating the adjustment factors between the historical simulation and the observation series, and then applying these factors to `sim`, which could be a future simulation.This notebook presents examples, while a bit more information and the API are given on [this page](../sdba.rst).A very simple "Quantile Mapping" approach is available through the "Empirical Quantile Mapping" object. The object is created through the `.train` method of the class, and the simulation is adjusted with `.adjust`.
###Code
import numpy as np
import xarray as xr
import cftime
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use("seaborn")
plt.rcParams["figure.figsize"] = (11, 5)
# Create toy data to explore bias adjustment, here fake temperature timeseries
t = xr.cftime_range("2000-01-01", "2030-12-31", freq="D", calendar="noleap")
ref = xr.DataArray(
(
-20 * np.cos(2 * np.pi * t.dayofyear / 365)
+ 2 * np.random.random_sample((t.size,))
+ 273.15
+ 0.1 * (t - t[0]).days / 365
), # "warming" of 1K per decade,
dims=("time",),
coords={"time": t},
attrs={"units": "K"},
)
sim = xr.DataArray(
(
-18 * np.cos(2 * np.pi * t.dayofyear / 365)
+ 2 * np.random.random_sample((t.size,))
+ 273.15
+ 0.11 * (t - t[0]).days / 365
), # "warming" of 1.1K per decade
dims=("time",),
coords={"time": t},
attrs={"units": "K"},
)
ref = ref.sel(time=slice(None, "2015-01-01"))
hist = sim.sel(time=slice(None, "2015-01-01"))
ref.plot(label="Reference")
sim.plot(label="Model")
plt.legend()
from xclim import sdba
QM = sdba.EmpiricalQuantileMapping.train(
ref, hist, nquantiles=15, group="time", kind="+"
)
scen = QM.adjust(sim, extrapolation="constant", interp="nearest")
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved so this is not surprising. A more complex example could have a bias distribution varying strongly across months. To perform the adjustment with different factors for each month, one can pass `group='time.month'`. Moreover, to reduce the risk of sharp changes in the adjustment at the interface of the months, `interp='linear'` can be passed to `adjust` and the adjustment factors will be interpolated linearly. Ex: the factors for the 1st of May will be the average of those for April and those for May.
###Code
QM_mo = sdba.EmpiricalQuantileMapping.train(
ref, hist, nquantiles=15, group="time.month", kind="+"
)
scen = QM_mo.adjust(sim, extrapolation="constant", interp="linear")
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object.
###Code
QM_mo.ds
QM_mo.ds.af.plot()
###Output
_____no_output_____
###Markdown
GroupingFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. Units of `window` are the sampling frequency of the main grouping dimension (usually `time`). For more complex grouping, or simply for clarity, one can pass an `xclim.sdba.base.Grouper` directly.Example here with another, simpler, adjustment method. Here we want `sim` to be scaled so that its mean fits the one of `ref`. Scaling factors are to be computed separately for each day of the year, but including 15 days on either side of the day. This means that the factor for the 1st of May is computed including all values from the 16th of April to the 15th of May (of all years).
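As a quick hedged illustration reusing the toy series above, the `group` argument accepts either form: the two trainings below are equivalent, and the `Grouper` object is also how the `window` padding used in the next cell is specified.
###Code
# Equivalent ways of specifying a monthly grouping: a plain string or a Grouper object
QM_str = sdba.EmpiricalQuantileMapping.train(ref, hist, nquantiles=15, group="time.month", kind="+")
QM_grp = sdba.EmpiricalQuantileMapping.train(
    ref, hist, nquantiles=15, group=sdba.Grouper("time.month"), kind="+"
)
###Output
_____no_output_____
###Markdown
Below, a day-of-year `Grouper` with a 31-day window is used to train a simple scaling adjustment.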
###Code
group = sdba.Grouper("time.dayofyear", window=31)
QM_doy = sdba.Scaling.train(ref, hist, group=group, kind="+")
scen = QM_doy.adjust(sim)
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
sim
QM_doy.ds.af.plot()
###Output
_____no_output_____
###Markdown
Modular approachThe `sdba` module adopts a modular approach instead of implementing published and named methods directly.A generic bias adjustment process is laid out as follows:- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)- creating and training the adjustment object `Adj = Adjustment.train(obs, hist, **kwargs)` (from `xclim.sdba.adjustment`)- adjustment `scen = Adj.adjust(sim, **kwargs)`- post-processing on `scen` (for example: re-trending)The train-adjust approach allows one to inspect the trained adjustment object. The training information is stored in the underlying `Adj.ds` dataset and often has an `af` variable with the adjustment factors. Its layout and the other available variables vary between the different algorithms; refer to their part of the API docs.For heavy processing, this separation allows the computation and writing to disk of the training dataset before performing the adjustment(s). See the [advanced notebook](sdba-advanced.ipynb).Parameters needed by the training and the adjustment are saved to the `Adj.ds` dataset as an `adj_params` attribute. Other parameters, those only needed by the adjustment, are passed in the `adjust` call and written to the history attribute in the output scenario dataarray. First example : pr and frequency adaptationThe next example generates fake precipitation data and adjusts the `sim` timeseries but also adds a step where the dry-day frequency of `hist` is adapted so that it fits the one of `ref`. This ensures well-behaved adjustment factors for the smaller quantiles. Note also that we are passing `kind='*'` to use the multiplicative mode. Adjustment factors will be multiplied/divided instead of being added/subtracted.
###Code
vals = np.random.randint(0, 1000, size=(t.size,)) / 100
vals_ref = (4 ** np.where(vals < 9, vals / 100, vals)) / 3e6
vals_sim = (
(1 + 0.1 * np.random.random_sample((t.size,)))
* (4 ** np.where(vals < 9.5, vals / 100, vals))
/ 3e6
)
pr_ref = xr.DataArray(
vals_ref, coords={"time": t}, dims=("time",), attrs={"units": "mm/day"}
)
pr_ref = pr_ref.sel(time=slice("2000", "2015"))
pr_sim = xr.DataArray(
vals_sim, coords={"time": t}, dims=("time",), attrs={"units": "mm/day"}
)
pr_hist = pr_sim.sel(time=slice("2000", "2015"))
pr_ref.plot(alpha=0.9, label="Reference")
pr_sim.plot(alpha=0.7, label="Model")
plt.legend()
# 1st try without adapt_freq
QM = sdba.EmpiricalQuantileMapping.train(
pr_ref, pr_hist, nquantiles=15, kind="*", group="time"
)
scen = QM.adjust(pr_sim)
pr_ref.sel(time="2010").plot(alpha=0.9, label="Reference")
pr_hist.sel(time="2010").plot(alpha=0.7, label="Model - biased")
scen.sel(time="2010").plot(alpha=0.6, label="Model - adjusted")
plt.legend()
###Output
_____no_output_____
###Markdown
In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more "dry days" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10.1007/s10584-011-0224-4).
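Before fixing it, we can quantify the mismatch with a quick sketch: the fraction of days below a small threshold (the same value used with `adapt_freq` below) is larger in the biased model than in the reference.
###Code
# Fraction of "dry" days (below the threshold) in the reference and in the biased model
dry_thresh = 0.05  # mm/day, same value as the threshold passed to adapt_freq below
print("ref :", float((pr_ref < dry_thresh).mean()))
print("hist:", float((pr_hist < dry_thresh).mean()))
###Output
_____no_output_____
###Markdown
The next cell adds the frequency-adaptation step before training the quantile mapping.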
###Code
# 2nd try with adapt_freq
sim_ad, pth, dP0 = sdba.processing.adapt_freq(
pr_ref, pr_sim, thresh="0.05 mm d-1", group="time"
)
QM_ad = sdba.EmpiricalQuantileMapping.train(
pr_ref, sim_ad, nquantiles=15, kind="*", group="time"
)
scen_ad = QM_ad.adjust(pr_sim)
pr_ref.sel(time="2010").plot(alpha=0.9, label="Reference")
pr_sim.sel(time="2010").plot(alpha=0.7, label="Model - biased")
scen_ad.sel(time="2010").plot(alpha=0.6, label="Model - adjusted")
plt.legend()
###Output
_____no_output_____
###Markdown
Second example: tas and detrendingThe next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. When `sim` (or `sim_scl`) is detrended, its values are now anomalies, so we need to normalize `ref` and `hist` so we can compare similar values.This process is detailed here to show how the sdba module should be used in custom adjustment processes, but this specific method also exists as `sdba.DetrendedQuantileMapping` and is based on [Cannon et al. 2015](https://doi.org/10.1175/JCLI-D-14-00754.1). However, `DetrendedQuantileMapping` normalizes over a `time.dayofyear` group, regardless of what is passed in the `group` argument. As done here, it is anyway recommended to use `dayofyear` groups when normalizing, especially for variables with strong seasonal variations.
###Code
doy_win31 = sdba.Grouper("time.dayofyear", window=15)
Sca = sdba.Scaling.train(ref, hist, group=doy_win31, kind="+")
sim_scl = Sca.adjust(sim)
detrender = sdba.detrending.PolyDetrend(degree=1, group="time.dayofyear", kind="+")
sim_fit = detrender.fit(sim_scl)
sim_detrended = sim_fit.detrend(sim_scl)
ref_n = sdba.processing.normalize(ref, group=doy_win31, kind="+")
hist_n = sdba.processing.normalize(hist, group=doy_win31, kind="+")
QM = sdba.EmpiricalQuantileMapping.train(
ref_n, hist_n, nquantiles=15, group="time.month", kind="+"
)
scen_detrended = QM.adjust(sim_detrended, extrapolation="constant", interp="nearest")
scen = sim_fit.retrend(scen_detrended)
ref.groupby("time.dayofyear").mean().plot(label="Reference")
sim.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
Third example : Multi-method protocol - Hnilica et al. 2017In [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple : use principal components to define coordinates on the reference and on the simulation and then transform the simulation data from the latter to the former. Spatial correlation can thus be conserved by taking different points as the dimensions of the transform space. The method was demonstrated in the article by bias-adjusting precipitation over different drainage basins.The same method could be used for multivariate adjustment. The principle would be the same, concatenating the different variables into a single dataset along a new dimension.Here we show how the modularity of `xclim.sdba` can be used to construct a quite complex adjustment protocol involving two adjustment methods : quantile mapping and principal components. Evidently, as this example uses only 2 years of data, it is not complete. It is meant to show how the adjustment functions and how the API can be used.
###Code
# We are using xarray's "air_temperature" dataset
ds = xr.tutorial.open_dataset("air_temperature")
# To get an exaggerated example we select different points
# here "lon" will be our dimension of two "spatially correlated" points
reft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars(["lon", "lat"])
simt = ds.air.isel(lat=18, lon=[17, 35]).drop_vars(["lon", "lat"])
# Principal Components Adj, no grouping and use "lon" as the space dimensions
PCA = sdba.PrincipalComponents.train(reft, simt, group="time", crd_dims=["lon"])
scen1 = PCA.adjust(simt)
# QM, no grouping, 20 quantiles and additive adjustment
EQM = sdba.EmpiricalQuantileMapping.train(
reft, scen1, group="time", nquantiles=50, kind="+"
)
scen2 = EQM.adjust(scen1)
# some Analysis figures
fig = plt.figure(figsize=(12, 16))
gs = plt.matplotlib.gridspec.GridSpec(3, 2, fig)
axPCA = plt.subplot(gs[0, :])
axPCA.scatter(reft.isel(lon=0), reft.isel(lon=1), s=20, label="Reference")
axPCA.scatter(simt.isel(lon=0), simt.isel(lon=1), s=10, label="Simulation")
axPCA.scatter(scen2.isel(lon=0), scen2.isel(lon=1), s=3, label="Adjusted - PCA+EQM")
axPCA.set_xlabel("Point 1")
axPCA.set_ylabel("Point 2")
axPCA.set_title("PC-space")
axPCA.legend()
refQ = reft.quantile(EQM.ds.quantiles, dim="time")
simQ = simt.quantile(EQM.ds.quantiles, dim="time")
scen1Q = scen1.quantile(EQM.ds.quantiles, dim="time")
scen2Q = scen2.quantile(EQM.ds.quantiles, dim="time")
for i in range(2):
if i == 0:
axQM = plt.subplot(gs[1, 0])
else:
axQM = plt.subplot(gs[1, 1], sharey=axQM)
axQM.plot(refQ.isel(lon=i), simQ.isel(lon=i), label="No adj")
axQM.plot(refQ.isel(lon=i), scen1Q.isel(lon=i), label="PCA")
axQM.plot(refQ.isel(lon=i), scen2Q.isel(lon=i), label="PCA+EQM")
axQM.plot(
refQ.isel(lon=i), refQ.isel(lon=i), color="k", linestyle=":", label="Ideal"
)
axQM.set_title(f"QQ plot - Point {i + 1}")
axQM.set_xlabel("Reference")
axQM.set_ylabel("Model")
axQM.legend()
axT = plt.subplot(gs[2, :])
reft.isel(lon=0).plot(ax=axT, label="Reference")
simt.isel(lon=0).plot(ax=axT, label="Unadjusted sim")
# scen1.isel(lon=0).plot(ax=axT, label='PCA only')
scen2.isel(lon=0).plot(ax=axT, label="PCA+EQM")
axT.legend()
axT.set_title("Timeseries - Point 1")
###Output
_____no_output_____
###Markdown
Fourth example : Multivariate bias-adjustment with multiple steps - Cannon 2018This section replicates the "MBCn" algorithm described by [Cannon (2018)](https://doi.org/10.1007/s00382-017-3580-6). The method relies on some univariate algorithm, an adaptation of the N-pdf transform of [Pitié et al. (2005)](https://ieeexplore.ieee.org/document/1544887/) and a final reordering step.In the following, we use the AHCCD and CanESM2 data as reference and simulation and we correct both `pr` and `tasmax` together.
###Code
from xclim.testing import open_dataset
from xclim.core.units import convert_units_to
dref = open_dataset(
"sdba/ahccd_1950-2013.nc", chunks={"location": 1}, drop_variables=["lat", "lon"]
).sel(time=slice("1981", "2010"))
dref = dref.assign(
tasmax=convert_units_to(dref.tasmax, "K"),
pr=convert_units_to(dref.pr, "kg m-2 s-1"),
)
dsim = open_dataset(
"sdba/CanESM2_1950-2100.nc", chunks={"location": 1}, drop_variables=["lat", "lon"]
)
dhist = dsim.sel(time=slice("1981", "2010"))
dsim = dsim.sel(time=slice("2041", "2070"))
dref
###Output
_____no_output_____
###Markdown
Perform an initial univariate adjustment.
###Code
# additive for tasmax
QDMtx = sdba.QuantileDeltaMapping.train(
dref.tasmax, dhist.tasmax, nquantiles=20, kind="+", group="time"
)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_tx = QDMtx.adjust(dhist.tasmax)
scens_tx = QDMtx.adjust(dsim.tasmax)
# remove == 0 values in pr:
dref["pr"] = sdba.processing.jitter_under_thresh(dref.pr, "0.01 mm d-1")
dhist["pr"] = sdba.processing.jitter_under_thresh(dhist.pr, "0.01 mm d-1")
dsim["pr"] = sdba.processing.jitter_under_thresh(dsim.pr, "0.01 mm d-1")
# multiplicative for pr
QDMpr = sdba.QuantileDeltaMapping.train(
dref.pr, dhist.pr, nquantiles=20, kind="*", group="time"
)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_pr = QDMpr.adjust(dhist.pr)
scens_pr = QDMpr.adjust(dsim.pr)
scenh = xr.Dataset(dict(tasmax=scenh_tx, pr=scenh_pr))
scens = xr.Dataset(dict(tasmax=scens_tx, pr=scens_pr))
###Output
_____no_output_____
###Markdown
Stack the variables to multivariate arrays and standardize themThe standardization process ensures the mean and standard deviation of each column (variable) are 0 and 1, respectively.`hist` and `sim` are standardized together so the two series are coherent. We keep the mean and standard deviation to be reused when we build the result.
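As a small hedged sketch of why the mean and standard deviation are kept: `standardize` returns them along with the standardized data and `unstandardize` puts them back, so the pair is a round trip (the variable names below are only illustrative, applied to the toy temperature series).
###Code
# Round-trip sketch: standardize the toy reference series, then restore the original values
demo_std, demo_mean, demo_stddev = sdba.processing.standardize(ref)
demo_back = sdba.processing.unstandardize(demo_std, demo_mean, demo_stddev)
###Output
_____no_output_____
###Markdown
Here, only the mean and standard deviation of the simulated series (`savg` and `sstd`) need to be kept; they are reused after the N-pdf transform.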
###Code
# Stack the variables (tasmax and pr)
ref = sdba.processing.stack_variables(dref)
scenh = sdba.processing.stack_variables(scenh)
scens = sdba.processing.stack_variables(scens)
# Standardize
ref, _, _ = sdba.processing.standardize(ref)
allsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), "time"))
hist = allsim.sel(time=scenh.time)
sim = allsim.sel(time=scens.time)
###Output
_____no_output_____
###Markdown
Perform the N-dimensional probability density function transformThe NpdfTransform will iteratively randomly rotate our arrays in the "variables" space and apply the univariate adjustment before rotating it back. In Cannon (2018) and Pitié et al. (2005), it can be seen that the source array's joint distribution converges toward the target's joint distribution when a large number of iterations is done.
###Code
from xclim import set_options
# See the advanced notebook for details on how this option works
with set_options(sdba_extra_output=True):
out = sdba.adjustment.NpdfTransform.adjust(
ref,
hist,
sim,
base=sdba.QuantileDeltaMapping, # Use QDM as the univariate adjustment.
base_kws={"nquantiles": 20, "group": "time"},
n_iter=20, # perform 20 iterations
n_escore=1000, # only send 1000 points to the escore metric (it is really slow)
)
scenh = out.scenh.rename(time_hist="time") # Bias-adjusted historical period
scens = out.scen # Bias-adjusted future period
extra = out.drop_vars(["scenh", "scen"])
# Un-standardize (add the mean and the std back)
scenh = sdba.processing.unstandardize(scenh, savg, sstd)
scens = sdba.processing.unstandardize(scens, savg, sstd)
###Output
_____no_output_____
###Markdown
Restoring the trendThe NpdfT has given us new "hist" and "sim" arrays with a correct rank structure. However, the trend is lost in this process. We reorder the result of the initial adjustment according to the rank structure of the NpdfT outputs to get our final bias-adjusted series.In `sdba.processing.reordering`, the first argument ('ref') provides the rank order and the second ('sim') is the data to reorder.
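A tiny hedged sketch of what the call does (toy values, assuming the usual rank-shuffle behaviour): the output keeps the values of the second argument but arranges them following the rank structure of the first.
###Code
# Toy illustration of reordering: ranks come from the first argument, values from the second
toy_order = xr.DataArray([3.0, 1.0, 2.0], dims="time", coords={"time": t[:3]})
toy_values = xr.DataArray([10.0, 20.0, 30.0], dims="time", coords={"time": t[:3]})
# Expected result: [30., 10., 20.] (the largest value goes where toy_order is largest, and so on)
sdba.processing.reordering(toy_order, toy_values, group="time")
###Output
_____no_output_____
###Markdown
Applied to our case, the QDM outputs are reordered following the rank structure of the N-pdf transform outputs, then the variables are unstacked: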
###Code
scenh = sdba.processing.reordering(hist, scenh, group="time")
scens = sdba.processing.reordering(sim, scens, group="time")
scenh = sdba.processing.unstack_variables(scenh, "variables")
scens = sdba.processing.unstack_variables(scens, "variables")
###Output
_____no_output_____
###Markdown
There we are!Let's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call.
###Code
from dask import compute
from dask.diagnostics import ProgressBar
tasks = [
scenh.isel(location=2).to_netcdf("mbcn_scen_hist_loc2.nc", compute=False),
scens.isel(location=2).to_netcdf("mbcn_scen_sim_loc2.nc", compute=False),
extra.escores.isel(location=2)
.to_dataset()
.to_netcdf("mbcn_escores_loc2.nc", compute=False),
]
with ProgressBar():
compute(tasks)
###Output
_____no_output_____
###Markdown
Let's compare the series and look at the distance scores to see how well the Npdf transform has converged.
###Code
scenh = xr.open_dataset("mbcn_scen_hist_loc2.nc")
fig, ax = plt.subplots()
dref.isel(location=2).tasmax.plot(ax=ax, label="Reference")
scenh.tasmax.plot(ax=ax, label="Adjusted", alpha=0.65)
dhist.isel(location=2).tasmax.plot(ax=ax, label="Simulated")
ax.legend()
escores = xr.open_dataarray("mbcn_escores_loc2.nc")
diff_escore = escores.differentiate("iterations")
diff_escore.plot()
plt.title("Difference of the subsequent e-scores.")
plt.ylabel("E-scores difference")
diff_escore
###Output
_____no_output_____
###Markdown
Statistical Downscaling and Bias-Adjustment`xclim` provides tools and utilities to ease the bias-adjustment process through its `xclim.sdba` module. Adjustment algorithms all conform to the `train` - `adjust` scheme, formalized within `Adjustment` classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method would be applied by first estimating the adjustment factors between the historical simulation and the observation series, and then applying these factors to `sim`, which could be a future simulation.A very simple "Quantile Mapping" approach is available through the "Empirical Quantile Mapping" object.
###Code
import netCDF4 # Needed for scipy.io.netcdf
import numpy as np
import xarray as xr
import cftime
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = (11, 5)
# Create toy data to explore bias adjustment, here fake temperature timeseries
t = xr.cftime_range('2000-01-01', '2030-12-31', freq='D', calendar='noleap')
ref = xr.DataArray((-20 * np.cos(2 * np.pi * t.dayofyear / 365) + 2 * np.random.random_sample((t.size,)) + 273.15
+ 0.1 * (t - t[0]).days / 365), # "warming" of 1K per decade,
dims=('time',), coords={'time': t}, attrs={'units': 'K'})
sim = xr.DataArray((-18 * np.cos(2 * np.pi * t.dayofyear / 365) + 2 * np.random.random_sample((t.size,)) + 273.15
+ 0.11 * (t - t[0]).days / 365), # "warming" of 1.1K per decade
dims=('time',), coords={'time': t}, attrs={'units': 'K'})
ref = ref.sel(time=slice(None, '2015-01-01'))
hist = sim.sel(time=slice(None, '2015-01-01'))
ref.plot(label='Reference')
sim.plot(label='Model')
plt.legend()
from xclim import sdba
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time', kind='+')
QM.train(ref, hist)
scen = QM.adjust(sim, extrapolation='constant', interp='nearest')
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved so this is not surprising. A more complex example could have a bias distribution varying strongly across months. To perform the adjustment with different factors for each month, one can pass `group='time.month'`. Moreover, to reduce the risk of sharp changes in the adjustment at the interface of the months, `interp='linear'` can be passed to `adjust` and the adjustment factors will be interpolated linearly. Ex: the factors for the 1st of May will be the average of those for April and those for May. This option is currently only implemented for monthly grouping.
###Code
QM_mo = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time.month', kind='+')
QM_mo.train(ref, hist)
scen = QM_mo.adjust(sim, extrapolation='constant', interp='linear')
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object.
###Code
QM_mo.ds
QM_mo.ds.af.plot()
###Output
_____no_output_____
###Markdown
GroupingFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. Units of `window` are the sampling frequency of the main grouping dimension (usually `time`). For more complex grouping, one can pass an `xclim.sdba.base.Grouper` directly.Example here with another, simpler, adjustment method. Here we want `sim` to be scaled so that its mean fits the one of `ref`. Scaling factors are to be computed separately for each day of the year, but including 15 days on either side of the day. This means that the factor for the 1st of May is computed including all values from the 16th of April to the 15th of May (of all years).
###Code
group = sdba.Grouper('time.dayofyear', window=31)
QM_doy = sdba.Scaling(group=group, kind='+')
QM_doy.train(ref, hist)
scen = QM_doy.adjust(sim)
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
QM_doy.ds.af.plot()
###Output
_____no_output_____
###Markdown
Modular approachThe `sdba` module adopts a modular approach instead of implementing published and named methods directly.A generic bias adjustment process is laid out as follows:- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)- creating the adjustment object `Adj = Adjustment(**kwargs)` (from `xclim.sdba.adjustment`)- training `Adj.train(obs, sim)`- adjustment `scen = Adj.adjust(sim, **kwargs)`- post-processing on `scen` (for example: re-trending)The train-adjust approach allows one to inspect the trained adjustment object. The training information is stored in the underlying `Adj.ds` dataset and always has an `af` variable with the adjustment factors. Its layout and the other available variables vary between the different algorithms; refer to their part of the API docs.Parameters needed by the training and the adjustment are saved to the `Adj.ds` dataset as an `adj_params` attribute. Other parameters, those only needed by the adjustment, are passed in the `adjust` call and written to the history attribute in the output scenario dataarray. First example : pr and frequency adaptationThe next example generates fake precipitation data and adjusts the `sim` timeseries but also adds a step where the dry-day frequency of `hist` is adapted so that it fits the one of `ref`. This ensures well-behaved adjustment factors for the smaller quantiles. Note also that we are passing `kind='*'` to use the multiplicative mode. Adjustment factors will be multiplied/divided instead of being added/subtracted.
###Code
vals = np.random.randint(0, 1000, size=(t.size,)) / 100
vals_ref = (4 ** np.where(vals < 9, vals/ 100, vals)) / 3e6
vals_sim = (1 + 0.1 * np.random.random_sample((t.size,))) * (4 ** np.where(vals < 9.5, vals/ 100, vals)) / 3e6
pr_ref = xr.DataArray(vals_ref, coords={"time": t}, dims=("time",), attrs={'units': 'mm/day'})
pr_ref = pr_ref.sel(time=slice('2000', '2015'))
pr_sim = xr.DataArray(vals_sim, coords={"time": t}, dims=("time",), attrs={'units': 'mm/day'})
pr_hist = pr_sim.sel(time=slice('2000', '2015'))
pr_ref.plot(alpha=0.9, label='Reference')
pr_sim.plot(alpha=0.7, label='Model')
plt.legend()
# 1st try without adapt_freq
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, kind='*', group='time')
QM.train(pr_ref, pr_hist)
scen = QM.adjust(pr_sim)
pr_ref.sel(time='2010').plot(alpha=0.9, label='Reference')
pr_hist.sel(time='2010').plot(alpha=0.7, label='Model - biased')
scen.sel(time='2010').plot(alpha=0.6, label='Model - adjusted')
plt.legend()
###Output
_____no_output_____
###Markdown
In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more "dry days" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10.1007/s10584-011-0224-4).
###Code
# 2nd try with adapt_freq
ds_ad = sdba.processing.adapt_freq(sim=pr_hist, ref=pr_ref, thresh=0.05)
QM_ad = sdba.EmpiricalQuantileMapping(nquantiles=15, kind='*', group='time')
QM_ad.train(pr_ref, ds_ad.sim_ad)
scen_ad = QM_ad.adjust(pr_sim)
pr_ref.sel(time='2010').plot(alpha=0.9, label='Reference')
pr_sim.sel(time='2010').plot(alpha=0.7, label='Model - biased')
scen_ad.sel(time='2010').plot(alpha=0.6, label='Model - adjusted')
plt.legend()
###Output
_____no_output_____
###Markdown
Second example: tas and detrendingThe next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. When `sim` (or `sim_scl`) is detrended, its values are now anomalies, so we need to normalize `ref` and `hist` so we can compare similar values.This process is detailed here to show how the sdba module should be used in custom adjustment processes, but this specific method also exists as `sdba.DetrendedQuantileMapping` and is based on [Cannon et al. 2015](https://doi.org/10.1175/JCLI-D-14-00754.1). However, `DetrendedQuantileMapping` normalizes over a `time.dayofyear` group, regardless of what is passed in the `group` argument. As done here, it is anyway recommended to use `dayofyear` groups when normalizing, especially for variables with strong seasonal variations.
###Code
doy_win31 = sdba.Grouper('time.dayofyear', window=15)
Sca = sdba.Scaling(group=doy_win31, kind='+')
Sca.train(ref, hist)
sim_scl = Sca.adjust(sim)
detrender = sdba.detrending.PolyDetrend(degree=1, group='time.dayofyear', kind='+')
sim_fit = detrender.fit(sim_scl)
sim_detrended = sim_fit.detrend(sim_scl)
ref_n = sdba.processing.normalize(ref, group=doy_win31, kind='+')
hist_n = sdba.processing.normalize(hist, group=doy_win31, kind='+')
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time.month', kind='+')
QM.train(ref_n, hist_n)
scen_detrended = QM.adjust(sim_detrended, extrapolation='constant', interp='nearest')
scen = sim_fit.retrend(scen_detrended)
ref.groupby('time.dayofyear').mean().plot(label='Reference')
sim.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
Third example : Multi-method protocol - Hnilica et al. 2017In [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple : use principal components to define coordinates on the reference and on the simulation and then transform the simulation data from the latter to the former. Spatial correlation can thus be conserved by taking different points as the dimensions of the transform space. The method was demonstrated in the article by bias-adjusting precipitation over different drainage basins.Here we show how the modularity of `xclim.sdba` can be used to construct a quite complex adjustment protocol involving two adjustment methods : quantile mapping and principal components. Evidently, as this example uses only 2 years of data, it is not complete. It is meant to show how the adjustment functions and how the API can be used.
###Code
# We are using xarray's "air_temperature" dataset
ds = xr.tutorial.open_dataset("air_temperature")
# To get an exaggerated example we select different points
# here "lon" will be our dimension of two "spatially correlated" points
reft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars(["lon", "lat"])
simt = ds.air.isel(lat=18, lon=[17, 35]).drop_vars(["lon", "lat"])
# Principal Components Adj, no grouping and use "lon" as the space dimensions
PCA = sdba.PrincipalComponents(group="time", crd_dims=['lon'])
PCA.train(reft, simt)
scen1 = PCA.adjust(simt)
# QM, no grouping, 20 quantiles and additive adjustment
EQM = sdba.EmpiricalQuantileMapping(group='time', nquantiles=50, kind='+')
EQM.train(reft, scen1)
scen2 = EQM.adjust(scen1)
# some Analysis figures
fig = plt.figure(figsize=(12, 16))
gs = plt.matplotlib.gridspec.GridSpec(3, 2, fig)
axPCA = plt.subplot(gs[0, :])
axPCA.scatter(reft.isel(lon=0), reft.isel(lon=1), s=20, label='Reference')
axPCA.scatter(simt.isel(lon=0), simt.isel(lon=1), s=10, label='Simulation')
axPCA.scatter(scen2.isel(lon=0), scen2.isel(lon=1), s=3, label='Adjusted - PCA+EQM')
axPCA.set_xlabel('Point 1')
axPCA.set_ylabel('Point 2')
axPCA.set_title('PC-space')
axPCA.legend()
refQ = reft.quantile(EQM.ds.quantiles, dim='time')
simQ = simt.quantile(EQM.ds.quantiles, dim='time')
scen1Q = scen1.quantile(EQM.ds.quantiles, dim='time')
scen2Q = scen2.quantile(EQM.ds.quantiles, dim='time')
for i in range(2):
if i == 0:
axQM = plt.subplot(gs[1, 0])
else:
axQM = plt.subplot(gs[1, 1], sharey=axQM)
axQM.plot(refQ.isel(lon=i), simQ.isel(lon=i), label='No adj')
axQM.plot(refQ.isel(lon=i), scen1Q.isel(lon=i), label='PCA')
axQM.plot(refQ.isel(lon=i), scen2Q.isel(lon=i), label='PCA+EQM')
axQM.plot(refQ.isel(lon=i), refQ.isel(lon=i), color='k', linestyle=':', label='Ideal')
axQM.set_title(f'QQ plot - Point {i + 1}')
axQM.set_xlabel('Reference')
axQM.set_ylabel('Model')
axQM.legend()
axT = plt.subplot(gs[2, :])
reft.isel(lon=0).plot(ax=axT, label='Reference')
simt.isel(lon=0).plot(ax=axT, label='Unadjusted sim')
#scen1.isel(lon=0).plot(ax=axT, label='PCA only')
scen2.isel(lon=0).plot(ax=axT, label='PCA+EQM')
axT.legend()
axT.set_title('Timeseries - Point 1')
###Output
_____no_output_____
###Markdown
Optimization with daskAdjustment processes can be very heavy as they are made of large amounts of small operations and often need to be computed over large regions. Using small groupings (like `time.dayofyear`) adds precision and robustness, but also multiplies the load and computing complexity. A good first read on this is xarray's [Optimization tips](http://xarray.pydata.org/en/stable/dask.html#optimization-tips).Some additional tips:* When saving a file with `to_netcdf`, setting option `unlimited_dims` with a list of dimension names can force xarray to write the data using chunks on disk, instead of one contiguous array. The command-line tool `ncdump -sh` can give information on how those chunks are oriented on disk and the optimal `chunks={...}` choice can be inferred.* Most adjustment methods will need to perform operations on the whole `time` coordinate, so it is best to optimize chunking along the other dimensions.* One of the main bottlenecks for adjustments with small groups is that dask needs to build and optimize an enormous task graph. In order to ease that process and reduce the number of recalculations, given that the training dataset fits in memory, one could call `Adjustment.ds.load()` to trigger the computation and store the result as a `np.array`. For very large tasks, one could write the training dataset to disk and then reload it into the `Adjustment` object.* Consider using `engine="h5netcdf"` in `open_[mf]dataset` when possible. Compatibility of a file can be checked with the `is_hdf5()` function of the `h5py` module. ExampleThe following script is an example of some methods useful to improve the performance of a simple detrended quantile mapping adjustment. As we are using the sample data of the previous example, dask isn't even needed here, and the performance of this cell cannot be compared with what would happen on very large datasets.
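Before the full script, here is a small hedged sketch of the `engine="h5netcdf"` tip above (the file name is hypothetical and the snippet only does something if the file already exists):
###Code
import os
import h5py
# netCDF4-based files are HDF5 files under the hood and can be opened with the h5netcdf engine
fn = "reference_data.nc"  # hypothetical path; a file with this name is written further below
if os.path.exists(fn) and h5py.is_hdf5(fn):
    with xr.open_dataset(fn, engine="h5netcdf", chunks={"lon": 2}) as ds_check:
        print(ds_check.chunks)
###Output
_____no_output_____
###Markdown
Now for the full example script.
###Code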
###Code
ref.name = 'tas'
ref = ref.expand_dims(lon=[50, 60, 70, 80, 90, 100])
ref.to_dataset().to_netcdf('reference_data.nc', unlimited_dims=['lon'])
sim.name = 'tas'
sim = sim.expand_dims(lon=[50, 60, 70, 80, 90, 100])
sim.to_dataset().to_netcdf('simulation_data.nc', unlimited_dims=['lon'])
var = "tas"
kind = '+'
ref_period = slice('2000', '2015')
sim_period = slice('2000', '2030')
file_ref = 'reference_data.nc'
file_sim = 'simulation_data.nc'
# We want the robustness of a dayofyear adjustment, but the speed of the monthly computation.
# The compromise is to split the normalization process from the quantile mapping. Smaller groups in normalization
# reduce the boundary effects between months but the monthly quantile mapping is almost as precise in that case.
# This is a scientific decision that should only be taken after careful analysis of the data; it is shown here as
# an example of a compromise aimed at accelerating the computation.
g_norm = sdba.Grouper(group='time.dayofyear', window=31)
g_qm = sdba.Grouper(group='time.month')
# Step 1 - Normalize hist and ref
maxlon = 6
nlon = 2
# Normalize (as resample() or groupby()) generates so many small operations that dask struggles
# to even start the computation. Here, with large data, it is more efficient to loop over the
# chunks and to **not** use dask. In the following two loops, "chunks=" is not assigned, so
# all data is loaded and basic numpy operations are used.
for ilon in range(0, maxlon, nlon):
# Using a `with` statement when opening a file automatically closes it when we exit the context.
# As multiple open files can be sources of bugs, this helps the coding process
with xr.open_dataset(file_ref)[var].isel(lon=slice(ilon, ilon + nlon)) as ref:
da = sdba.processing.normalize(ref, group=g_norm, kind=kind)
da.name = var
da.to_netcdf(f"mydqm_{ilon:03d}_refn.nc", unlimited_dims=['lon'])
# Hist
for ilon in range(0, maxlon, nlon):
with xr.open_dataset(file_sim)[var].isel(lon=slice(ilon, ilon + nlon)) as sim:
da = sdba.processing.normalize(sim.sel(time=ref_period), group=g_norm, kind=kind)
da.name = var
da.to_netcdf(f"mydqm_{ilon:03d}_histn.nc", unlimited_dims=['lon'])
# reopen the files
# Here we specify the same chunking for all files, but different values could be used
# depending on the operations or the data.
ref = xr.open_dataset(file_ref, chunks={'lon': 2})[var]
hist = xr.open_dataset(file_sim, chunks={'lon': 2})[var].sel(time=ref_period)
refn = xr.open_mfdataset("mydqm_*_refn.nc", combine='by_coords')[var]
histn = xr.open_mfdataset("mydqm_*_histn.nc", combine='by_coords')[var]
# Step 2 - Empirical Quantile Mapping using the normalized data
EQM = sdba.EmpiricalQuantileMapping(nquantiles=50, kind=kind, group=g_qm)
EQM.train(refn, histn)
mu_ref = g_qm.apply("mean", ref)
mu_hist = g_qm.apply("mean", hist)
# EQM.ds is simply a dataset, it can be edited in place.
EQM.ds["scaling"] = sdba.utils.get_correction(mu_hist, mu_ref, '+')
EQM.ds.scaling.attrs.update(
standard_name="Scaling factor",
description="Scaling factor making the mean of hist match the one of ref.",
)
# We trigger the training dataset computations, it divides the workload.
EQM.ds.load()
# Step 3 - Normalize and scale sim
with xr.open_dataset(file_sim, chunks={'lon': 2})[var] as sim:
sim_scl = sdba.utils.apply_correction(
sim,
sdba.utils.broadcast(EQM.ds.scaling, sim, group=g_qm, interp='linear'),
kind=kind
)
# For faster computation and as it makes it identical to hist, we normalize sim only with the reference period.
sim_norm = g_norm.apply("mean", sim_scl.sel(time=ref_period))
sim_anom = sdba.utils.apply_correction(
sim_scl,
sdba.utils.broadcast(sdba.utils.invert(sim_norm, kind=kind), sim_scl, group=g_norm, interp='nearest'),
kind=kind
)
xr.Dataset(data_vars={'norm': sim_norm, 'sim': sim_anom}).to_netcdf("dqm_simn.nc", unlimited_dims=['lon'])
# Step 4 - Detrending
# Detrending is one of the heaviest operations so the trend is saved in its own step.
ds_simn = xr.open_dataset("dqm_simn.nc", chunks={'lon': 2})
polyfit = sdba.detrending.PolyDetrend(group=g_qm, degree=2)
sim_fit = polyfit.fit(ds_simn.sim)
trend = sim_fit.get_trend(ds_simn.sim)
trend.name = 'trend'
trend.to_dataset().to_netcdf("dqm_sim_trend.nc", unlimited_dims=['lon'])
# Step 5 - Adjustment
with xr.open_dataset("dqm_sim_trend.nc", chunks={'lon': 2}) as ds_trend:
# With the trend already computed, the "private" versions of detrend and retrend
# can be used to skip the trend computation.
sim_det = sim_fit._detrend(ds_simn.sim, ds_trend.trend)
sim_det.name = 'sim'
scen_det_anom = EQM.adjust(sim_det, interp='linear', extrapolation='constant')
scen_anom = sim_fit._retrend(scen_det_anom, ds_trend.trend)
scen = sdba.utils.apply_correction(
scen_anom,
sdba.utils.broadcast(ds_simn.norm, scen_anom, group=g_norm, interp='nearest'),
kind=kind
)
scen.name = var
scen.to_netcdf("dqm_scen.nc", unlimited_dims=['lon'])
# Cleanup
ref.close()
hist.close()
refn.close()
histn.close()
ds_simn.close()
###Output
_____no_output_____
###Markdown
Statistical Downscaling and Bias-Adjustment`xclim` provides tools and utilities to ease the bias-adjustment process through its `xclim.sdba` module. Adjustment algorithms all conform to the `train` - `adjust` scheme, formalized within `Adjustment` classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method would be applied by first estimating the adjustment factors between the historical simulation and the observation series, and then applying these factors to `sim`, which could be a future simulation.A very simple "Quantile Mapping" approach is available through the "Empirical Quantile Mapping" object.
###Code
import numpy as np
import xarray as xr
import cftime
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = (11, 5)
# Create toy data to explore bias adjustment, here fake temperature timeseries
t = xr.cftime_range('2000-01-01', '2030-12-31', freq='D', calendar='noleap')
ref = xr.DataArray((-20 * np.cos(2 * np.pi * t.dayofyear / 365) + 2 * np.random.random_sample((t.size,)) + 273.15
+ 0.1 * (t - t[0]).days / 365), # "warming" of 1K per decade,
dims=('time',), coords={'time': t}, attrs={'units': 'K'})
sim = xr.DataArray((-18 * np.cos(2 * np.pi * t.dayofyear / 365) + 2 * np.random.random_sample((t.size,)) + 273.15
+ 0.11 * (t - t[0]).days / 365), # "warming" of 1.1K per decade
dims=('time',), coords={'time': t}, attrs={'units': 'K'})
ref = ref.sel(time=slice(None, '2015-01-01'))
hist = sim.sel(time=slice(None, '2015-01-01'))
ref.plot(label='Reference')
sim.plot(label='Model')
plt.legend()
from xclim import sdba
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time', kind='+')
QM.train(ref, hist)
scen = QM.adjust(sim, extrapolation='constant', interp='nearest')
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved so this is not surprising. A more complex example could have a bias distribution varying strongly across months. To perform the adjustment with different factors for each month, one can pass `group='time.month'`. Moreover, to reduce the risk of sharp changes in the adjustment at the interface of the months, `interp='linear'` can be passed to `adjust` and the adjustment factors will be interpolated linearly. Ex: the factors for the 1st of May will be the average of those for April and those for May.
###Code
QM_mo = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time.month', kind='+')
QM_mo.train(ref, hist)
scen = QM_mo.adjust(sim, extrapolation='constant', interp='linear')
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object.
###Code
QM_mo.ds
QM_mo.ds.af.plot()
###Output
_____no_output_____
###Markdown
GroupingFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. Units of `window` are the sampling frequency of the main grouping dimension (usually `time`). For more complex grouping, or simply for clarity, one can pass an `xclim.sdba.base.Grouper` directly.Example here with another, simpler, adjustment method. Here we want `sim` to be scaled so that its mean fits the one of `ref`. Scaling factors are to be computed separately for each day of the year, but including 15 days on either side of the day. This means that the factor for the 1st of May is computed including all values from the 16th of April to the 15th of May (of all years).
###Code
group = sdba.Grouper('time.dayofyear', window=31)
QM_doy = sdba.Scaling(group=group, kind='+')
QM_doy.train(ref, hist)
scen = QM_doy.adjust(sim)
ref.groupby('time.dayofyear').mean().plot(label='Reference')
hist.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
sim
QM_doy.ds.af.plot()
###Output
_____no_output_____
###Markdown
Modular approachThe `sdba` module adopts a modular approach instead of implementing published and named methods directly.A generic bias adjustment process is laid out as follows:- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)- creating the adjustment object `Adj = Adjustment(**kwargs)` (from `xclim.sdba.adjustment`)- training `Adj.train(obs, sim)`- adjustment `scen = Adj.adjust(sim, **kwargs)`- post-processing on `scen` (for example: re-trending)The train-adjust approach allows one to inspect the trained adjustment object. The training information is stored in the underlying `Adj.ds` dataset and often has an `af` variable with the adjustment factors. Its layout and the other available variables vary between the different algorithms; refer to their part of the API docs.For heavy processing, this separation allows the computation and writing to disk of the training dataset before performing the adjustment(s). See the [advanced notebook](sdba-advanced.ipynb).Parameters needed by the training and the adjustment are saved to the `Adj.ds` dataset as an `adj_params` attribute. Other parameters, those only needed by the adjustment, are passed in the `adjust` call and written to the history attribute in the output scenario dataarray. First example : pr and frequency adaptationThe next example generates fake precipitation data and adjusts the `sim` timeseries but also adds a step where the dry-day frequency of `hist` is adapted so that it fits the one of `ref`. This ensures well-behaved adjustment factors for the smaller quantiles. Note also that we are passing `kind='*'` to use the multiplicative mode. Adjustment factors will be multiplied/divided instead of being added/subtracted.
###Code
vals = np.random.randint(0, 1000, size=(t.size,)) / 100
vals_ref = (4 ** np.where(vals < 9, vals/ 100, vals)) / 3e6
vals_sim = (1 + 0.1 * np.random.random_sample((t.size,))) * (4 ** np.where(vals < 9.5, vals/ 100, vals)) / 3e6
pr_ref = xr.DataArray(vals_ref, coords={"time": t}, dims=("time",), attrs={'units': 'mm/day'})
pr_ref = pr_ref.sel(time=slice('2000', '2015'))
pr_sim = xr.DataArray(vals_sim, coords={"time": t}, dims=("time",), attrs={'units': 'mm/day'})
pr_hist = pr_sim.sel(time=slice('2000', '2015'))
pr_ref.plot(alpha=0.9, label='Reference')
pr_sim.plot(alpha=0.7, label='Model')
plt.legend()
# 1st try without adapt_freq
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, kind='*', group='time')
QM.train(pr_ref, pr_hist)
scen = QM.adjust(pr_sim)
pr_ref.sel(time='2010').plot(alpha=0.9, label='Reference')
pr_hist.sel(time='2010').plot(alpha=0.7, label='Model - biased')
scen.sel(time='2010').plot(alpha=0.6, label='Model - adjusted')
plt.legend()
###Output
_____no_output_____
###Markdown
In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more "dry days" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10.1007/s10584-011-0224-4).Here we have our first encounter with a processing function requiring a _Dataset_ instead of the individual DataArrays that the adjustment methods take. This is due to a powerful but complex optimization within xclim where most functions acting on groups are wrapped with xarray's [`map_blocks`](http://xarray.pydata.org/en/stable/generated/xarray.map_blocks.html#xarray.map_blocks). It is not necessary to understand the way this works to use xclim, but be aware that most functions in `sdba.processing` will require Dataset inputs and specific variable names, which are spelled out in their docstrings. Also, their signatures might look strange; trust the docstring.The adjustment methods use the same optimization, but it is hidden under the hood. More is said about this in the [advanced notebook](sdba-advanced.ipynb).
###Code
# 2nd try with adapt_freq
ds_ad = sdba.processing.adapt_freq(xr.Dataset(dict(sim=pr_hist, ref=pr_ref, thresh=0.05)), group='time')
QM_ad = sdba.EmpiricalQuantileMapping(nquantiles=15, kind='*', group='time')
QM_ad.train(pr_ref, ds_ad.sim_ad)
scen_ad = QM_ad.adjust(pr_sim)
pr_ref.sel(time='2010').plot(alpha=0.9, label='Reference')
pr_sim.sel(time='2010').plot(alpha=0.7, label='Model - biased')
scen_ad.sel(time='2010').plot(alpha=0.6, label='Model - adjusted')
plt.legend()
###Output
_____no_output_____
###Markdown
Second example: tas and detrendingThe next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. Once `sim` (or `sim_scl`) is detrended, its values are anomalies, so we also need to normalize `ref` and `hist` in order to compare like with like.This process is detailed here to show how the sdba module should be used in custom adjustment processes, but this specific method also exists as `sdba.DetrendedQuantileMapping` and is based on [Cannon et al. 2015](https://doi.org/10.1175/JCLI-D-14-00754.1). Note that `DetrendedQuantileMapping` normalizes over a `time.dayofyear` group, regardless of what is passed in the `group` argument. In any case, as done here, it is recommended to use `dayofyear` groups when normalizing, especially for variables with strong seasonal variations.
###Code
doy_win31 = sdba.Grouper('time.dayofyear', window=15)
Sca = sdba.Scaling(group=doy_win31, kind='+')
Sca.train(ref, hist)
sim_scl = Sca.adjust(sim)
detrender = sdba.detrending.PolyDetrend(degree=1, group='time.dayofyear', kind='+')
sim_fit = detrender.fit(sim_scl)
sim_detrended = sim_fit.detrend(sim_scl)
ref_n = sdba.processing.normalize(ref.rename('data').to_dataset(), group=doy_win31, kind='+').data
hist_n = sdba.processing.normalize(hist.rename('data').to_dataset(), group=doy_win31, kind='+').data
QM = sdba.EmpiricalQuantileMapping(nquantiles=15, group='time.month', kind='+')
QM.train(ref_n, hist_n)
scen_detrended = QM.adjust(sim_detrended, extrapolation='constant', interp='nearest')
scen = sim_fit.retrend(scen_detrended)
ref.groupby('time.dayofyear').mean().plot(label='Reference')
sim.groupby('time.dayofyear').mean().plot(label='Model - biased')
scen.sel(time=slice('2000', '2015')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2000-15', linestyle='--')
scen.sel(time=slice('2015', '2030')).groupby('time.dayofyear').mean().plot(label='Model - adjusted - 2015-30', linestyle='--')
plt.legend()
###Output
_____no_output_____
###Markdown
Third example: Multi-method protocol - Hnilica et al. 2017In [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple: use principal components to define coordinates on the reference and on the simulation, and then transform the simulation data from the latter to the former. Spatial correlation can thus be conserved by taking different points as the dimensions of the transform space. The method was demonstrated in the article by bias-adjusting precipitation over different drainage basins.The same method could be used for multivariate adjustment: the principle would be the same, concatenating the different variables into a single dataset along a new dimension.Here we show how the modularity of `xclim.sdba` can be used to construct a rather complex adjustment protocol involving two adjustment methods: quantile mapping and principal components. Since this example uses only two years of data, it is obviously not a complete study; it is only meant to show how the adjustment works and how the API can be used.
###Code
# We are using xarray's "air_temperature" dataset
ds = xr.tutorial.open_dataset("air_temperature")
# To get an exaggerated example we select different points
# here "lon" will be our dimension of two "spatially correlated" points
reft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars(["lon", "lat"])
simt = ds.air.isel(lat=18, lon=[17, 35]).drop_vars(["lon", "lat"])
# Principal Components Adj, no grouping and use "lon" as the space dimensions
PCA = sdba.PrincipalComponents(group="time", crd_dims=['lon'])
PCA.train(reft, simt)
scen1 = PCA.adjust(simt)
# QM, no grouping, 50 quantiles and additive adjustment
EQM = sdba.EmpiricalQuantileMapping(group='time', nquantiles=50, kind='+')
EQM.train(reft, scen1)
scen2 = EQM.adjust(scen1)
# some Analysis figures
fig = plt.figure(figsize=(12, 16))
gs = plt.matplotlib.gridspec.GridSpec(3, 2, fig)
axPCA = plt.subplot(gs[0, :])
axPCA.scatter(reft.isel(lon=0), reft.isel(lon=1), s=20, label='Reference')
axPCA.scatter(simt.isel(lon=0), simt.isel(lon=1), s=10, label='Simulation')
axPCA.scatter(scen2.isel(lon=0), scen2.isel(lon=1), s=3, label='Adjusted - PCA+EQM')
axPCA.set_xlabel('Point 1')
axPCA.set_ylabel('Point 2')
axPCA.set_title('PC-space')
axPCA.legend()
refQ = reft.quantile(EQM.ds.quantiles, dim='time')
simQ = simt.quantile(EQM.ds.quantiles, dim='time')
scen1Q = scen1.quantile(EQM.ds.quantiles, dim='time')
scen2Q = scen2.quantile(EQM.ds.quantiles, dim='time')
for i in range(2):
if i == 0:
axQM = plt.subplot(gs[1, 0])
else:
axQM = plt.subplot(gs[1, 1], sharey=axQM)
axQM.plot(refQ.isel(lon=i), simQ.isel(lon=i), label='No adj')
axQM.plot(refQ.isel(lon=i), scen1Q.isel(lon=i), label='PCA')
axQM.plot(refQ.isel(lon=i), scen2Q.isel(lon=i), label='PCA+EQM')
axQM.plot(refQ.isel(lon=i), refQ.isel(lon=i), color='k', linestyle=':', label='Ideal')
axQM.set_title(f'QQ plot - Point {i + 1}')
axQM.set_xlabel('Reference')
    axQM.set_ylabel('Model')
axQM.legend()
axT = plt.subplot(gs[2, :])
reft.isel(lon=0).plot(ax=axT, label='Reference')
simt.isel(lon=0).plot(ax=axT, label='Unadjusted sim')
#scen1.isel(lon=0).plot(ax=axT, label='PCA only')
scen2.isel(lon=0).plot(ax=axT, label='PCA+EQM')
axT.legend()
axT.set_title('Timeseries - Point 1')
###Output
_____no_output_____
###Markdown
Statistical Downscaling and Bias-Adjustment`xclim` provides tools and utilities to ease the bias-adjustment process through its `xclim.sdba` module. Almost all adjustment algorithms conform to the `train` - `adjust` scheme, formalized within `TrainAdjust` classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method would be applied by first estimating the adjustment factors between the historical simulation and the observation series, and then applying these factors to `sim`, which could be a future simulation.A very simple "Quantile Mapping" approach is available through the "Empirical Quantile Mapping" object. The object is created through the `.train` method of the class, and the simulation is adjusted with `.adjust`.
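Stripped of the data preparation, the whole scheme boils down to two calls; the toy example in the next cell builds `ref`, `hist` and `sim` arrays and then runs exactly this (shown here as a standalone sketch):

    from xclim import sdba

    QM = sdba.EmpiricalQuantileMapping.train(ref, hist, nquantiles=15, group="time", kind="+")
    scen = QM.adjust(sim, extrapolation="constant", interp="nearest")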
###Code
import numpy as np
import xarray as xr
import cftime
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use("seaborn")
plt.rcParams["figure.figsize"] = (11, 5)
# Create toy data to explore bias adjustment, here fake temperature timeseries
t = xr.cftime_range("2000-01-01", "2030-12-31", freq="D", calendar="noleap")
ref = xr.DataArray(
(
-20 * np.cos(2 * np.pi * t.dayofyear / 365)
+ 2 * np.random.random_sample((t.size,))
+ 273.15
+ 0.1 * (t - t[0]).days / 365
), # "warming" of 1K per decade,
dims=("time",),
coords={"time": t},
attrs={"units": "K"},
)
sim = xr.DataArray(
(
-18 * np.cos(2 * np.pi * t.dayofyear / 365)
+ 2 * np.random.random_sample((t.size,))
+ 273.15
+ 0.11 * (t - t[0]).days / 365
), # "warming" of 1.1K per decade
dims=("time",),
coords={"time": t},
attrs={"units": "K"},
)
ref = ref.sel(time=slice(None, "2015-01-01"))
hist = sim.sel(time=slice(None, "2015-01-01"))
ref.plot(label="Reference")
sim.plot(label="Model")
plt.legend()
from xclim import sdba
QM = sdba.EmpiricalQuantileMapping.train(
ref, hist, nquantiles=15, group="time", kind="+"
)
scen = QM.adjust(sim, extrapolation="constant", interp="nearest")
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved, so this is not surprising. A more complex case could have bias distributions varying strongly across months. To perform the adjustment with different factors for each month, one can pass `group='time.month'`. Moreover, to reduce the risk of sharp changes in the adjustment at month boundaries, `interp='linear'` can be passed to `adjust`, and the adjustment factors will be interpolated linearly. For example, the factors for the 1st of May will be the average of those for April and those for May.
###Code
QM_mo = sdba.EmpiricalQuantileMapping.train(
ref, hist, nquantiles=15, group="time.month", kind="+"
)
scen = QM_mo.adjust(sim, extrapolation="constant", interp="linear")
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object.
###Code
QM_mo.ds
QM_mo.ds.af.plot()
###Output
_____no_output_____
###Markdown
GroupingFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. The units of `window` are the sampling frequency of the main grouping dimension (usually `time`). For more complex grouping, or simply for clarity, one can pass an `xclim.sdba.base.Grouper` directly.Here is an example with another, simpler adjustment method: we want `sim` to be scaled so that its mean matches that of `ref`. Scaling factors are computed separately for each day of the year, but including 15 days on either side of that day. This means that the factor for the 1st of May is computed from all values between the 16th of April and the 15th of May (of all years).
###Code
group = sdba.Grouper("time.dayofyear", window=31)
QM_doy = sdba.Scaling.train(ref, hist, group=group, kind="+")
scen = QM_doy.adjust(sim)
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
sim
QM_doy.ds.af.plot()
###Output
_____no_output_____
###Markdown
Modular approachThe `sdba` module adopts a modular approach instead of implementing published and named methods directly.A generic bias adjustment process is laid out as follows:- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)- creating and training the adjustment object `Adj = Adjustment.train(obs, hist, **kwargs)` (from `xclim.sdba.adjustment`)- adjustment `scen = Adj.adjust(sim, **kwargs)`- post-processing on `scen` (for example: re-trending)The train-adjust approach allows one to inspect the trained adjustment object. The training information is stored in the underlying `Adj.ds` dataset and often has an `af` variable with the adjustment factors. Its layout and the other available variables vary between the different algorithms; refer to their part of the API docs.For heavy processing, this separation allows the training dataset to be computed and written to disk before performing the adjustment(s). See the [advanced notebook](sdba-advanced.ipynb).Parameters needed by both the training and the adjustment are saved to the `Adj.ds` dataset as an `adj_params` attribute. Parameters needed only by the adjustment are passed in the `adjust` call and written to the history attribute of the output scenario DataArray. First example: pr and frequency adaptationThe next example generates fake precipitation data and adjusts the `sim` timeseries, but also adds a step where the dry-day frequency of `hist` is adapted so that it fits that of `ref`. This ensures well-behaved adjustment factors for the smaller quantiles. Note also that we are passing `kind='*'` to use the multiplicative mode: adjustment factors will be multiplied/divided instead of being added/subtracted.
###Code
vals = np.random.randint(0, 1000, size=(t.size,)) / 100
vals_ref = (4 ** np.where(vals < 9, vals / 100, vals)) / 3e6
vals_sim = (
(1 + 0.1 * np.random.random_sample((t.size,)))
* (4 ** np.where(vals < 9.5, vals / 100, vals))
/ 3e6
)
pr_ref = xr.DataArray(
vals_ref, coords={"time": t}, dims=("time",), attrs={"units": "mm/day"}
)
pr_ref = pr_ref.sel(time=slice("2000", "2015"))
pr_sim = xr.DataArray(
vals_sim, coords={"time": t}, dims=("time",), attrs={"units": "mm/day"}
)
pr_hist = pr_sim.sel(time=slice("2000", "2015"))
pr_ref.plot(alpha=0.9, label="Reference")
pr_sim.plot(alpha=0.7, label="Model")
plt.legend()
# 1st try without adapt_freq
QM = sdba.EmpiricalQuantileMapping.train(
pr_ref, pr_hist, nquantiles=15, kind="*", group="time"
)
scen = QM.adjust(pr_sim)
pr_ref.sel(time="2010").plot(alpha=0.9, label="Reference")
pr_hist.sel(time="2010").plot(alpha=0.7, label="Model - biased")
scen.sel(time="2010").plot(alpha=0.6, label="Model - adjusted")
plt.legend()
###Output
_____no_output_____
###Markdown
In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more "dry days" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10.1007/s10584-011-0224-4).
###Code
# 2nd try with adapt_freq
sim_ad, pth, dP0 = sdba.processing.adapt_freq(
pr_ref, pr_sim, thresh="0.05 mm d-1", group="time"
)
QM_ad = sdba.EmpiricalQuantileMapping.train(
pr_ref, sim_ad, nquantiles=15, kind="*", group="time"
)
scen_ad = QM_ad.adjust(pr_sim)
pr_ref.sel(time="2010").plot(alpha=0.9, label="Reference")
pr_sim.sel(time="2010").plot(alpha=0.7, label="Model - biased")
scen_ad.sel(time="2010").plot(alpha=0.6, label="Model - adjusted")
plt.legend()
###Output
_____no_output_____
###Markdown
Second example: tas and detrendingThe next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. Once `sim` (or `sim_scl`) is detrended, its values are anomalies, so we also need to normalize `ref` and `hist` in order to compare like with like.This process is detailed here to show how the sdba module should be used in custom adjustment processes, but this specific method also exists as `sdba.DetrendedQuantileMapping` and is based on [Cannon et al. 2015](https://doi.org/10.1175/JCLI-D-14-00754.1). Note that `DetrendedQuantileMapping` normalizes over a `time.dayofyear` group, regardless of what is passed in the `group` argument. In any case, as done here, it is recommended to use `dayofyear` groups when normalizing, especially for variables with strong seasonal variations.
###Code
doy_win31 = sdba.Grouper("time.dayofyear", window=15)
Sca = sdba.Scaling.train(ref, hist, group=doy_win31, kind="+")
sim_scl = Sca.adjust(sim)
detrender = sdba.detrending.PolyDetrend(degree=1, group="time.dayofyear", kind="+")
sim_fit = detrender.fit(sim_scl)
sim_detrended = sim_fit.detrend(sim_scl)
ref_n = sdba.processing.normalize(ref, group=doy_win31, kind="+")
hist_n = sdba.processing.normalize(hist, group=doy_win31, kind="+")
QM = sdba.EmpiricalQuantileMapping.train(
ref_n, hist_n, nquantiles=15, group="time.month", kind="+"
)
scen_detrended = QM.adjust(sim_detrended, extrapolation="constant", interp="nearest")
scen = sim_fit.retrend(scen_detrended)
ref.groupby("time.dayofyear").mean().plot(label="Reference")
sim.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
Third example: Multi-method protocol - Hnilica et al. 2017In [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple: use principal components to define coordinates on the reference and on the simulation, and then transform the simulation data from the latter to the former. Spatial correlation can thus be conserved by taking different points as the dimensions of the transform space. The method was demonstrated in the article by bias-adjusting precipitation over different drainage basins.The same method could be used for multivariate adjustment: the principle would be the same, concatenating the different variables into a single dataset along a new dimension.Here we show how the modularity of `xclim.sdba` can be used to construct a rather complex adjustment protocol involving two adjustment methods: quantile mapping and principal components. Since this example uses only two years of data, it is obviously not a complete study; it is only meant to show how the adjustment works and how the API can be used.
###Code
# We are using xarray's "air_temperature" dataset
ds = xr.tutorial.open_dataset("air_temperature")
# To get an exaggerated example we select different points
# here "lon" will be our dimension of two "spatially correlated" points
reft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars(["lon", "lat"])
simt = ds.air.isel(lat=18, lon=[17, 35]).drop_vars(["lon", "lat"])
# Principal Components Adj, no grouping and use "lon" as the space dimensions
PCA = sdba.PrincipalComponents.train(reft, simt, group="time", crd_dims=["lon"])
scen1 = PCA.adjust(simt)
# QM, no grouping, 50 quantiles and additive adjustment
EQM = sdba.EmpiricalQuantileMapping.train(
reft, scen1, group="time", nquantiles=50, kind="+"
)
scen2 = EQM.adjust(scen1)
# some Analysis figures
fig = plt.figure(figsize=(12, 16))
gs = plt.matplotlib.gridspec.GridSpec(3, 2, fig)
axPCA = plt.subplot(gs[0, :])
axPCA.scatter(reft.isel(lon=0), reft.isel(lon=1), s=20, label="Reference")
axPCA.scatter(simt.isel(lon=0), simt.isel(lon=1), s=10, label="Simulation")
axPCA.scatter(scen2.isel(lon=0), scen2.isel(lon=1), s=3, label="Adjusted - PCA+EQM")
axPCA.set_xlabel("Point 1")
axPCA.set_ylabel("Point 2")
axPCA.set_title("PC-space")
axPCA.legend()
refQ = reft.quantile(EQM.ds.quantiles, dim="time")
simQ = simt.quantile(EQM.ds.quantiles, dim="time")
scen1Q = scen1.quantile(EQM.ds.quantiles, dim="time")
scen2Q = scen2.quantile(EQM.ds.quantiles, dim="time")
for i in range(2):
if i == 0:
axQM = plt.subplot(gs[1, 0])
else:
axQM = plt.subplot(gs[1, 1], sharey=axQM)
axQM.plot(refQ.isel(lon=i), simQ.isel(lon=i), label="No adj")
axQM.plot(refQ.isel(lon=i), scen1Q.isel(lon=i), label="PCA")
axQM.plot(refQ.isel(lon=i), scen2Q.isel(lon=i), label="PCA+EQM")
axQM.plot(
refQ.isel(lon=i), refQ.isel(lon=i), color="k", linestyle=":", label="Ideal"
)
axQM.set_title(f"QQ plot - Point {i + 1}")
axQM.set_xlabel("Reference")
axQM.set_xlabel("Model")
axQM.legend()
axT = plt.subplot(gs[2, :])
reft.isel(lon=0).plot(ax=axT, label="Reference")
simt.isel(lon=0).plot(ax=axT, label="Unadjusted sim")
# scen1.isel(lon=0).plot(ax=axT, label='PCA only')
scen2.isel(lon=0).plot(ax=axT, label="PCA+EQM")
axT.legend()
axT.set_title("Timeseries - Point 1")
###Output
_____no_output_____
###Markdown
Fourth example: Multivariate bias-adjustment with multiple steps - Cannon 2018This section replicates the "MBCn" algorithm described by [Cannon (2018)](https://doi.org/10.1007/s00382-017-3580-6). The method relies on a univariate adjustment algorithm, an adaptation of the N-pdf transform of [Pitié et al. (2005)](https://ieeexplore.ieee.org/document/1544887/), and a final reordering step.In the following, we use the AHCCD and CanESM2 data as reference and simulation, and we correct both `pr` and `tasmax` together.
###Code
from xclim.testing import open_dataset
from xclim.core.units import convert_units_to
dref = open_dataset(
"sdba/ahccd_1950-2013.nc", chunks={"location": 1}, drop_variables=["lat", "lon"]
).sel(time=slice("1981", "2010"))
dref = dref.assign(
tasmax=convert_units_to(dref.tasmax, "K"),
pr=convert_units_to(dref.pr, "kg m-2 s-1"),
)
dsim = open_dataset(
"sdba/CanESM2_1950-2100.nc", chunks={"location": 1}, drop_variables=["lat", "lon"]
)
dhist = dsim.sel(time=slice("1981", "2010"))
dsim = dsim.sel(time=slice("2041", "2070"))
dref
###Output
_____no_output_____
###Markdown
Perform an initial univariate adjustment.
###Code
# additive for tasmax
QDMtx = sdba.QuantileDeltaMapping.train(
dref.tasmax, dhist.tasmax, nquantiles=20, kind="+", group="time"
)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_tx = QDMtx.adjust(dhist.tasmax)
scens_tx = QDMtx.adjust(dsim.tasmax)
# remove == 0 values in pr:
dref["pr"] = sdba.processing.jitter_under_thresh(dref.pr, "0.01 mm d-1")
dhist["pr"] = sdba.processing.jitter_under_thresh(dhist.pr, "0.01 mm d-1")
dsim["pr"] = sdba.processing.jitter_under_thresh(dsim.pr, "0.01 mm d-1")
# multiplicative for pr
QDMpr = sdba.QuantileDeltaMapping.train(
dref.pr, dhist.pr, nquantiles=20, kind="*", group="time"
)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_pr = QDMpr.adjust(dhist.pr)
scens_pr = QDMpr.adjust(dsim.pr)
scenh = xr.Dataset(dict(tasmax=scenh_tx, pr=scenh_pr))
scens = xr.Dataset(dict(tasmax=scens_tx, pr=scens_pr))
###Output
_____no_output_____
###Markdown
Stack the variables into multivariate arrays and standardize themThe standardization step ensures that the mean and standard deviation of each column (variable) are 0 and 1, respectively.`hist` and `sim` are standardized together so that the two series are coherent. We keep the mean and standard deviation so they can be reused when we build the result.
###Code
# Stack the variables (tasmax and pr)
ref = sdba.base.stack_variables(dref)
scenh = sdba.base.stack_variables(scenh)
scens = sdba.base.stack_variables(scens)
# Standardize
ref, _, _ = sdba.processing.standardize(ref)
allsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), "time"))
hist = allsim.sel(time=scenh.time)
sim = allsim.sel(time=scens.time)
###Output
_____no_output_____
###Markdown
Perform the N-dimensional probability density function transformThe NpdfTransform iteratively applies random rotations to our arrays in the "variables" space, performs the univariate adjustment, and rotates them back. As shown in Cannon (2018) and Pitié et al. (2005), the source array's joint distribution converges toward the target's joint distribution after a sufficiently large number of iterations.
###Code
from xclim import set_options
# See the advanced notebook for details on how this option work
with set_options(sdba_extra_output=True):
out = sdba.adjustment.NpdfTransform.adjust(
ref,
hist,
sim,
base=sdba.QuantileDeltaMapping, # Use QDM as the univariate adjustment.
base_kws={"nquantiles": 20, "group": "time"},
        n_iter=20,  # perform 20 iterations
        n_escore=1000,  # only send 1000 points to the escore metric (it is really slow)
)
scenh = out.scenh.rename(time_hist="time") # Bias-adjusted historical period
scens = out.scen # Bias-adjusted future period
extra = out.drop_vars(["scenh", "scen"])
# Un-standardize (add the mean and the std back)
scenh = sdba.processing.unstandardize(scenh, savg, sstd)
scens = sdba.processing.unstandardize(scens, savg, sstd)
###Output
_____no_output_____
###Markdown
Restoring the trendThe NpdfT has given us new "hist" and "sim" arrays with a correct rank structure. However, the trend is lost in this process. We reorder the result of the initial adjustment according to the rank structure of the NpdfT outputs to get our final bias-adjusted series.In `sdba.processing.reordering`, the `ref` argument provides the rank order and the `sim` argument is the one being reordered.
###Code
scenh = sdba.processing.reordering(hist, scenh, group="time")
scens = sdba.processing.reordering(sim, scens, group="time")
scenh = sdba.base.unstack_variables(scenh, "variables")
scens = sdba.base.unstack_variables(scens, "variables")
###Output
_____no_output_____
###Markdown
There we are!Let's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call.
###Code
from dask import compute
from dask.diagnostics import ProgressBar
tasks = [
scenh.isel(location=2).to_netcdf("mbcn_scen_hist_loc2.nc", compute=False),
scens.isel(location=2).to_netcdf("mbcn_scen_sim_loc2.nc", compute=False),
extra.escores.isel(location=2)
.to_dataset()
.to_netcdf("mbcn_escores_loc2.nc", compute=False),
]
with ProgressBar():
compute(tasks)
###Output
_____no_output_____
###Markdown
Let's compare the series and look at the distance scores to see how well the Npdf transform has converged.
###Code
scenh = xr.open_dataset("mbcn_scen_hist_loc2.nc")
fig, ax = plt.subplots()
dref.isel(location=2).tasmax.plot(ax=ax, label="Reference")
scenh.tasmax.plot(ax=ax, label="Adjusted", alpha=0.65)
dhist.isel(location=2).tasmax.plot(ax=ax, label="Simulated")
ax.legend()
escores = xr.open_dataarray("mbcn_escores_loc2.nc")
diff_escore = escores.differentiate("iterations")
diff_escore.plot()
plt.title("Difference of the subsequent e-scores.")
plt.ylabel("E-scores difference")
diff_escore
###Output
_____no_output_____
###Markdown
Statistical Downscaling and Bias-Adjustment`xclim` provides tools and utilities to ease the bias-adjustment process through its `xclim.sdba` module. Almost all adjustment algorithms conform to the `train` - `adjust` scheme, formalized within `TrainAdjust` classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method would be applied by first estimating the adjustment factors between the historical simulation and the observation series, and then applying these factors to `sim`, which could be a future simulation.This notebook presents examples, while more information and the API reference are given on [this page](../sdba.rst).A very simple "Quantile Mapping" approach is available through the "Empirical Quantile Mapping" object. The object is created through the `.train` method of the class, and the simulation is adjusted with `.adjust`.
###Code
from __future__ import annotations
import cftime
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
%matplotlib inline
plt.style.use("seaborn")
plt.rcParams["figure.figsize"] = (11, 5)
# Create toy data to explore bias adjustment, here fake temperature timeseries
t = xr.cftime_range("2000-01-01", "2030-12-31", freq="D", calendar="noleap")
ref = xr.DataArray(
(
-20 * np.cos(2 * np.pi * t.dayofyear / 365)
+ 2 * np.random.random_sample((t.size,))
+ 273.15
+ 0.1 * (t - t[0]).days / 365
), # "warming" of 1K per decade,
dims=("time",),
coords={"time": t},
attrs={"units": "K"},
)
sim = xr.DataArray(
(
-18 * np.cos(2 * np.pi * t.dayofyear / 365)
+ 2 * np.random.random_sample((t.size,))
+ 273.15
+ 0.11 * (t - t[0]).days / 365
), # "warming" of 1.1K per decade
dims=("time",),
coords={"time": t},
attrs={"units": "K"},
)
ref = ref.sel(time=slice(None, "2015-01-01"))
hist = sim.sel(time=slice(None, "2015-01-01"))
ref.plot(label="Reference")
sim.plot(label="Model")
plt.legend()
from xclim import sdba
QM = sdba.EmpiricalQuantileMapping.train(
ref, hist, nquantiles=15, group="time", kind="+"
)
scen = QM.adjust(sim, extrapolation="constant", interp="nearest")
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved, so this is not surprising. A more complex case could have bias distributions varying strongly across months. To perform the adjustment with different factors for each month, one can pass `group='time.month'`. Moreover, to reduce the risk of sharp changes in the adjustment at month boundaries, `interp='linear'` can be passed to `adjust`, and the adjustment factors will be interpolated linearly. For example, the factors for the 1st of May will be the average of those for April and those for May.
###Code
QM_mo = sdba.EmpiricalQuantileMapping.train(
ref, hist, nquantiles=15, group="time.month", kind="+"
)
scen = QM_mo.adjust(sim, extrapolation="constant", interp="linear")
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object.
###Code
QM_mo.ds
QM_mo.ds.af.plot()
###Output
_____no_output_____
###Markdown
GroupingFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. The units of `window` are the sampling frequency of the main grouping dimension (usually `time`). For more complex grouping, or simply for clarity, one can pass an `xclim.sdba.base.Grouper` directly.Here is an example with another, simpler adjustment method: we want `sim` to be scaled so that its mean matches that of `ref`. Scaling factors are computed separately for each day of the year, but including 15 days on either side of that day. This means that the factor for the 1st of May is computed from all values between the 16th of April and the 15th of May (of all years).
###Code
group = sdba.Grouper("time.dayofyear", window=31)
QM_doy = sdba.Scaling.train(ref, hist, group=group, kind="+")
scen = QM_doy.adjust(sim)
ref.groupby("time.dayofyear").mean().plot(label="Reference")
hist.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
sim
QM_doy.ds.af.plot()
###Output
_____no_output_____
###Markdown
Modular approachThe `sdba` module adopts a modular approach instead of implementing published and named methods directly.A generic bias adjustment process is laid out as follows:- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)- creating and training the adjustment object `Adj = Adjustment.train(obs, hist, **kwargs)` (from `xclim.sdba.adjustment`)- adjustment `scen = Adj.adjust(sim, **kwargs)`- post-processing on `scen` (for example: re-trending)The train-adjust approach allows one to inspect the trained adjustment object. The training information is stored in the underlying `Adj.ds` dataset and often has an `af` variable with the adjustment factors. Its layout and the other available variables vary between the different algorithms; refer to their part of the API docs.For heavy processing, this separation allows the training dataset to be computed and written to disk before performing the adjustment(s). See the [advanced notebook](sdba-advanced.ipynb).Parameters needed by both the training and the adjustment are saved to the `Adj.ds` dataset as an `adj_params` attribute. Parameters needed only by the adjustment are passed in the `adjust` call and written to the history attribute of the output scenario DataArray. First example: pr and frequency adaptationThe next example generates fake precipitation data and adjusts the `sim` timeseries, but also adds a step where the dry-day frequency of `hist` is adapted so that it fits that of `ref`. This ensures well-behaved adjustment factors for the smaller quantiles. Note also that we are passing `kind='*'` to use the multiplicative mode: adjustment factors will be multiplied/divided instead of being added/subtracted.
###Code
vals = np.random.randint(0, 1000, size=(t.size,)) / 100
vals_ref = (4 ** np.where(vals < 9, vals / 100, vals)) / 3e6
vals_sim = (
(1 + 0.1 * np.random.random_sample((t.size,)))
* (4 ** np.where(vals < 9.5, vals / 100, vals))
/ 3e6
)
pr_ref = xr.DataArray(
vals_ref, coords={"time": t}, dims=("time",), attrs={"units": "mm/day"}
)
pr_ref = pr_ref.sel(time=slice("2000", "2015"))
pr_sim = xr.DataArray(
vals_sim, coords={"time": t}, dims=("time",), attrs={"units": "mm/day"}
)
pr_hist = pr_sim.sel(time=slice("2000", "2015"))
pr_ref.plot(alpha=0.9, label="Reference")
pr_sim.plot(alpha=0.7, label="Model")
plt.legend()
# 1st try without adapt_freq
QM = sdba.EmpiricalQuantileMapping.train(
pr_ref, pr_hist, nquantiles=15, kind="*", group="time"
)
scen = QM.adjust(pr_sim)
pr_ref.sel(time="2010").plot(alpha=0.9, label="Reference")
pr_hist.sel(time="2010").plot(alpha=0.7, label="Model - biased")
scen.sel(time="2010").plot(alpha=0.6, label="Model - adjusted")
plt.legend()
###Output
_____no_output_____
###Markdown
In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more "dry days" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10.1007/s10584-011-0224-4).
###Code
# 2nd try with adapt_freq
sim_ad, pth, dP0 = sdba.processing.adapt_freq(
pr_ref, pr_sim, thresh="0.05 mm d-1", group="time"
)
QM_ad = sdba.EmpiricalQuantileMapping.train(
pr_ref, sim_ad, nquantiles=15, kind="*", group="time"
)
scen_ad = QM_ad.adjust(pr_sim)
pr_ref.sel(time="2010").plot(alpha=0.9, label="Reference")
pr_sim.sel(time="2010").plot(alpha=0.7, label="Model - biased")
scen_ad.sel(time="2010").plot(alpha=0.6, label="Model - adjusted")
plt.legend()
###Output
_____no_output_____
###Markdown
Second example: tas and detrendingThe next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. Once `sim` (or `sim_scl`) is detrended, its values are anomalies, so we also need to normalize `ref` and `hist` in order to compare like with like.This process is detailed here to show how the sdba module should be used in custom adjustment processes, but this specific method also exists as `sdba.DetrendedQuantileMapping` and is based on [Cannon et al. 2015](https://doi.org/10.1175/JCLI-D-14-00754.1). Note that `DetrendedQuantileMapping` normalizes over a `time.dayofyear` group, regardless of what is passed in the `group` argument. In any case, as done here, it is recommended to use `dayofyear` groups when normalizing, especially for variables with strong seasonal variations.
###Code
doy_win31 = sdba.Grouper("time.dayofyear", window=15)
Sca = sdba.Scaling.train(ref, hist, group=doy_win31, kind="+")
sim_scl = Sca.adjust(sim)
detrender = sdba.detrending.PolyDetrend(degree=1, group="time.dayofyear", kind="+")
sim_fit = detrender.fit(sim_scl)
sim_detrended = sim_fit.detrend(sim_scl)
ref_n, _ = sdba.processing.normalize(ref, group=doy_win31, kind="+")
hist_n, _ = sdba.processing.normalize(hist, group=doy_win31, kind="+")
QM = sdba.EmpiricalQuantileMapping.train(
ref_n, hist_n, nquantiles=15, group="time.month", kind="+"
)
scen_detrended = QM.adjust(sim_detrended, extrapolation="constant", interp="nearest")
scen = sim_fit.retrend(scen_detrended)
ref.groupby("time.dayofyear").mean().plot(label="Reference")
sim.groupby("time.dayofyear").mean().plot(label="Model - biased")
scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2000-15", linestyle="--"
)
scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot(
label="Model - adjusted - 2015-30", linestyle="--"
)
plt.legend()
###Output
_____no_output_____
###Markdown
Third example: Multi-method protocol - Hnilica et al. 2017In [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple: use principal components to define coordinates on the reference and on the simulation, and then transform the simulation data from the latter to the former. Spatial correlation can thus be conserved by taking different points as the dimensions of the transform space. The method was demonstrated in the article by bias-adjusting precipitation over different drainage basins.The same method could be used for multivariate adjustment: the principle would be the same, concatenating the different variables into a single dataset along a new dimension. An example is given in the [advanced notebook](sdba-advanced.ipynb).Here we show how the modularity of `xclim.sdba` can be used to construct a rather complex adjustment protocol involving two adjustment methods: quantile mapping and principal components. Since this example uses only two years of data, it is obviously not a complete study; it is only meant to show how the adjustment works and how the API can be used.
###Code
# We are using xarray's "air_temperature" dataset
ds = xr.tutorial.open_dataset("air_temperature")
# To get an exaggerated example we select different points
# here "lon" will be our dimension of two "spatially correlated" points
reft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars(["lon", "lat"])
simt = ds.air.isel(lat=18, lon=[17, 35]).drop_vars(["lon", "lat"])
# Principal Components Adj, no grouping and use "lon" as the space dimensions
PCA = sdba.PrincipalComponents.train(reft, simt, group="time", crd_dim="lon")
scen1 = PCA.adjust(simt)
# QM, no grouping, 50 quantiles and additive adjustment
EQM = sdba.EmpiricalQuantileMapping.train(
reft, scen1, group="time", nquantiles=50, kind="+"
)
scen2 = EQM.adjust(scen1)
# some Analysis figures
fig = plt.figure(figsize=(12, 16))
gs = plt.matplotlib.gridspec.GridSpec(3, 2, fig)
axPCA = plt.subplot(gs[0, :])
axPCA.scatter(reft.isel(lon=0), reft.isel(lon=1), s=20, label="Reference")
axPCA.scatter(simt.isel(lon=0), simt.isel(lon=1), s=10, label="Simulation")
axPCA.scatter(scen2.isel(lon=0), scen2.isel(lon=1), s=3, label="Adjusted - PCA+EQM")
axPCA.set_xlabel("Point 1")
axPCA.set_ylabel("Point 2")
axPCA.set_title("PC-space")
axPCA.legend()
refQ = reft.quantile(EQM.ds.quantiles, dim="time")
simQ = simt.quantile(EQM.ds.quantiles, dim="time")
scen1Q = scen1.quantile(EQM.ds.quantiles, dim="time")
scen2Q = scen2.quantile(EQM.ds.quantiles, dim="time")
for i in range(2):
if i == 0:
axQM = plt.subplot(gs[1, 0])
else:
axQM = plt.subplot(gs[1, 1], sharey=axQM)
axQM.plot(refQ.isel(lon=i), simQ.isel(lon=i), label="No adj")
axQM.plot(refQ.isel(lon=i), scen1Q.isel(lon=i), label="PCA")
axQM.plot(refQ.isel(lon=i), scen2Q.isel(lon=i), label="PCA+EQM")
axQM.plot(
refQ.isel(lon=i), refQ.isel(lon=i), color="k", linestyle=":", label="Ideal"
)
axQM.set_title(f"QQ plot - Point {i + 1}")
axQM.set_xlabel("Reference")
axQM.set_xlabel("Model")
axQM.legend()
axT = plt.subplot(gs[2, :])
reft.isel(lon=0).plot(ax=axT, label="Reference")
simt.isel(lon=0).plot(ax=axT, label="Unadjusted sim")
# scen1.isel(lon=0).plot(ax=axT, label='PCA only')
scen2.isel(lon=0).plot(ax=axT, label="PCA+EQM")
axT.legend()
axT.set_title("Timeseries - Point 1")
###Output
_____no_output_____
###Markdown
Fourth example: Multivariate bias-adjustment with multiple steps - Cannon 2018This section replicates the "MBCn" algorithm described by [Cannon (2018)](https://doi.org/10.1007/s00382-017-3580-6). The method relies on a univariate adjustment algorithm, an adaptation of the N-pdf transform of [Pitié et al. (2005)](https://ieeexplore.ieee.org/document/1544887/), and a final reordering step.In the following, we use the AHCCD and CanESM2 data as reference and simulation, and we correct both `pr` and `tasmax` together.
###Code
from xclim.core.units import convert_units_to
from xclim.testing import open_dataset
dref = open_dataset(
"sdba/ahccd_1950-2013.nc", chunks={"location": 1}, drop_variables=["lat", "lon"]
).sel(time=slice("1981", "2010"))
dref = dref.assign(
tasmax=convert_units_to(dref.tasmax, "K"),
pr=convert_units_to(dref.pr, "kg m-2 s-1"),
)
dsim = open_dataset(
"sdba/CanESM2_1950-2100.nc", chunks={"location": 1}, drop_variables=["lat", "lon"]
)
dhist = dsim.sel(time=slice("1981", "2010"))
dsim = dsim.sel(time=slice("2041", "2070"))
dref
###Output
_____no_output_____
###Markdown
Perform an initial univariate adjustment.
###Code
# additive for tasmax
QDMtx = sdba.QuantileDeltaMapping.train(
dref.tasmax, dhist.tasmax, nquantiles=20, kind="+", group="time"
)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_tx = QDMtx.adjust(dhist.tasmax)
scens_tx = QDMtx.adjust(dsim.tasmax)
# remove == 0 values in pr:
dref["pr"] = sdba.processing.jitter_under_thresh(dref.pr, "0.01 mm d-1")
dhist["pr"] = sdba.processing.jitter_under_thresh(dhist.pr, "0.01 mm d-1")
dsim["pr"] = sdba.processing.jitter_under_thresh(dsim.pr, "0.01 mm d-1")
# multiplicative for pr
QDMpr = sdba.QuantileDeltaMapping.train(
dref.pr, dhist.pr, nquantiles=20, kind="*", group="time"
)
# Adjust both hist and sim, we'll feed both to the Npdf transform.
scenh_pr = QDMpr.adjust(dhist.pr)
scens_pr = QDMpr.adjust(dsim.pr)
scenh = xr.Dataset(dict(tasmax=scenh_tx, pr=scenh_pr))
scens = xr.Dataset(dict(tasmax=scens_tx, pr=scens_pr))
###Output
_____no_output_____
###Markdown
Stack the variables into multivariate arrays and standardize themThe standardization step ensures that the mean and standard deviation of each column (variable) are 0 and 1, respectively.`hist` and `sim` are standardized together so that the two series are coherent. We keep the mean and standard deviation so they can be reused when we build the result.
###Code
# Stack the variables (tasmax and pr)
ref = sdba.processing.stack_variables(dref)
scenh = sdba.processing.stack_variables(scenh)
scens = sdba.processing.stack_variables(scens)
# Standardize
ref, _, _ = sdba.processing.standardize(ref)
allsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), "time"))
hist = allsim.sel(time=scenh.time)
sim = allsim.sel(time=scens.time)
###Output
_____no_output_____
###Markdown
Perform the N-dimensional probability density function transformThe NpdfTransform iteratively applies random rotations to our arrays in the "variables" space, performs the univariate adjustment, and rotates them back. As shown in Cannon (2018) and Pitié et al. (2005), the source array's joint distribution converges toward the target's joint distribution after a sufficiently large number of iterations.
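To make the rotate/adjust/rotate-back idea concrete, here is a small self-contained NumPy toy (a sketch with made-up data, independent of xclim) that transfers the joint distribution of a 2-variable source sample onto a target by iterating random rotations and 1-D rank-based quantile mapping:

    import numpy as np

    rng = np.random.default_rng(0)
    src = rng.normal(size=(1000, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])  # correlated source
    tgt = rng.normal(loc=1.0, scale=[1.0, 2.0], size=(1000, 2))  # differently-shaped target

    def qm_1d(x, y):
        # Rank-wise empirical quantile mapping of x onto y (same sample size)
        out = np.empty_like(x)
        out[np.argsort(x)] = np.sort(y)
        return out

    for _ in range(30):
        Q, _ = np.linalg.qr(rng.normal(size=(2, 2)))  # random orthogonal rotation
        src_r, tgt_r = src @ Q, tgt @ Q  # rotate both samples
        src_r = np.column_stack([qm_1d(src_r[:, j], tgt_r[:, j]) for j in range(2)])
        src = src_r @ Q.T  # rotate back

    print(np.corrcoef(src.T).round(2))  # now close to the target's correlation
    print(np.corrcoef(tgt.T).round(2))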
###Code
from xclim import set_options
# See the advanced notebook for details on how this option work
with set_options(sdba_extra_output=True):
out = sdba.adjustment.NpdfTransform.adjust(
ref,
hist,
sim,
base=sdba.QuantileDeltaMapping, # Use QDM as the univariate adjustment.
base_kws={"nquantiles": 20, "group": "time"},
        n_iter=20,  # perform 20 iterations
        n_escore=1000,  # only send 1000 points to the escore metric (it is really slow)
)
scenh = out.scenh.rename(time_hist="time") # Bias-adjusted historical period
scens = out.scen # Bias-adjusted future period
extra = out.drop_vars(["scenh", "scen"])
# Un-standardize (add the mean and the std back)
scenh = sdba.processing.unstandardize(scenh, savg, sstd)
scens = sdba.processing.unstandardize(scens, savg, sstd)
###Output
_____no_output_____
###Markdown
Restoring the trendThe NpdfT has given us new "hist" and "sim" arrays with a correct rank structure. However, the trend is lost in this process. We reorder the result of the initial adjustment according to the rank structure of the NpdfT outputs to get our final bias-adjusted series.In `sdba.processing.reordering`, the `ref` argument provides the rank order and the `sim` argument is the one being reordered.
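As a tiny illustration of the argument semantics (a NumPy sketch with made-up numbers, independent of xclim): reordering keeps the values of `sim` but arranges them following the rank structure of `ref`.

    import numpy as np

    ref = np.array([3.0, 1.0, 2.0])  # provides the rank structure: high, low, mid
    sim = np.array([10.0, 30.0, 20.0])  # values to reorder
    ranks = np.argsort(np.argsort(ref))  # rank of each ref value -> [2, 0, 1]
    print(np.sort(sim)[ranks])  # -> [30. 10. 20.]: sim's values in ref's rank order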
###Code
scenh = sdba.processing.reordering(hist, scenh, group="time")
scens = sdba.processing.reordering(sim, scens, group="time")
scenh = sdba.processing.unstack_variables(scenh)
scens = sdba.processing.unstack_variables(scens)
###Output
_____no_output_____
###Markdown
There we are!Let's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call.
###Code
from dask import compute
from dask.diagnostics import ProgressBar
tasks = [
scenh.isel(location=2).to_netcdf("mbcn_scen_hist_loc2.nc", compute=False),
scens.isel(location=2).to_netcdf("mbcn_scen_sim_loc2.nc", compute=False),
extra.escores.isel(location=2)
.to_dataset()
.to_netcdf("mbcn_escores_loc2.nc", compute=False),
]
with ProgressBar():
compute(tasks)
###Output
_____no_output_____
###Markdown
Let's compare the series and look at the distance scores to see how well the Npdf transform has converged.
###Code
scenh = xr.open_dataset("mbcn_scen_hist_loc2.nc")
fig, ax = plt.subplots()
dref.isel(location=2).tasmax.plot(ax=ax, label="Reference")
scenh.tasmax.plot(ax=ax, label="Adjusted", alpha=0.65)
dhist.isel(location=2).tasmax.plot(ax=ax, label="Simulated")
ax.legend()
escores = xr.open_dataarray("mbcn_escores_loc2.nc")
diff_escore = escores.differentiate("iterations")
diff_escore.plot()
plt.title("Difference of the subsequent e-scores.")
plt.ylabel("E-scores difference")
diff_escore
###Output
_____no_output_____ |
Tomato/train_tomato.ipynb | ###Markdown
First go to Edit/Notebook settings and set the hardware accelerator to ```GPU```. Then clone the repo, mount Drive and install the dependencies.
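Before cloning, you can optionally confirm that a GPU is actually attached by running the following in a code cell:

    !nvidia-smi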
###Code
!git clone https://github.com/victorpujolle/Tomato_detection
###Output
Cloning into 'Tomato_detection'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (36/36), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 997 (delta 13), reused 28 (delta 8), pack-reused 961
Receiving objects: 100% (997/997), 112.21 MiB | 43.62 MiB/s, done.
Resolving deltas: 100% (592/592), done.
###Markdown
Mount Google Drive, where the dataset is stored
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd /content/Tomato_detection/
!python setup.py install
###Output
/content/Tomato_detection
WARNING:root:Fail load requirements file, so using default ones.
running install
running bdist_egg
running egg_info
creating mask_rcnn.egg-info
writing mask_rcnn.egg-info/PKG-INFO
writing dependency_links to mask_rcnn.egg-info/dependency_links.txt
writing top-level names to mask_rcnn.egg-info/top_level.txt
writing manifest file 'mask_rcnn.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'mask_rcnn.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/mrcnn
copying mrcnn/__init__.py -> build/lib/mrcnn
copying mrcnn/config.py -> build/lib/mrcnn
copying mrcnn/visualize.py -> build/lib/mrcnn
copying mrcnn/utils.py -> build/lib/mrcnn
copying mrcnn/parallel_model.py -> build/lib/mrcnn
copying mrcnn/model.py -> build/lib/mrcnn
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/mrcnn
copying build/lib/mrcnn/__init__.py -> build/bdist.linux-x86_64/egg/mrcnn
copying build/lib/mrcnn/config.py -> build/bdist.linux-x86_64/egg/mrcnn
copying build/lib/mrcnn/visualize.py -> build/bdist.linux-x86_64/egg/mrcnn
copying build/lib/mrcnn/utils.py -> build/bdist.linux-x86_64/egg/mrcnn
copying build/lib/mrcnn/parallel_model.py -> build/bdist.linux-x86_64/egg/mrcnn
copying build/lib/mrcnn/model.py -> build/bdist.linux-x86_64/egg/mrcnn
byte-compiling build/bdist.linux-x86_64/egg/mrcnn/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/mrcnn/config.py to config.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/mrcnn/visualize.py to visualize.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/mrcnn/utils.py to utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/mrcnn/parallel_model.py to parallel_model.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/mrcnn/model.py to model.cpython-36.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying mask_rcnn.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying mask_rcnn.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying mask_rcnn.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying mask_rcnn.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating dist
creating 'dist/mask_rcnn-2.1-py3.6.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing mask_rcnn-2.1-py3.6.egg
Copying mask_rcnn-2.1-py3.6.egg to /usr/local/lib/python3.6/dist-packages
Adding mask-rcnn 2.1 to easy-install.pth file
Installed /usr/local/lib/python3.6/dist-packages/mask_rcnn-2.1-py3.6.egg
Processing dependencies for mask-rcnn==2.1
Finished processing dependencies for mask-rcnn==2.1
###Markdown
Mask R-CNN - Train on the Tomato DatasetThis notebook shows how to train Mask R-CNN on your own dataset, in this case a small tomato-detection dataset annotated with the VGG Image Annotator. You still need a GPU, though, because the network backbone is a ResNet-101, which would be too slow to train on a CPU.
###Code
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
import skimage
import itertools
import logging
import json
import re
import random
from collections import OrderedDict
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.lines as lines
from matplotlib.patches import Polygon
# Root directory of the project
ROOT_DIR = os.path.abspath("./../")
print(os.listdir(ROOT_DIR))
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
###Output
['.config', 'drive', 'Tomato_detection', '.ipynb_checkpoints', 'sample_data']
Downloading pretrained model to /content/mask_rcnn_coco.h5 ...
... done downloading pretrained model!
###Markdown
Configurations
###Code
class TomatoConfig(Config):
"""Configuration for training on the toy dataset.
Derives from the base Config class and overrides some values.
"""
# Give the configuration a recognizable name
NAME = "tomato"
# We use a GPU with 12GB memory, which can fit two images.
# Adjust down if you use a smaller GPU.
IMAGES_PER_GPU = 2
# Number of classes (including background)
NUM_CLASSES = 1 + 1 # Background + tomato
# Number of training steps per epoch
STEPS_PER_EPOCH = 100
# Skip detections with < 99% confidence
DETECTION_MIN_CONFIDENCE = 0.99
config = TomatoConfig()
config.display()
###Output
Configurations:
BACKBONE resnet101
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 2
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE None
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.9
DETECTION_NMS_THRESHOLD 0.3
FPN_CLASSIF_FC_LAYERS_SIZE 1024
GPU_COUNT 1
GRADIENT_CLIP_NORM 5.0
IMAGES_PER_GPU 2
IMAGE_CHANNEL_COUNT 3
IMAGE_MAX_DIM 1024
IMAGE_META_SIZE 14
IMAGE_MIN_DIM 800
IMAGE_MIN_SCALE 0
IMAGE_RESIZE_MODE square
IMAGE_SHAPE [1024 1024 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.001
LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 100
MEAN_PIXEL [123.7 116.8 103.9]
MINI_MASK_SHAPE (56, 56)
NAME tomato
NUM_CLASSES 2
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
PRE_NMS_LIMIT 6000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 100
TOP_DOWN_PYRAMID_SIZE 256
TRAIN_BN False
TRAIN_ROIS_PER_IMAGE 200
USE_MINI_MASK True
USE_RPN_ROIS True
VALIDATION_STEPS 50
WEIGHT_DECAY 0.0001
###Markdown
Notebook Preferences
###Code
def get_ax(rows=1, cols=1, size=8):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Change the default size attribute to control the size
of rendered images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
###Output
_____no_output_____
###Markdown
DatasetExtend the `utils.Dataset` class with a method that loads the tomato dataset, `load_tomato()`, and override the following methods:* load_mask()* image_reference()
###Code
class TomatoDataset(utils.Dataset):
def load_tomato(self, dataset_dir, subset):
"""Load a subset of the Balloon dataset.
dataset_dir: Root directory of the dataset.
subset: Subset to load: train or val
"""
# Add classes. We have only one class to add.
self.add_class("tomato", 1, "tomato")
# Train or validation dataset?
assert subset in ["train", "val"]
dataset_dir = os.path.join(dataset_dir, subset)
# Load annotations
# VGG Image Annotator (up to version 1.6) saves each image in the form:
# { 'filename': '28503151_5b5b7ec140_b.jpg',
# 'regions': {
# '0': {
# 'region_attributes': {},
# 'shape_attributes': {
# 'all_points_x': [...],
# 'all_points_y': [...],
# 'name': 'polygon'}},
# ... more regions ...
# },
# 'size': 100202
# }
# We mostly care about the x and y coordinates of each region
# Note: In VIA 2.0, regions was changed from a dict to a list.
annotations = json.load(open(os.path.join(dataset_dir, "via_region_data.json")))
annotations = list(annotations.values()) # don't need the dict keys
# The VIA tool saves images in the JSON even if they don't have any
# annotations. Skip unannotated images.
annotations = [a for a in annotations if a['regions']]
# Add images
for a in annotations:
# Get the x, y coordinates of points of the polygons that make up
# the outline of each object instance. These are stored in the
# shape_attributes (see json format above)
# The if condition is needed to support VIA versions 1.x and 2.x.
if type(a['regions']) is dict:
polygons = [r['shape_attributes'] for r in a['regions'].values()]
else:
polygons = [r['shape_attributes'] for r in a['regions']]
# load_mask() needs the image size to convert polygons to masks.
# Unfortunately, VIA doesn't include it in JSON, so we must read
# the image. This is only manageable since the dataset is tiny.
image_path = os.path.join(dataset_dir, a['filename'])
image = skimage.io.imread(image_path)
height, width = image.shape[:2]
self.add_image(
"tomato",
image_id=a['filename'], # use file name as a unique image id
path=image_path,
width=width, height=height,
polygons=polygons)
def load_mask(self, image_id):
"""Generate instance masks for an image.
Returns:
masks: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance masks.
"""
# If not a tomato dataset image, delegate to parent class.
image_info = self.image_info[image_id]
if image_info["source"] != "tomato":
return super(self.__class__, self).load_mask(image_id)
# Convert polygons to a bitmap mask of shape
# [height, width, instance_count]
info = self.image_info[image_id]
mask = np.zeros([info["height"], info["width"], len(info["polygons"])],
dtype=np.uint8)
for i, p in enumerate(info["polygons"]):
# Get indexes of pixels inside the polygon and set them to 1
rr, cc = skimage.draw.polygon(p['all_points_y'], p['all_points_x'])
mask[rr, cc, i] = 1
# Return mask, and array of class IDs of each instance. Since we have
# one class ID only, we return an array of 1s
return mask.astype(np.bool), np.ones([mask.shape[-1]], dtype=np.int32)
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "tomato":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
###Output
_____no_output_____
###Markdown
Load datasets`dataset_dir` should be the path of the dataset in the drive. In this case the annotations were created using VGG Image Annotator (up to version 1.6), which saves each image in the form: { 'filename': '28503151_5b5b7ec140_b.jpg', 'regions': { '0': { 'region_attributes': {}, 'shape_attributes': { 'all_points_x': [...], 'all_points_y': [...], 'name': 'polygon'}}, ... more regions ... }, 'size': 100202 } We mostly care about the x and y coordinates of each region. The dataset dir should contain 2 subfolders named ```train``` and ```val``` with the training and the validation data; each of these subfolders must contain a file named ```via_region_data.json```. Of course you can make your own dataset class for your own dataset.
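Before loading, a quick sanity check of the expected layout can help; here is a minimal sketch (assuming the same `dataset_dir` path used in the next cell):
```python
import os

dataset_dir = '../drive/My Drive/Real_dataset'  # same path as in the cell below
for subset in ("train", "val"):
    ann_path = os.path.join(dataset_dir, subset, "via_region_data.json")
    status = "found" if os.path.exists(ann_path) else "missing"
    print("%s annotations %s: %s" % (subset, status, ann_path))
```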
###Code
dataset_dir = '../drive/My Drive/Real_dataset'
# Training dataset.
dataset_train = TomatoDataset()
dataset_train.load_tomato(dataset_dir, "train")
dataset_train.prepare()
# Validation dataset
dataset_val = TomatoDataset()
dataset_val.load_tomato(dataset_dir, "val")
dataset_val.prepare()
# Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 4)
for image_id in image_ids:
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
###Output
_____no_output_____
###Markdown
Create Model
###Code
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
init_with = "coco"
if init_with == "coco":
model.load_weights(COCO_MODEL_PATH, by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last(), by_name=True)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:197: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:203: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:207: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:216: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:223: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
###Markdown
TrainingTrain in two stages:1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones for which we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
###Code
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=1,
layers='heads')
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 10,
epochs=2,
layers="all")
# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
print(MODEL_DIR)
print(os.listdir(MODEL_DIR))
model_path = os.path.join(MODEL_DIR, "mask_rcnn_tomato.h5")
model.keras_model.save_weights(model_path)
# uncomment this to save the weights in your drive
#this one takes quite a long time
#MODEL_DIR_drive = '../drive/My Drive/logs_tomato'
#model_path = os.path.join(MODEL_DIR_drive, "mask_rcnn_tomato.h5")
#model.keras_model.save_weights(model_path)
###Output
/content/logs
['tomato20191119T0928']
###Markdown
Detection
###Code
class InferenceConfig(TomatoConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
inference_config = InferenceConfig()
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()
# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# Test on a random image
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
results = model.detect([original_image], verbose=1)
r = results[0]
visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
dataset_val.class_names, r['scores'], ax=get_ax())
###Output
Processing 1 images
image shape: (1024, 1024, 3) min: 0.00000 max: 255.00000 uint8
molded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 151.10000 float64
image_metas shape: (1, 14) min: 0.00000 max: 1024.00000 int64
anchors shape: (1, 261888, 4) min: -0.35390 max: 1.29134 float32
###Markdown
Evaluation
###Code
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
image_ids = np.random.choice(dataset_val.image_ids, 10)
APs = []
for image_id in image_ids:
# Load image and ground truth data
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
# Run object detection
results = model.detect([image], verbose=0)
r = results[0]
# Compute AP
AP, precisions, recalls, overlaps =\
utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
r["rois"], r["class_ids"], r["scores"], r['masks'])
APs.append(AP)
print("mAP: ", np.mean(APs))
###Output
_____no_output_____ |
deep-learning/CapsNET/TensorFlow_Implementation/CapsNET Testing.ipynb | ###Markdown
CapsNET**What you'll need is the following**:- Tensorflow (I'm using the latest TF version 1.3.x)- tqdm (uses a progress bar so you can follow the progress of your epochs)- numpy- MNIST dataset- utils.py (type `help(utils)` to see what it has; this is where you download the dataset and use `numpy`, `os`, `scipy` to do regular stuff such as 1) load MNIST data 2) get batch data 3) save and merge the images)- config.py (type `help(config)` to get more info). This one is the place where your *hyperparameters* and *env variables* sit- capsLayer.py (type `help(capsLayer)` for more info). This is the Capsule Layer. This is what it takes:``` Capsule layer. Args: input: A 4-D tensor. num_units: integer, the length of the output vector of a capsule. with_routing: boolean, this capsule is routing with the lower-level layer capsule. num_outputs: the number of capsules in this layer. Returns: A 4-D tensor.```- capsNet.py (type `help(capsNet)` to get more details). Key functions in this class are the model architecture and the loss.
###Code
! pip install tqdm
! mkdir -p data/mnist
! wget -c -P data/mnist http://yann.lecun.com/exdb/mnist/{train-images-idx3-ubyte.gz,train-labels-idx1-ubyte.gz,t10k-images-idx3-ubyte.gz,t10k-labels-idx1-ubyte.gz}
! gunzip data/mnist/*.gz
! ls data/mnist/
import tensorflow as tf
from tqdm import tqdm
from config import cfg
from capsNet import CapsNet
capsNet = CapsNet(is_training=cfg.is_training)
tf.logging.info("Graph is loaded")
sv = tf.train.Supervisor(graph=capsNet.graph,
logdir=cfg.logdir,
save_model_secs=0)
###Output
INFO:tensorflow:Seting up the main structure
INFO:tensorflow:Graph is loaded
###Markdown
I've changed a couple of parameters in the config file to the following```pythonflags.DEFINE_float('m_plus', 0.9, 'the parameter of m plus')flags.DEFINE_float('m_minus', 0.1, 'the parameter of m minus')flags.DEFINE_float('lambda_val', 0.5, 'down weight of the loss for absent digit classes')flags.DEFINE_integer('batch_size', 256, 'batch size')flags.DEFINE_integer('epoch', 2000, 'epoch')flags.DEFINE_integer('iter_routing', 3, 'number of iterations in routing algorithm')```
###Code
#Start the session and train
with sv.managed_session() as sess:
num_batch = int(120000 / cfg.batch_size)
for epoch in range(cfg.epoch):
if sv.should_stop():
break
for step in tqdm(range(num_batch), total=num_batch, ncols=70, leave=False, unit='b'):
sess.run(capsNet.train_op)
global_step = sess.run(capsNet.global_step)
sv.saver.save(sess, cfg.logdir + '/model_epoch_%04d_step_%02d' % (epoch, global_step))
tf.logging.info("Done with training")
! python eval.py --is_training False
###Output
_____no_output_____ |
005_Python_Keywords_and_Identifiers.ipynb | ###Markdown
All the IPython Notebooks in **Python Introduction** lecture series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/01_Python_Introduction)** Python Keywords and IdentifiersIn this class, you will learn about keywords (reserved words in Python) and identifiers (names given to variables, functions, etc.). 1. Python KeywordsKeywords are the reserved words in Python.We cannot use a keyword as a **[variable](https://github.com/milaan9/01_Python_Introduction/blob/main/009_Python_Data_Types.ipynb)** name, **[function](https://github.com/milaan9/04_Python_Functions/blob/main/001_Python_Functions.ipynb)** name or any other identifier. They are used to define the syntax and structure of the Python language.In Python, keywords are **case sensitive**.There are **35** keywords in Python 3.9. This number can vary slightly over the course of time.All the keywords except **`True`**, **`False`** and **`None`** are in lowercase and they must be written as they are. The **[list of all the keywords](https://github.com/milaan9/01_Python_Introduction/blob/main/Python_Keywords_List.ipynb)** is given below.**Keywords in Python**| | | | | ||:----|:----|:----|:----|:----|| **`False`** | **`await`** | **`else`** | **`import`** | **`pass`** || **`None`** | **`break`** | **`except`** | **`in`** | **`raise`** || **`True`** | **`class`** | **`finally`** | **`is`** | **`return`** || **`and`** | **`continue`** | **`for`** | **`lambda`** | **`try`** || **`as`** | **`def`** | **`from`** | **`nonlocal`** | **`while`** || **`assert`** | **`del`** | **`global`** | **`not`** | **`with`** || **`async`** | **`elif`** | **`if`** | **`or`** | **`yield`** |You can see this list any time by typing **`help("keywords")`** in the Python interpreter. Trying to create a variable with the same name as any reserved word results in an **error**:```python>>>for = 6File "", line 1for = 6 # It will give an error because "for" is a keyword and we cannot use it as a variable name. ^SyntaxError: invalid syntax```
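To check the exact set of keywords for the interpreter you are running, a small sketch using the standard `keyword` module:
```python
# List the reserved words of the running Python interpreter.
import keyword

print(keyword.kwlist)       # all keywords
print(len(keyword.kwlist))  # the count varies slightly between Python versions
```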
###Code
for = 6 # It will give an error because "for" is a keyword and we cannot use it as a variable name.
For = 6 # "for" is a keyword but "For" is not, so we can use it as a variable name
For
###Output
_____no_output_____
###Markdown
2. Python IdentifiersAn **identifier** is a name given to entities like **classes, functions, variables, etc**. It helps to differentiate one entity from another. Rules for writing identifiers1. **Identifiers** can be a combination of letters in lowercase **(a to z)** or uppercase **(A to Z)** or digits **(0 to 9)** or an underscore **`_`**. Names like **`myClass`**, **`var_1`** and **`print_this_to_screen`** are all valid examples. 2. An identifier cannot start with a digit. **`1variable`** is invalid, but **`variable1`** is perfectly fine. 3. Keywords cannot be used as identifiers```python>>>global = 3File "", line 1 global = 3 # because "global" is a keyword ^SyntaxError: invalid syntax```
###Code
global = 3 # because "global" is a keyword
###Output
_____no_output_____
###Markdown
4. We cannot use special symbols like **!**, **@**, **#**, **$**, **%**, etc. in our identifier.```python>>>m@ = 3File "", line 1 m@ = 3 ^SyntaxError: invalid syntax```
###Code
m@ = 3
###Output
_____no_output_____
###Markdown
Things to RememberPython is a case-sensitive language. This means, **`Variable`** and **`variable`** are not the same.Always give the identifiers a name that makes sense. While **`c = 10`** is a valid name, writing **`count = 10`** would make more sense, and it would be easier to figure out what it represents when you look at your code after a long gap.Multiple words can be separated using an underscore, like **`this_is_a_long_variable`**.
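A tiny illustration of the case-sensitivity point (the names are only examples):
```python
# 'variable' and 'Variable' are two different identifiers.
variable = 10
Variable = 20
print(variable, Variable) # 10 20
```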
###Code
this_is_a_long_variable = 6+3
this_is_a_long_variable
add_6_and_3 = 6+3
add_6_and_3
###Output
_____no_output_____
###Markdown
All the IPython Notebooks in **Python Introduction** lecture series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/01_Python_Introduction)** Python Keywords and IdentifiersIn this class, you will learn about keywords (reserved words in Python) and identifiers (names given to variables, functions, etc.). 1. Python KeywordsKeywords are the reserved words in Python.We cannot use a keyword as a **[variable](https://github.com/milaan9/01_Python_Introduction/blob/main/009_Python_Data_Types.ipynb)** name, **[function](https://github.com/milaan9/04_Python_Functions/blob/main/001_Python_Functions.ipynb)** name or any other identifier. They are used to define the syntax and structure of the Python language.In Python, keywords are **case sensitive**.There are **36** keywords in Python 3.9. This number can vary slightly over the course of time.All the keywords except **`True`**, **`False`** and **`None`** are in lowercase and they must be written as they are. The **[list of all the keywords](https://github.com/milaan9/01_Python_Introduction/blob/main/Python_Keywords_List.ipynb)** is given below.**Keywords in Python**| | | | ||:----|:----|:----|:----|| **`False`** | **`break`** | **`for`** | **`not`** || **`None`** | **`class`** | **`from`** | **`or`** || **`True`** | **`continue`** | **`global`** | **`pass`** || **`__peg_parser__`** |**`def`** | **`if`** | **`raise`** || **`and`** | **`del`** | **`import`** | **`return`** || **`as`** | **`elif`** | **`in`** | **`try`** || **`assert`** | **`else`** | **`is`** | **`while`** || **`async`** | **`except`** | **`lambda`** | **`with`** || **`await`** | **`finally`** | **`nonlocal`** | **`yield`** |You can see this list any time by typing **`help("keywords")`** in the Python interpreter. Trying to create a variable with the same name as any reserved word results in an **error**:```python>>>for = 6File "", line 1for = 6 # It will give an error because "for" is a keyword and we cannot use it as a variable name. ^SyntaxError: invalid syntax```
###Code
for = 6 # It will give an error because "for" is a keyword and we cannot use it as a variable name.
For = 6 # "for" is a keyword but "For" is not, so we can use it as a variable name
For
###Output
_____no_output_____
###Markdown
2. Python IdentifiersAn **identifier** is a name given to entities like **classes, functions, variables, etc**. It helps to differentiate one entity from another. Rules for writing identifiers1. **Identifiers** can be a combination of letters in lowercase **(a to z)** or uppercase **(A to Z)** or digits **(0 to 9)** or an underscore **`_`**. Names like **`myClass`**, **`var_1`** and **`print_this_to_screen`** are all valid examples. 2. An identifier cannot start with a digit. **`1variable`** is invalid, but **`variable1`** is perfectly fine. 3. Keywords cannot be used as identifiers```python>>>global = 3File "", line 1 global = 3 # because "global" is a keyword ^SyntaxError: invalid syntax```
###Code
global = 3 # because "global" is a keyword
###Output
_____no_output_____
###Markdown
4. We cannot use special symbols like **!**, **@**, **#**, **$**, **%**, etc. in our identifier.```python>>>m@ = 3File "", line 1 m@ = 3 ^SyntaxError: invalid syntax```
###Code
m@ = 3
###Output
_____no_output_____
###Markdown
Things to RememberPython is a case-sensitive language. This means, **`Variable`** and **`variable`** are not the same.Always give the identifiers a name that makes sense. While **`c = 10`** is a valid name, writing **`count = 10`** would make more sense, and it would be easier to figure out what it represents when you look at your code after a long gap.Multiple words can be separated using an underscore, like **`this_is_a_long_variable`**.
###Code
this_is_a_long_variable = 6+3
this_is_a_long_variable
add_6_and_3 = 6+3
add_6_and_3
###Output
_____no_output_____
###Markdown
All the IPython Notebooks in this lecture series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/01_Python_Introduction)** Python Keywords and IdentifiersIn this class, you will learn about keywords (reserved words in Python) and identifiers (names given to variables, functions, etc.). 1. Python KeywordsKeywords are the reserved words in Python.We cannot use a keyword as a **[variable](https://github.com/milaan9/01_Python_Introduction/blob/main/009_Python_Data_Types.ipynb)** name, **[function](https://github.com/milaan9/04_Python_Functions/blob/main/001_Python_Functions.ipynb)** name or any other identifier. They are used to define the syntax and structure of the Python language.In Python, keywords are **case sensitive**.There are **35** keywords in Python 3.9. This number can vary slightly over the course of time.All the keywords except **`True`**, **`False`** and **`None`** are in lowercase and they must be written as they are. The **[list of all the keywords](https://github.com/milaan9/01_Python_Introduction/blob/main/Python_Keywords_List.ipynb)** is given below.**Keywords in Python**| | | | | ||:----|:----|:----|:----|:----|| **`False`** | **`await`** | **`else`** | **`import`** | **`pass`** || **`None`** | **`break`** | **`except`** | **`in`** | **`raise`** || **`True`** | **`class`** | **`finally`** | **`is`** | **`return`** || **`and`** | **`continue`** | **`for`** | **`lambda`** | **`try`** || **`as`** | **`def`** | **`from`** | **`nonlocal`** | **`while`** || **`assert`** | **`del`** | **`global`** | **`not`** | **`with`** || **`async`** | **`elif`** | **`if`** | **`or`** | **`yield`** |You can see this list any time by typing **`help("keywords")`** in the Python interpreter. Trying to create a variable with the same name as any reserved word results in an **error**:```python>>>for = 6File "", line 1for = 6 # It will give an error because "for" is a keyword and we cannot use it as a variable name. ^SyntaxError: invalid syntax```
###Code
for = 6 # It will give an error because "for" is a keyword and we cannot use it as a variable name.
For = 6 # "for" is a keyword but "For" is not, so we can use it as a variable name
For
###Output
_____no_output_____
###Markdown
2. Python IdentifiersAn **identifier** is a name given to entities like **classes, functions, variables, etc**. It helps to differentiate one entity from another. Rules for writing identifiers1. **Identifiers** can be a combination of letters in lowercase **(a to z)** or uppercase **(A to Z)** or digits **(0 to 9)** or an underscore **`_`**. Names like **`myClass`**, **`var_1`** and **`print_this_to_screen`** are all valid examples. 2. An identifier cannot start with a digit. **`1variable`** is invalid, but **`variable1`** is perfectly fine. 3. Keywords cannot be used as identifiers```python>>>global = 3File "", line 1 global = 3 # because "global" is a keyword ^SyntaxError: invalid syntax```
###Code
global = 3 # because "global" is a keyword
###Output
_____no_output_____
###Markdown
4. We cannot use special symbols like **!**, **@**, **#**, **$**, **%**, etc. in our identifier.```python>>>m@ = 3File "", line 1 m@ = 3 ^SyntaxError: invalid syntax```
###Code
m@ = 3
###Output
_____no_output_____
###Markdown
Things to RememberPython is a case-sensitive language. This means, **`Variable`** and **`variable`** are not the same.Always give the identifiers a name that makes sense. While **`c = 10`** is a valid name, writing **`count = 10`** would make more sense, and it would be easier to figure out what it represents when you look at your code after a long gap.Multiple words can be separated using an underscore, like **`this_is_a_long_variable`**.
###Code
this_is_a_long_variable = 6+3
this_is_a_long_variable
add_6_and_3 = 6+3
add_6_and_3
###Output
_____no_output_____ |
04_Gradients_Color_Spaces/4_3_Apply_Sobel_Filter.ipynb | ###Markdown
Applying SobelHere's your chance to write a function that will be useful for the Advanced Lane-Finding Project at the end of this lesson! Your goal in this exercise is to identify pixels where the gradient of an image falls within a specified threshold range. Example Pass in **img** and set the parameter **orient** as 'x' or 'y' to take either the x or y gradient. Set **thresh_min** and **thresh_max** to specify the range to select for the **binary output**. You can use exclusive (`<`, `>`) or inclusive (`<=`, `>=`) thresholding.**NOTE**: Your output should be an array of the same size as the input image. The output array elements should be 1 where gradients were in the threshold range, and 0 everywhere else.
###Code
import numpy as np
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import pickle
# Read in an image and grayscale it
image = mpimg.imread('img/signs_vehicles_xygrad.png')
# Define a function that applies Sobel x or y,
# then takes an absolute value and applies a threshold.
# Note: calling your function with orient='x', thresh_min=5, thresh_max=100
# should produce output like the example image shown above this quiz.
def abs_sobel_thresh(img, orient='x', thresh_min=0, thresh_max=255):
# Apply the following steps to img
# 1) Convert to grayscale
# 2) Take the derivative in x or y given orient = 'x' or 'y'
# 3) Take the absolute value of the derivative or gradient
# 4) Scale to 8-bit (0 - 255) then convert to type = np.uint8
# 5) Create a mask of 1's where the scaled gradient magnitude
# is > thresh_min and < thresh_max
# 6) Return this mask as your binary_output image
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Apply x or y gradient with the OpenCV Sobel() function
# and take the absolute value
if orient == 'x':
abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
if orient == 'y':
abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 0, 1))
# Rescale back to 8 bit integer
scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))
# Create a copy and apply the threshold
binary_output = np.zeros_like(scaled_sobel)
# Here I'm using inclusive (>=, <=) thresholds, but exclusive is ok too
binary_output[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
#binary_output = np.copy(img) # Remove this line
return binary_output
# Run the function
grad_binary = abs_sobel_thresh(image, orient='x', thresh_min=20, thresh_max=100)
# Plot the result
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
f.tight_layout()
ax1.imshow(image)
ax1.set_title('Original Image', fontsize=50)
ax2.imshow(grad_binary, cmap='gray')
ax2.set_title('Thresholded Gradient', fontsize=50)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
plt.show()
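# For comparison (illustrative only), the same helper can threshold the y-gradient too
grad_binary_y = abs_sobel_thresh(image, orient='y', thresh_min=20, thresh_max=100)
plt.figure(figsize=(12, 9))
plt.imshow(grad_binary_y, cmap='gray')
plt.title('Thresholded y-Gradient')
plt.show()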
###Output
_____no_output_____ |
site/en/tutorials/structured_data/preprocessing_layers.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classify structured data using Keras Preprocessing Layers View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). You will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) as a bridge to map from columns in a CSV to features used to train the model. This tutorial contains complete code to:* Load a CSV file using [Pandas](https://pandas.pydata.org/).* Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets).* Map from columns in the CSV to features used to train the model using Keras Preprocessing layers.* Build, train, and evaluate a model using Keras. Note: This tutorial is similar to [Classify structured data with feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). This version uses new experimental Keras [Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) instead of `tf.feature_column`. Keras Preprocessing Layers are more intuitive, and can be easily included inside your model to simplify deployment. The DatasetYou will use a simplified version of the PetFinder [dataset](https://www.kaggle.com/c/petfinder-adoption-prediction). There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict if the pet will be adopted.Following is a description of this dataset. Notice there are both numeric and categorical columns. There is a free text column which you will not use in this tutorial.Column | Description| Feature Type | Data Type------------|--------------------|----------------------|-----------------Type | Type of animal (Dog, Cat) | Categorical | stringAge | Age of the pet | Numerical | integerBreed1 | Primary breed of the pet | Categorical | stringColor1 | Color 1 of pet | Categorical | stringColor2 | Color 2 of pet | Categorical | stringMaturitySize | Size at maturity | Categorical | stringFurLength | Fur length | Categorical | stringVaccinated | Pet has been vaccinated | Categorical | stringSterilized | Pet has been sterilized | Categorical | stringHealth | Health Condition | Categorical | stringFee | Adoption Fee | Numerical | integerDescription | Profile write-up for this pet | Text | stringPhotoAmt | Total uploaded photos for this pet | Numerical | integerAdoptionSpeed | Speed of adoption | Classification | integer Import TensorFlow and other libraries
###Code
!pip install -q sklearn
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
###Output
_____no_output_____
###Markdown
Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
import pathlib
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)
dataframe.head()
###Output
_____no_output_____
###Markdown
Create target variableThe task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform this into a binary classification problem, and simply predict whether the pet was adopted, or not.After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
###Code
# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)
# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
###Output
_____no_output_____
###Markdown
Split the dataframe into train, validation, and testThe dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
_____no_output_____
###Markdown
Create an input pipeline using tf.dataNext, you will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets), in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
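As a side note, here is a minimal sketch (not used in the rest of this tutorial) of how reading the raw CSV straight from disk with `tf.data` could look, using `tf.data.experimental.make_csv_dataset` on the `csv_file` path defined earlier (the raw file still contains the original `AdoptionSpeed` column rather than the binary `target`):
```python
# Illustrative only: stream the raw CSV from disk instead of loading it with Pandas.
raw_csv_ds = tf.data.experimental.make_csv_dataset(
    csv_file,                    # path defined earlier in this notebook
    batch_size=32,
    label_name='AdoptionSpeed',  # label column present in the raw file
    num_epochs=1,
    shuffle=True)
```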
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
ds = ds.prefetch(batch_size)
return ds
###Output
_____no_output_____
###Markdown
Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
###Code
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
###Output
_____no_output_____
###Markdown
You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe. Demonstrate the use of preprocessing layers.The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use 4 preprocessing layers to demonstrate the feature preprocessing code.* [`Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) - Feature-wise normalization of the data.* [`CategoryEncoding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/CategoryEncoding) - Category encoding layer.* [`StringLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/StringLookup) - Maps strings from a vocabulary to integer indices.* [`IntegerLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/IntegerLookup) - Maps integers from a vocabulary to integer indices.You can find a list of available preprocessing layers [here](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing). Numeric columnsFor each of the numeric features, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1. The `get_normalization_layer` function returns a layer which applies feature-wise normalization to numerical features.
###Code
def get_normalization_layer(name, dataset):
# Create a Normalization layer for our feature.
normalizer = preprocessing.Normalization()
# Prepare a Dataset that only yields our feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
###Output
_____no_output_____
###Markdown
Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single [normalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer (a short sketch of this follows below). Categorical columnsIn this dataset, Type is represented as a string (e.g. 'Dog', or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as a one-hot vector. The `get_category_encoding_layer` function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features.
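Regarding the note above on many numeric features, a minimal sketch (with made-up data, not used in the rest of this tutorial) of stacking several numeric columns and normalizing them with a single layer:
```python
import numpy as np

# Illustrative only: one shared Normalization layer over several numeric columns.
numeric_demo = np.array([[2.0, 100.0],
                         [5.0, 0.0],
                         [1.0, 50.0]], dtype='float32')  # e.g. [PhotoAmt, Fee]
shared_normalizer = preprocessing.Normalization()
shared_normalizer.adapt(numeric_demo)   # learn per-column mean and variance once
print(shared_normalizer(numeric_demo))  # both columns standardized by the same layer
```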
###Code
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
# Create a StringLookup layer which will turn strings into integer indices
if dtype == 'string':
index = preprocessing.StringLookup(max_tokens=max_tokens)
else:
index = preprocessing.IntegerLookup(max_values=max_tokens)
# Prepare a Dataset that only yields our feature
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
# Create a CategoryEncoding layer for our integer indices.
encoder = preprocessing.CategoryEncoding(max_tokens=index.vocab_size())
# Apply one-hot encoding to our indices. The lambda function captures the
# layer so we can use them, or include them in the functional model later.
return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col)
###Output
_____no_output_____
###Markdown
Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
###Code
type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
'int64', 5)
category_encoding_layer(type_col)
###Output
_____no_output_____
###Markdown
Choose which columns to useYou have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using [Keras-functional API](https://www.tensorflow.org/guide/keras/functional) to build the model. The Keras functional API is a way to create models that are more flexible than the [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) API.The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train our model.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented. Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.
###Code
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []
# Numeric features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model Now you can create the end-to-end model.
###Code
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's visualize our connectivity graph:
###Code
# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
###Output
_____no_output_____
###Markdown
Train the model
###Code
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
_____no_output_____
###Markdown
Inference on new dataKey point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself. You can now save and reload the Keras model. Follow the tutorial [here](https://www.tensorflow.org/tutorials/keras/save_and_load) for more information on TensorFlow models.
###Code
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
###Output
_____no_output_____
###Markdown
To get a prediction for a new sample, you can simply call `model.predict()`. There are just two things you need to do:1. Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)2. Call `convert_to_tensor` on each feature
###Code
sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classify structured data using Keras preprocessing layers View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to classify structured data, such as tabular data, using a simplified version of the PetFinder dataset from a Kaggle competition stored in a CSV file.You will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [Keras preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) as a bridge to map from columns in a CSV file to features used to train the model. The goal is to predict if a pet will be adopted.This tutorial contains complete code for:* Loading a CSV file into a DataFrame using pandas.* Building an input pipeline to batch and shuffle the rows using `tf.data`. (Visit [tf.data: Build TensorFlow input pipelines](../../guide/data.ipynb) for more details.)* Mapping from columns in the CSV file to features used to train the model with the Keras preprocessing layers.* Building, training, and evaluating a model using the Keras built-in methods. Note: This tutorial is similar to [Classify structured data with feature columns](../structured_data/feature_columns.ipynb). This version uses the [Keras preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) instead of the `tf.feature_column` API, as the former are more intuitive and can be easily included inside your model to simplify deployment. The PetFinder.my mini datasetThere are several thousand rows in the PetFinder.my mini's CSV dataset file, where each row describes a pet (a dog or a cat) and each column describes an attribute, such as age, breed, color, and so on.In the dataset's summary below, notice there are mostly numerical and categorical columns. In this tutorial, you will only be dealing with those two feature types, dropping `Description` (a free text feature) and `AdoptionSpeed` (a classification feature) during data preprocessing.| Column | Pet description | Feature type | Data type || --------------- | ----------------------------- | -------------- | --------- || `Type` | Type of animal (`Dog`, `Cat`) | Categorical | String || `Age` | Age | Numerical | Integer || `Breed1` | Primary breed | Categorical | String || `Color1` | Color 1 | Categorical | String || `Color2` | Color 2 | Categorical | String || `MaturitySize` | Size at maturity | Categorical | String || `FurLength` | Fur length | Categorical | String || `Vaccinated` | Pet has been vaccinated | Categorical | String || `Sterilized` | Pet has been sterilized | Categorical | String || `Health` | Health condition | Categorical | String || `Fee` | Adoption fee | Numerical | Integer || `Description` | Profile write-up | Text | String || `PhotoAmt` | Total uploaded photos | Numerical | Integer || `AdoptionSpeed` | Categorical speed of adoption | Classification | Integer | Import TensorFlow and other libraries
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
tf.__version__
###Output
_____no_output_____
###Markdown
Load the dataset and read it into a pandas DataFramepandas is a Python library with many helpful utilities for loading and working with structured data. Use `tf.keras.utils.get_file` to download and extract the CSV file with the PetFinder.my mini dataset, and load it into a DataFrame with `pandas.read_csv`:
###Code
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)
###Output
_____no_output_____
###Markdown
Inspect the dataset by checking the first five rows of the DataFrame:
###Code
dataframe.head()
###Output
_____no_output_____
###Markdown
Create a target variableThe original task in Kaggle's PetFinder.my Adoption Prediction competition was to predict the speed at which a pet will be adopted (e.g. in the first week, the first month, the first three months, and so on).In this tutorial, you will simplify the task by transforming it into a binary classification problem, where you simply have to predict whether a pet was adopted or not.After modifying the `AdoptionSpeed` column, `0` will indicate the pet was not adopted, and `1` will indicate it was.
###Code
# In the original dataset, `'AdoptionSpeed'` of `4` indicates
# a pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)
# Drop unused features.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
###Output
_____no_output_____
###Markdown
Split the DataFrame into training, validation, and test setsThe dataset is in a single pandas DataFrame. Split it into training, validation, and test sets using, for example, an 80:10:10 ratio, respectively:
###Code
train, val, test = np.split(dataframe.sample(frac=1), [int(0.8*len(dataframe)), int(0.9*len(dataframe))])
print(len(train), 'training examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
_____no_output_____
###Markdown
Create an input pipeline using tf.dataNext, create a utility function that converts each training, validation, and test set DataFrame into a `tf.data.Dataset`, then shuffles and batches the data.Note: If you were working with a very large CSV file (so large that it does not fit into memory), you would use the `tf.data` API to read it from disk directly. That is not covered in this tutorial.
###Code
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
df = dataframe.copy()
labels = df.pop('target')
df = {key: value[:,tf.newaxis] for key, value in df.items()}  # iterate over the copy (without 'target') so the label is not leaked into the features
ds = tf.data.Dataset.from_tensor_slices((dict(df), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
ds = ds.prefetch(batch_size)
return ds
###Output
_____no_output_____
###Markdown
Now, use the newly created function (`df_to_dataset`) to check the format of the data the input pipeline helper function returns by calling it on the training data, and use a small batch size to keep the output readable:
###Code
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
###Output
_____no_output_____
###Markdown
As the output demonstrates, the training set returns a dictionary of column names (from the DataFrame) that map to column values from rows. Apply the Keras preprocessing layersThe Keras preprocessing layers allow you to build Keras-native input processing pipelines, which can be used as independent preprocessing code in non-Keras workflows, combined directly with Keras models, and exported as part of a Keras SavedModel.In this tutorial, you will use the following four preprocessing layers to demonstrate how to perform preprocessing, structured data encoding, and feature engineering:- `tf.keras.layers.Normalization`: Performs feature-wise normalization of input features.- `tf.keras.layers.CategoryEncoding`: Turns integer categorical features into one-hot, multi-hot, or tf-idf dense representations.- `tf.keras.layers.StringLookup`: Turns string categorical values into integer indices.- `tf.keras.layers.IntegerLookup`: Turns integer categorical values into integer indices.You can learn more about the available layers in the [Working with preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) guide.- For _numerical features_ of the PetFinder.my mini dataset, you will use a `tf.keras.layers.Normalization` layer to standardize the distribution of the data.- For _categorical features_, such as pet `Type`s (`Dog` and `Cat` strings), you will transform them to multi-hot encoded tensors with `tf.keras.layers.CategoryEncoding`. Numerical columnsFor each numeric feature in the PetFinder.my mini dataset, you will use a `tf.keras.layers.Normalization` layer to standardize the distribution of the data.Define a new utility function that returns a layer which applies feature-wise normalization to numerical features using that Keras preprocessing layer:
###Code
def get_normalization_layer(name, dataset):
# Create a Normalization layer for the feature.
normalizer = layers.Normalization(axis=None)
# Prepare a Dataset that only yields the feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
###Output
_____no_output_____
###Markdown
Next, test the new function by calling it on the total uploaded pet photo features to normalize `'PhotoAmt'`:
###Code
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
###Output
_____no_output_____
###Markdown
Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single `tf.keras.layers.Normalization` layer. Categorical columnsPet `Type`s in the dataset are represented as strings—`Dog`s and `Cat`s—which need to be multi-hot encoded before being fed into the model. The integer-valued `Age` feature is handled in the same way. Define another new utility function that returns a layer which maps values from a vocabulary to integer indices and multi-hot encodes the features using the `tf.keras.layers.StringLookup`, `tf.keras.layers.IntegerLookup`, and `tf.keras.layers.CategoryEncoding` preprocessing layers:
###Code
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
# Create a layer that turns strings into integer indices.
if dtype == 'string':
index = layers.StringLookup(max_tokens=max_tokens)
# Otherwise, create a layer that turns integer values into integer indices.
else:
index = layers.IntegerLookup(max_tokens=max_tokens)
# Prepare a `tf.data.Dataset` that only yields the feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
# Encode the integer indices.
encoder = layers.CategoryEncoding(num_tokens=index.vocabulary_size())
# Apply multi-hot encoding to the indices. The lambda function captures the
# layer, so you can use them, or include them in the Keras Functional model later.
return lambda feature: encoder(index(feature))
###Output
_____no_output_____
###Markdown
Test the `get_category_encoding_layer` function by calling it on pet `'Type'` features to turn them into multi-hot encoded tensors:
###Code
test_type_col = train_features['Type']
test_type_layer = get_category_encoding_layer(name='Type',
dataset=train_ds,
dtype='string')
test_type_layer(test_type_col)
###Output
_____no_output_____
###Markdown
Repeat the process on the pet `'Age'` features:
###Code
test_age_col = train_features['Age']
test_age_layer = get_category_encoding_layer(name='Age',
dataset=train_ds,
dtype='int64',
max_tokens=5)
test_age_layer(test_age_col)
###Output
_____no_output_____
###Markdown
Preprocess selected features to train the model onYou have learned how to use several types of Keras preprocessing layers. Next, you will:- Apply the preprocessing utility functions defined earlier on 13 numerical and categorical features from the PetFinder.my mini dataset.- Add all the feature inputs to a list.As mentioned in the beginning, to train the model, you will use the PetFinder.my mini dataset's numerical (`'PhotoAmt'`, `'Fee'`) and categorical (`'Age'`, `'Type'`, `'Color1'`, `'Color2'`, `'Gender'`, `'MaturitySize'`, `'FurLength'`, `'Vaccinated'`, `'Sterilized'`, `'Health'`, `'Breed1'`) features.Note: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented. Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size of 256:
###Code
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Normalize the numerical features (the number of pet photos and the adoption fee), and add them to one list of inputs called `encoded_features`:
###Code
all_inputs = []
encoded_features = []
# Numerical features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
###Output
_____no_output_____
###Markdown
Turn the integer categorical values from the dataset (the pet age) into integer indices, perform multi-hot encoding, and add the resulting feature inputs to `encoded_features`:
###Code
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer(name='Age',
dataset=train_ds,
dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
###Output
_____no_output_____
###Markdown
Repeat the same step for the string categorical values:
###Code
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(name=header,
dataset=train_ds,
dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model The next step is to create a model using the [Keras Functional API](https://www.tensorflow.org/guide/keras/functional). For the first layer in your model, merge the list of feature inputs—`encoded_features`—into one vector via concatenation with `tf.keras.layers.concatenate`.
###Code
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
###Output
_____no_output_____
###Markdown
Configure the model with Keras `Model.compile`:
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's visualize the connectivity graph:
###Code
# Use `rankdir='LR'` to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
###Output
_____no_output_____
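###Markdown
Aside (not part of the original tutorial): `plot_model` needs the `pydot` and `graphviz` packages to render the graph. If they are unavailable, a plain-text overview of the same model can be printed with `Model.summary`, a minimal sketch of which follows.
###Code
# Print a text summary of the layers and parameter counts instead of a plot.
model.summary()
###Output
_____no_output_____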
###Markdown
Next, train and test the model:
###Code
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
_____no_output_____
###Markdown
Perform inferenceThe model you have developed can now classify a row from a CSV file directly after you've included the preprocessing layers inside the model itself.You can now [save and reload the Keras model](../keras/save_and_load.ipynb) with `Model.save` and `tf.keras.models.load_model` before performing inference on new data:
###Code
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
###Output
_____no_output_____
###Markdown
To get a prediction for a new sample, you can simply call the Keras `Model.predict` method. There are just two things you need to do:1. Wrap scalars into a list so as to have a batch dimension (`Model`s only process batches of data, not single samples).2. Call `tf.convert_to_tensor` on each feature.
###Code
sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
)
###Output
_____no_output_____
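###Markdown
A small follow-up sketch (an illustrative aside, not from the original tutorial): the same reloaded model can score several samples at once, since each input only needs a batch dimension. Here the single `sample` above is simply duplicated to form a batch of two.
###Code
# Sketch: a batch of two copies of `sample`; each tensor gets shape (2, 1),
# where the first dimension is the batch dimension expected by the Keras inputs.
batch_input = {name: tf.convert_to_tensor([[value], [value]])
               for name, value in sample.items()}
batch_logits = reloaded_model.predict(batch_input)
print(tf.nn.sigmoid(batch_logits))  # one adoption probability per sample
###Output
_____no_output_____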
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classify structured data using Keras Preprocessing Layers View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). You will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) as a bridge to map from columns in a CSV to features used to train the model. This tutorial contains complete code to:* Load a CSV file using [Pandas](https://pandas.pydata.org/).* Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets).* Map from columns in the CSV to features used to train the model using Keras Preprocessing layers.* Build, train, and evaluate a model using Keras. Note: This tutorial is similar to [Classify structured data with feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). This version uses new experimental Keras [Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) instead of `tf.feature_column`. Keras Preprocessing Layers are more intuitive, and can be easily included inside your model to simplify deployment. The DatasetYou will use a simplified version of the PetFinder [dataset](https://www.kaggle.com/c/petfinder-adoption-prediction). There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict if the pet will be adopted.Following is a description of this dataset. Notice there are both numeric and categorical columns. There is a free text column which you will not use in this tutorial.Column | Description| Feature Type | Data Type------------|--------------------|----------------------|-----------------Type | Type of animal (Dog, Cat) | Categorical | stringAge | Age of the pet | Numerical | integerBreed1 | Primary breed of the pet | Categorical | stringColor1 | Color 1 of pet | Categorical | stringColor2 | Color 2 of pet | Categorical | stringMaturitySize | Size at maturity | Categorical | stringFurLength | Fur length | Categorical | stringVaccinated | Pet has been vaccinated | Categorical | stringSterilized | Pet has been sterilized | Categorical | stringHealth | Health Condition | Categorical | stringFee | Adoption Fee | Numerical | integerDescription | Profile write-up for this pet | Text | stringPhotoAmt | Total uploaded photos for this pet | Numerical | integerAdoptionSpeed | Speed of adoption | Classification | integer Import TensorFlow and other libraries
###Code
!pip install -q sklearn
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
tf.__version__
###Output
_____no_output_____
###Markdown
Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
import pathlib
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)
dataframe.head()
###Output
_____no_output_____
###Markdown
Create target variableThe task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform this into a binary classification problem, and simply predict whether the pet was adopted, or not.After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
###Code
# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)
# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
###Output
_____no_output_____
###Markdown
Split the dataframe into train, validation, and testThe dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
_____no_output_____
###Markdown
Create an input pipeline using tf.dataNext, you will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets), in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
ds = ds.prefetch(batch_size)
return ds
###Output
_____no_output_____
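###Markdown
Aside on the point above about very large CSV files: a minimal sketch (not used in the rest of this tutorial) of streaming batches straight from the file on disk with `tf.data.experimental.make_csv_dataset`. Since the on-disk file is unmodified, the raw `AdoptionSpeed` column is used as the label here, and the column selection is only for illustration.
###Code
# Sketch only: read the CSV from disk with tf.data instead of going through an
# in-memory dataframe.
disk_ds = tf.data.experimental.make_csv_dataset(
    csv_file,                                    # downloaded above
    batch_size=32,
    label_name='AdoptionSpeed',                  # raw label column in the file
    select_columns=['Age', 'Fee', 'PhotoAmt', 'AdoptionSpeed'],
    num_epochs=1)
for features, labels in disk_ds.take(1):
  print('Streamed features:', list(features.keys()))
  print('First labels:', labels[:5].numpy())
###Output
_____no_output_____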
###Markdown
Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
###Code
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
###Output
_____no_output_____
###Markdown
You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe. Demonstrate the use of preprocessing layers.The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use the following preprocessing layers to demonstrate the feature preprocessing code.* [`Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) - Feature-wise normalization of the data.* [`CategoryEncoding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/CategoryEncoding) - Category encoding layer.* [`StringLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/StringLookup) - Maps strings from a vocabulary to integer indices.* [`IntegerLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/IntegerLookup) - Maps integers from a vocabulary to integer indices.You can find a list of available preprocessing layers [here](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing). Numeric columnsFor each numeric feature, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1. The `get_normalization_layer` function returns a layer which applies featurewise normalization to numerical features.
###Code
def get_normalization_layer(name, dataset):
# Create a Normalization layer for our feature.
normalizer = preprocessing.Normalization()
# Prepare a Dataset that only yields our feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
###Output
_____no_output_____
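###Markdown
Aside (an illustrative sketch, not part of the original tutorial): as the note below mentions, when a dataset has many numeric columns it can be more efficient to stack them and adapt a single `Normalization` layer instead of one layer per feature. Only the two numeric columns of this dataset are used here for illustration.
###Code
# Sketch: one shared Normalization layer for several numeric columns.
numeric_headers = ['PhotoAmt', 'Fee']
def stack_numeric(features, label):
  # Stack the selected columns into a single (batch, num_features) tensor.
  return tf.stack([tf.cast(features[h], tf.float32) for h in numeric_headers],
                  axis=-1)
shared_normalizer = preprocessing.Normalization()
shared_normalizer.adapt(train_ds.map(stack_numeric))
# Apply it to the stacked numeric columns of the sample batch from above.
shared_normalizer(tf.stack([tf.cast(train_features[h], tf.float32)
                            for h in numeric_headers], axis=-1))
###Output
_____no_output_____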
###Markdown
Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single [normalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer. Categorical columnsIn this dataset, Type is represented as a string (e.g. 'Dog', or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as a one-hot vector. The `get_category_encoding_layer` function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features.
###Code
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
  # Create a lookup layer that will turn the raw values into integer indices.
if dtype == 'string':
index = preprocessing.StringLookup(max_tokens=max_tokens)
else:
index = preprocessing.IntegerLookup(max_tokens=max_tokens)
# Prepare a Dataset that only yields our feature
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
  # Encode the integer indices with a CategoryEncoding layer.
  encoder = preprocessing.CategoryEncoding(num_tokens=index.vocabulary_size())
  # Apply one-hot encoding to the indices. The lambda function captures both
  # layers so they can be reused, or included in the functional model, later.
return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col)
###Output
_____no_output_____
###Markdown
Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
###Code
type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
'int64', 5)
category_encoding_layer(type_col)
###Output
_____no_output_____
###Markdown
Choose which columns to useYou have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using [Keras-functional API](https://www.tensorflow.org/guide/keras/functional) to build the model. The Keras functional API is a way to create models that are more flexible than the [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) API.The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train our model.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented. Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.
###Code
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []
# Numeric features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model Now you can create your end-to-end model.
###Code
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's visualize our connectivity graph:
###Code
# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
###Output
_____no_output_____
###Markdown
Train the model
###Code
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
_____no_output_____
###Markdown
Inference on new dataKey point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself. You can now save and reload the Keras model. Follow the tutorial [here](https://www.tensorflow.org/tutorials/keras/save_and_load) for more information on TensorFlow models.
###Code
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
###Output
_____no_output_____
###Markdown
To get a prediction for a new sample, you can simply call `model.predict()`. There are just two things you need to do:1. Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)2. Call `convert_to_tensor` on each feature
###Code
sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classify structured data using Keras Preprocessing Layers View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). You will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) as a bridge to map from columns in a CSV to features used to train the model. This tutorial contains complete code to:* Load a CSV file using [Pandas](https://pandas.pydata.org/).* Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets).* Map from columns in the CSV to features used to train the model using Keras Preprocessing layers.* Build, train, and evaluate a model using Keras. Note: This tutorial is similar to [Classify structured data with feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). This version uses new experimental Keras [Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) instead of `tf.feature_column`. Keras Preprocessing Layers are more intuitive, and can be easily included inside your model to simplify deployment. The DatasetYou will use a simplified version of the PetFinder [dataset](https://www.kaggle.com/c/petfinder-adoption-prediction). There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict if the pet will be adopted.Following is a description of this dataset. Notice there are both numeric and categorical columns. There is a free text column which you will not use in this tutorial.Column | Description| Feature Type | Data Type------------|--------------------|----------------------|-----------------Type | Type of animal (Dog, Cat) | Categorical | stringAge | Age of the pet | Numerical | integerBreed1 | Primary breed of the pet | Categorical | stringColor1 | Color 1 of pet | Categorical | stringColor2 | Color 2 of pet | Categorical | stringMaturitySize | Size at maturity | Categorical | stringFurLength | Fur length | Categorical | stringVaccinated | Pet has been vaccinated | Categorical | stringSterilized | Pet has been sterilized | Categorical | stringHealth | Health Condition | Categorical | stringFee | Adoption Fee | Numerical | integerDescription | Profile write-up for this pet | Text | stringPhotoAmt | Total uploaded photos for this pet | Numerical | integerAdoptionSpeed | Speed of adoption | Classification | integer Import TensorFlow and other libraries
###Code
!pip install -q sklearn
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
tf.__version__
###Output
_____no_output_____
###Markdown
Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
import pathlib
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)
dataframe.head()
###Output
_____no_output_____
###Markdown
Create target variableThe task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform this into a binary classification problem, and simply predict whether the pet was adopted, or not.After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
###Code
# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)
# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
###Output
_____no_output_____
###Markdown
Split the dataframe into train, validation, and testThe dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
_____no_output_____
###Markdown
Create an input pipeline using tf.dataNext, you will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets), in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
ds = ds.prefetch(batch_size)
return ds
###Output
_____no_output_____
###Markdown
Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
###Code
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
###Output
_____no_output_____
###Markdown
You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe. Demonstrate the use of preprocessing layers.The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use the following preprocessing layers to demonstrate the feature preprocessing code.* [`Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) - Feature-wise normalization of the data.* [`CategoryEncoding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/CategoryEncoding) - Category encoding layer.* [`StringLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/StringLookup) - Maps strings from a vocabulary to integer indices.* [`IntegerLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/IntegerLookup) - Maps integers from a vocabulary to integer indices.You can find a list of available preprocessing layers [here](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing). Numeric columnsFor each numeric feature, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1. The `get_normalization_layer` function returns a layer which applies featurewise normalization to numerical features.
###Code
def get_normalization_layer(name, dataset):
# Create a Normalization layer for our feature.
normalizer = preprocessing.Normalization(axis=None)
# Prepare a Dataset that only yields our feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
###Output
_____no_output_____
###Markdown
Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single [normalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer. Categorical columnsIn this dataset, Type is represented as a string (e.g. 'Dog', or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as a one-hot vector. The `get_category_encoding_layer` function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features.
###Code
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
  # Create a lookup layer that will turn the raw values into integer indices.
if dtype == 'string':
index = preprocessing.StringLookup(max_tokens=max_tokens)
else:
index = preprocessing.IntegerLookup(max_tokens=max_tokens)
# Prepare a Dataset that only yields our feature
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
  # Encode the integer indices with a CategoryEncoding layer.
  encoder = preprocessing.CategoryEncoding(num_tokens=index.vocabulary_size())
  # Apply one-hot encoding to the indices. The lambda function captures both
  # layers so they can be reused, or included in the functional model, later.
return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col)
###Output
_____no_output_____
###Markdown
Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
###Code
type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
'int64', 5)
category_encoding_layer(type_col)
###Output
_____no_output_____
###Markdown
Choose which columns to useYou have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using [Keras-functional API](https://www.tensorflow.org/guide/keras/functional) to build the model. The Keras functional API is a way to create models that are more flexible than the [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) API.The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train our model.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented. Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.
###Code
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []
# Numeric features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model Now you can create your end-to-end model.
###Code
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's visualize our connectivity graph:
###Code
# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
###Output
_____no_output_____
###Markdown
Train the model
###Code
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
_____no_output_____
###Markdown
Inference on new dataKey point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself. You can now save and reload the Keras model. Follow the tutorial [here](https://www.tensorflow.org/tutorials/keras/save_and_load) for more information on TensorFlow models.
###Code
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
###Output
_____no_output_____
###Markdown
To get a prediction for a new sample, you can simply call `model.predict()`. There are just two things you need to do:1. Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)2. Call `convert_to_tensor` on each feature
###Code
sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classify structured data using Keras Preprocessing Layers View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). You will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) as a bridge to map from columns in a CSV to features used to train the model. This tutorial contains complete code to:* Load a CSV file using [Pandas](https://pandas.pydata.org/).* Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets).* Map from columns in the CSV to features used to train the model using Keras Preprocessing layers.* Build, train, and evaluate a model using Keras. Note: This tutorial is similar to [Classify structured data with feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). This version uses new experimental Keras [Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) instead of `tf.feature_column`. Keras Preprocessing Layers are more intuitive, and can be easily included inside your model to simplify deployment. The DatasetYou will use a simplified version of the PetFinder [dataset](https://www.kaggle.com/c/petfinder-adoption-prediction). There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict if the pet will be adopted.Following is a description of this dataset. Notice there are both numeric and categorical columns. There is a free text column which you will not use in this tutorial.Column | Description| Feature Type | Data Type------------|--------------------|----------------------|-----------------Type | Type of animal (Dog, Cat) | Categorical | stringAge | Age of the pet | Numerical | integerBreed1 | Primary breed of the pet | Categorical | stringColor1 | Color 1 of pet | Categorical | stringColor2 | Color 2 of pet | Categorical | stringMaturitySize | Size at maturity | Categorical | stringFurLength | Fur length | Categorical | stringVaccinated | Pet has been vaccinated | Categorical | stringSterilized | Pet has been sterilized | Categorical | stringHealth | Health Condition | Categorical | stringFee | Adoption Fee | Numerical | integerDescription | Profile write-up for this pet | Text | stringPhotoAmt | Total uploaded photos for this pet | Numerical | integerAdoptionSpeed | Speed of adoption | Classification | integer Import TensorFlow and other libraries
###Code
!pip install -q sklearn
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
###Output
_____no_output_____
###Markdown
Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
import pathlib
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)
dataframe.head()
###Output
_____no_output_____
###Markdown
Create target variableThe task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform this into a binary classification problem, and simply predict whether the pet was adopted, or not.After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
###Code
# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)
# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
###Output
_____no_output_____
###Markdown
Split the dataframe into train, validation, and testThe dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
_____no_output_____
###Markdown
Create an input pipeline using tf.dataNext, you will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets), in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
ds = ds.prefetch(batch_size)
return ds
###Output
_____no_output_____
###Markdown
Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
###Code
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
###Output
_____no_output_____
###Markdown
You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe. Demonstrate the use of preprocessing layers.The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use the following preprocessing layers to demonstrate the feature preprocessing code.* [`Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) - Feature-wise normalization of the data.* [`CategoryEncoding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/CategoryEncoding) - Category encoding layer.* [`StringLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/StringLookup) - Maps strings from a vocabulary to integer indices.* [`IntegerLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/IntegerLookup) - Maps integers from a vocabulary to integer indices.You can find a list of available preprocessing layers [here](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing). Numeric columnsFor each numeric feature, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1. The `get_normalization_layer` function returns a layer which applies featurewise normalization to numerical features.
###Code
def get_normalization_layer(name, dataset):
# Create a Normalization layer for our feature.
normalizer = preprocessing.Normalization()
# Prepare a Dataset that only yields our feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
###Output
_____no_output_____
###Markdown
Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single [normalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer. Categorical columnsIn this dataset, Type is represented as a string (e.g. 'Dog', or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as a one-hot vector. The `get_category_encoding_layer` function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features.
###Code
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
  # Create a lookup layer that will turn the raw values into integer indices.
if dtype == 'string':
index = preprocessing.StringLookup(max_tokens=max_tokens)
else:
index = preprocessing.IntegerLookup(max_values=max_tokens)
# Prepare a Dataset that only yields our feature
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
  # Encode the integer indices with a CategoryEncoding layer.
  encoder = preprocessing.CategoryEncoding(max_tokens=index.vocab_size())
  # Convert the feature Dataset to integer indices.
  feature_ds = feature_ds.map(index)
  # Learn the space of possible indices.
  encoder.adapt(feature_ds)
  # Apply one-hot encoding to the indices. The lambda function captures both
  # layers so they can be reused, or included in the functional model, later.
return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col)
###Output
_____no_output_____
###Markdown
Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
###Code
type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
'int64', 5)
category_encoding_layer(type_col)
###Output
_____no_output_____
###Markdown
Choose which columns to useYou have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using [Keras-functional API](https://www.tensorflow.org/guide/keras/functional) to build the model. The Keras functional API is a way to create models that are more flexible than the [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) API.The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train our model.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented. Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.
###Code
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []
# Numeric features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model Now you can create your end-to-end model.
###Code
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's visualize our connectivity graph:
###Code
# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
###Output
_____no_output_____
###Markdown
Train the model
###Code
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
_____no_output_____
###Markdown
Inference on new dataKey point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself. You can now save and reload the Keras model. Follow the tutorial [here](https://www.tensorflow.org/tutorials/keras/save_and_load) for more information on TensorFlow models.
###Code
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
###Output
_____no_output_____
###Markdown
To get a prediction for a new sample, you can simply call `model.predict()`. There are just two things you need to do:1. Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)2. Call `convert_to_tensor` on each feature
###Code
sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classify structured data using Keras Preprocessing Layers View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). You will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [preprocessing layers](https://keras.io/guides/preprocessing_layers/) as a bridge to map from columns in a CSV to features used to train the model. This tutorial contains complete code to:* Load a CSV file using [Pandas](https://pandas.pydata.org/).* Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets).* Map from columns in the CSV to features used to train the model using Keras Preprocessing layers.* Build, train, and evaluate a model using Keras. Note: This tutorial is similar to [Classify structured data with feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). This version uses new experimental Keras [Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) instead of `tf.feature_column`. Keras Preprocessing Layers are more intuitive, and can be easily included inside your model to simplify deployment. The DatasetYou will use a simplified version of the PetFinder [dataset](https://www.kaggle.com/c/petfinder-adoption-prediction). There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict if the pet will be adopted.Following is a description of this dataset. Notice there are both numeric and categorical columns. There is a free text column which you will not use in this tutorial.Column | Description| Feature Type | Data Type------------|--------------------|----------------------|-----------------Type | Type of animal (Dog, Cat) | Categorical | stringAge | Age of the pet | Numerical | integerBreed1 | Primary breed of the pet | Categorical | stringColor1 | Color 1 of pet | Categorical | stringColor2 | Color 2 of pet | Categorical | stringMaturitySize | Size at maturity | Categorical | stringFurLength | Fur length | Categorical | stringVaccinated | Pet has been vaccinated | Categorical | stringSterilized | Pet has been sterilized | Categorical | stringHealth | Health Condition | Categorical | stringFee | Adoption Fee | Numerical | integerDescription | Profile write-up for this pet | Text | stringPhotoAmt | Total uploaded photos for this pet | Numerical | integerAdoptionSpeed | Speed of adoption | Classification | integer Import TensorFlow and other libraries
###Code
!pip install -q sklearn
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
###Output
_____no_output_____
###Markdown
Use Pandas to create a dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
import pathlib
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)
dataframe.head()
###Output
_____no_output_____
###Markdown
Create target variableThe task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform this into a binary classification problem, and simply predict whether the pet was adopted, or not.After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
###Code
# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)
# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
###Output
_____no_output_____
###Markdown
Split the dataframe into train, validation, and testThe dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
_____no_output_____
###Markdown
Create an input pipeline using tf.dataNext, you will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets), in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
ds = ds.prefetch(batch_size)
return ds
###Output
_____no_output_____
###Markdown
Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
###Code
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
###Output
_____no_output_____
###Markdown
You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe. Demonstrate the use of preprocessing layers.The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use the following preprocessing layers to demonstrate the feature preprocessing code.* [`Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) - Feature-wise normalization of the data.* [`CategoryEncoding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/CategoryEncoding) - Category encoding layer.* [`StringLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/StringLookup) - Maps strings from a vocabulary to integer indices.* [`IntegerLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/IntegerLookup) - Maps integers from a vocabulary to integer indices.You can find a list of available preprocessing layers [here](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing). Numeric columnsFor each numeric feature, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1. The `get_normalization_layer` function returns a layer which applies featurewise normalization to numerical features.
###Code
def get_normalization_layer(name, dataset):
# Create a Normalization layer for our feature.
normalizer = preprocessing.Normalization()
# Prepare a Dataset that only yields our feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
###Output
_____no_output_____
###Markdown
Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single [normalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer. Categorical columnsIn this dataset, Type is represented as a string (e.g. 'Dog', or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as a one-hot vector. The `get_category_encoding_layer` function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features.
###Code
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
# Create a StringLookup layer which will turn strings into integer indices
if dtype == 'string':
index = preprocessing.StringLookup(max_tokens=max_tokens)
else:
index = preprocessing.IntegerLookup(max_values=max_tokens)
# Prepare a Dataset that only yields our feature
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
  # Create a CategoryEncoding layer for our integer indices.
encoder = preprocessing.CategoryEncoding(max_tokens=index.vocab_size())
# Prepare a Dataset that only yields our feature.
feature_ds = feature_ds.map(index)
# Learn the space of possible indices.
encoder.adapt(feature_ds)
# Apply one-hot encoding to our indices. The lambda function captures the
# layer so we can use them, or include them in the functional model later.
return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col)
###Output
_____no_output_____
###Markdown
Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
###Code
type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
'int64', 5)
category_encoding_layer(type_col)
###Output
_____no_output_____
###Markdown
Choose which columns to useYou have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using [Keras-functional API](https://www.tensorflow.org/guide/keras/functional) to build the model. The Keras functional API is a way to create models that are more flexible than the [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) API.The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train our model.Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented. Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.
###Code
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []
# Numeric features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model Now you can create your end-to-end model.
###Code
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's visualize our connectivity graph:
###Code
# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
###Output
_____no_output_____
###Markdown
Train the model
###Code
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
_____no_output_____
###Markdown
Inference on new dataKey point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself. You can now save and reload the Keras model. Follow the tutorial [here](https://www.tensorflow.org/tutorials/keras/save_and_load) for more information on TensorFlow models.
###Code
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
###Output
_____no_output_____
###Markdown
To get a prediction for a new sample, you can simply call `model.predict()`. There are just two things you need to do:1. Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)2. Call `convert_to_tensor` on each feature
###Code
sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
)
###Output
_____no_output_____ |
notebooks/plots/bar.ipynb | ###Markdown
Documentation by example for `shap.plots.bar`This notebook is designed to demonstrate (and so document) how to use the `shap.plots.bar` function. It uses an XGBoost model trained on the classic UCI adult income dataset (which is a classification task to predict if people made over 50k in the 90s). Warning! This notebook documents the new SHAP API, and that API is still stabilizing over the coming weeks.
###Code
import xgboost
import shap
# train XGBoost model
X,y = shap.datasets.adult()
model = xgboost.XGBClassifier().fit(X, y)
# compute SHAP values
explainer = shap.Explainer(model, X)
shap_values = explainer(X)
###Output
100%|===================| 32420/32561 [00:58<00:00]
###Markdown
Global bar plotPassing a matrix of SHAP values to the bar plot function creates a global feature importance plot, where the global importance of each feature is taken to be the mean absolute value for that feature over all the given samples.
###Code
shap.plots.bar(shap_values)
###Output
_____no_output_____
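###Markdown
The bar lengths in this global plot are just the mean absolute SHAP value of each feature, so they can be checked directly. The cell below is an illustrative sanity check added for this documentation, not part of the original notebook.
###Code
import numpy as np

# Mean |SHAP| per feature: these values should match the bar lengths above.
mean_abs_shap = np.abs(shap_values.values).mean(axis=0)
for name, val in sorted(zip(X.columns, mean_abs_shap), key=lambda t: -t[1]):
    print(f'{name}: {val:.3f}')
###Output
_____no_output_____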
###Markdown
By default the bar plot only shows a maximum of ten bars, but this can be controlled with the `max_display` parameter:
###Code
shap.plots.bar(shap_values, max_display=12)
###Output
_____no_output_____
###Markdown
Local bar plotPassing a row of SHAP values to the bar plot function creates a local feature importance plot, where the bars are the SHAP values for each feature. Note that the feature values are shown in gray to the left of the feature names.
###Code
shap.plots.bar(shap_values[0])
###Output
_____no_output_____
###Markdown
Using feature clusteringOften features in datasets are partially or fully redundant with each other, where redundant means that a model could use either feature and still get the same accuracy. To find these features, practitioners will often compute correlation matrices among the features, or use some type of clustering method. When working with SHAP we recommend a more direct approach that measures feature redundancy through model loss comparisons. The `shap.utils.hclust` method can do this and build a hierarchical clustering of the features by training XGBoost models to predict the outcome for each pair of input features. For typical tabular datasets this results in much more accurate measures of feature redundancy than you would get from unsupervised methods like correlation.Once we compute such a clustering we can then pass it to the bar plot so we can simultaneously visualize both the feature redundancy structure and the feature importances. Note that by default we don't show all of the clustering structure, but only the parts of the clustering with distance < 0.5. Distance in the clustering is assumed to be scaled roughly between 0 and 1, where a distance of 0 means the features are perfectly redundant and 1 means they are completely independent. In the plot below we see that only relationship and marital status have more than 50% redundancy, so they are the only features grouped in the bar plot:
###Code
clustering = shap.utils.hclust(X, y) # by default this trains (X.shape[1] choose 2) 2-feature XGBoost models
shap.plots.bar(shap_values, clustering=clustering)
###Output
_____no_output_____
###Markdown
If we want to see more of the clustering structure we can adjust the `cluster_threshold` parameter from 0.5 to 0.9. Note that as we increase the threshold we constrain the ordering of the features to follow valid cluster leaf orderings. The bar plot sorts the feature importance values within each cluster and sub-cluster in an attempt to put the most important features at the top.
###Code
shap.plots.bar(shap_values, clustering=clustering, cluster_threshold=0.9)
###Output
_____no_output_____
###Markdown
Note that some explainers use a clustering structure during the explanation process. They do this both to avoid perturbing features in unrealistic ways while explaining a model, and for the sake of computational performance. When you compute SHAP explanations using these methods, they come with a clustering included in the Explanation object. When the bar plot finds such a clustering, it uses it without you needing to explicitly pass it through the `clustering` parameter:
###Code
# only model agnostic methods support shap.maskers.Partition right now so we wrap our model as a function
def f(x):
return model.predict(x, output_margin=True)
# define a partition masker that uses our clustering
masker = shap.maskers.Partition(X, clustering=clustering)
# explain the model again
explainer = shap.Explainer(f, masker)
shap_values_partition = explainer(X[:100])
shap.plots.bar(shap_values_partition)
shap.plots.bar(shap_values_partition, cluster_threshold=2)
shap.plots.bar(shap_values_partition[0], cluster_threshold=2)
###Output
_____no_output_____
###Markdown
Documentation by example for `shap.plots.bar`This notebook is designed to demonstrate (and so document) how to use the `shap.plots.bar` function. It uses an XGBoost model trained on the classic UCI adult income dataset (which is a classification task to predict if people made over 50k in the 90s).
###Code
import xgboost
import shap
# train XGBoost model
X,y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=100, max_depth=2).fit(X, y)
# compute SHAP values
bg = shap.utils.sample(X, 100)
explainer = shap.Explainer(model, bg)
shap_values = explainer(X[:500])
###Output
_____no_output_____
###Markdown
Global bar plotPassing a matrix of SHAP values to the bar plot function creates a global feature importance plot, where the global importance of each feature is taken to be the mean absolute value for that feature over all the given samples.
###Code
shap.plots.bar(shap_values)
###Output
_____no_output_____
###Markdown
By default the bar plot only shows a maximum of ten bars, but this can be controlled with the `max_display` parameter:
###Code
shap.plots.bar(shap_values, max_display=12)
###Output
_____no_output_____
###Markdown
Local bar plotPassing a row of SHAP values to the bar plot function creates a local feature importance plot, where the bars are the SHAP values for each feature. Note that the feature values are shown in gray to the left of the feature names.
###Code
shap.plots.bar(shap_values[0])
###Output
_____no_output_____
###Markdown
Using feature clusteringOften features in datasets are partially or fully redundant with each other, where redundant means that a model could use either feature and still get the same accuracy. To find these features, practitioners will often compute correlation matrices among the features, or use some type of clustering method. When working with SHAP we recommend a more direct approach that measures feature redundancy through model loss comparisons. The `shap.utils.hclust` method can do this and build a hierarchical clustering of the features by training XGBoost models to predict the outcome for each pair of input features. For typical tabular datasets this results in much more accurate measures of feature redundancy than you would get from unsupervised methods like correlation.Once we compute such a clustering we can then pass it to the bar plot so we can simultaneously visualize both the feature redundancy structure and the feature importances. Note that by default we don't show all of the clustering structure, but only the parts of the clustering with distance < 0.5. Distance in the clustering is assumed to be scaled roughly between 0 and 1, where a distance of 0 means the features are perfectly redundant and 1 means they are completely independent. In the plot below we see that only relationship and marital status have more than 50% redundancy, so they are the only features grouped in the bar plot:
###Code
clustering = shap.utils.hclust(X, y) # by default this trains (X.shape[1] choose 2) 2-feature XGBoost models
shap.plots.bar(shap_values, clustering=clustering)
###Output
_____no_output_____
###Markdown
If we want to see more of the clustering structure we can adjust the `cluster_threshold` parameter from 0.5 to 0.9. Note that as we increase the threshold we constrain the ordering of the features to follow valid cluster leaf orderings. The bar plot sorts the feature importance values within each cluster and sub-cluster in an attempt to put the most important features at the top.
###Code
shap.plots.bar(shap_values, clustering=clustering, cluster_threshold=0.9)
###Output
_____no_output_____
###Markdown
Note that some explainers use a clustering structure during the explanation process. They do this both to avoid perturbing features in unrealistic ways while explaining a model, and for the sake of computational performance. When you compute SHAP explanations using these methods, they come with a clustering included in the Explanation object. When the bar plot finds such a clustering, it uses it without you needing to explicitly pass it through the `clustering` parameter:
###Code
# only model agnostic methods support shap.maskers.TabularPartitions right now so we wrap our model as a function
def f(x):
return model.predict(x, output_margin=True)
# define a partition masker that uses our clustering
masker = shap.maskers.TabularPartitions(bg, clustering=clustering)
# explain the model again
explainer = shap.Explainer(f, masker)
shap_values_partition = explainer(X[:100])
shap.plots.bar(shap_values_partition)
shap.plots.bar(shap_values_partition, cluster_threshold=2)
shap.plots.bar(shap_values_partition[0], cluster_threshold=2)
###Output
_____no_output_____
###Markdown
Documentation by example for `shap.plots.bar`This notebook is designed to demonstrate (and so document) how to use the `shap.plots.bar` function. It uses an XGBoost model trained on the classic UCI adult income dataset (which is a classification task to predict if people made over 50k in the 90s).
###Code
import xgboost
import shap
# train XGBoost model
X,y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=100, max_depth=2).fit(X, y)
# compute SHAP values
bg = shap.utils.sample(X, 100)
explainer = shap.Explainer(model, bg)
shap_values = explainer(X[:500])
###Output
_____no_output_____
###Markdown
Global bar plotPassing a matrix of SHAP values to the bar plot function creates a global feature importance plot, where the global importance of each feature is taken to be the mean absolute value for that feature over all the given samples.
###Code
shap.plots.bar(shap_values)
###Output
_____no_output_____
###Markdown
By default the bar plot only shows a maximum of ten bars, but this can be controlled with the `max_display` parameter:
###Code
shap.plots.bar(shap_values, max_display=12)
###Output
_____no_output_____
###Markdown
Local bar plotPassing a row of SHAP values to the bar plot function creates a local feature importance plot, where the bars are the SHAP values for each feature. Note that the feature values are shown in gray to the left of the feature names.
###Code
shap.plots.bar(shap_values[0])
###Output
_____no_output_____
###Markdown
Using feature clusteringOften features in datasets are partially or fully redundant with each other, where redundant means that a model could use either feature and still get the same accuracy. To find these features, practitioners will often compute correlation matrices among the features, or use some type of clustering method. When working with SHAP we recommend a more direct approach that measures feature redundancy through model loss comparisons. The `shap.utils.hclust` method can do this and build a hierarchical clustering of the features by training XGBoost models to predict the outcome for each pair of input features. For typical tabular datasets this results in much more accurate measures of feature redundancy than you would get from unsupervised methods like correlation.Once we compute such a clustering we can then pass it to the bar plot so we can simultaneously visualize both the feature redundancy structure and the feature importances. Note that by default we don't show all of the clustering structure, but only the parts of the clustering with distance < 0.5. Distance in the clustering is assumed to be scaled roughly between 0 and 1, where a distance of 0 means the features are perfectly redundant and 1 means they are completely independent. In the plot below we see that only relationship and marital status have more than 50% redundancy, so they are the only features grouped in the bar plot:
###Code
clustering = shap.utils.hclust(X, y) # by default this trains (X.shape[1] choose 2) 2-feature XGBoost models
shap.plots.bar(shap_values, clustering=clustering)
###Output
_____no_output_____
###Markdown
If we want to see more of the clustering structure we can adjust the `cluster_threshold` parameter from 0.5 to 0.9. Note that as we increase the threshold we constrain the ordering of the features to follow valid cluster leaf orderings. The bar plot sorts the feature importance values within each cluster and sub-cluster in an attempt to put the most important features at the top.
###Code
shap.plots.bar(shap_values, clustering=clustering, cluster_threshold=0.9)
###Output
_____no_output_____
###Markdown
Note that some explainers use a clustering structure during the explanation process. They do this both to avoid perturbing features in unrealistic ways while explaining a model, and for the sake of computational performance. When you compute SHAP explanations using these methods, they come with a clustering included in the Explanation object. When the bar plot finds such a clustering, it uses it without you needing to explicitly pass it through the `clustering` parameter:
###Code
# only model agnostic methods support shap.maskers.Partition right now so we wrap our model as a function
def f(x):
return model.predict(x, output_margin=True)
# define a partition masker that uses our clustering
masker = shap.maskers.Partition(bg, clustering=clustering)
# explain the model again
explainer = shap.Explainer(f, masker)
shap_values_partition = explainer(X[:100])
shap.plots.bar(shap_values_partition)
shap.plots.bar(shap_values_partition, cluster_threshold=2)
shap.plots.bar(shap_values_partition[0], cluster_threshold=2)
###Output
_____no_output_____
###Markdown
Documentation by example for `shap.plots.bar`This notebook is designed to demonstrate (and so document) how to use the `shap.plots.bar` function. It uses an XGBoost model trained on the classic UCI adult income dataset (which is a classification task to predict if people made over 50k in the 90s). Warning! This notebook documents the new SHAP API, and that API is still stabilizing over the coming weeks.
###Code
import xgboost
import shap
# train XGBoost model
X,y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=100, max_depth=2).fit(X, y)
# compute SHAP values
explainer = shap.Explainer(model, X)
shap_values = explainer(X)
###Output
_____no_output_____
###Markdown
Global bar plotPassing a matrix of SHAP values to the bar plot function creates a global feature importance plot, where the global importance of each feature is taken to be the mean absolute value for that feature over all the given samples.
###Code
shap.plots.bar(shap_values)
###Output
_____no_output_____
###Markdown
By default the bar plot only shows a maximum of ten bars, but this can be controlled with the `max_display` parameter:
###Code
shap.plots.bar(shap_values, max_display=12)
###Output
_____no_output_____
###Markdown
Local bar plotPassing a row of SHAP values to the bar plot function creates a local feature importance plot, where the bars are the SHAP values for each feature. Note that the feature values are shown in gray to the left of the feature names.
###Code
shap.plots.bar(shap_values[0])
###Output
_____no_output_____
###Markdown
Using feature clusteringOften features in datasets are partially or fully redundant with each other, where redundant means that a model could use either feature and still get the same accuracy. To find these features, practitioners will often compute correlation matrices among the features, or use some type of clustering method. When working with SHAP we recommend a more direct approach that measures feature redundancy through model loss comparisons. The `shap.utils.hclust` method can do this and build a hierarchical clustering of the features by training XGBoost models to predict the outcome for each pair of input features. For typical tabular datasets this results in much more accurate measures of feature redundancy than you would get from unsupervised methods like correlation.Once we compute such a clustering we can then pass it to the bar plot so we can simultaneously visualize both the feature redundancy structure and the feature importances. Note that by default we don't show all of the clustering structure, but only the parts of the clustering with distance < 0.5. Distance in the clustering is assumed to be scaled roughly between 0 and 1, where a distance of 0 means the features are perfectly redundant and 1 means they are completely independent. In the plot below we see that only relationship and marital status have more than 50% redundancy, so they are the only features grouped in the bar plot:
###Code
clustering = shap.utils.hclust(X, y) # by default this trains (X.shape[1] choose 2) 2-feature XGBoost models
shap.plots.bar(shap_values, clustering=clustering)
###Output
_____no_output_____
###Markdown
If we want to see more of the clustering structure we can adjust the `cluster_threshold` parameter from 0.5 to 0.9. Note that as we increase the threshold we constrain the ordering of the features to follow valid cluster leaf orderings. The bar plot sorts the feature importance values within each cluster and sub-cluster in an attempt to put the most important features at the top.
###Code
shap.plots.bar(shap_values, clustering=clustering, cluster_threshold=0.9)
###Output
_____no_output_____
###Markdown
Note that some explainers use a clustering structure during the explanation process. They do this both to avoid perturbing features in unrealistic ways while explaining a model, and for the sake of computational performance. When you compute SHAP explanations using these methods, they come with a clustering included in the Explanation object. When the bar plot finds such a clustering, it uses it without you needing to explicitly pass it through the `clustering` parameter:
###Code
# only model agnostic methods support shap.maskers.Partition right now so we wrap our model as a function
def f(x):
return model.predict(x, output_margin=True)
# define a partition masker that uses our clustering
masker = shap.maskers.Partition(X, clustering=clustering)
# explain the model again
explainer = shap.Explainer(f, masker)
shap_values_partition = explainer(X[:100])
shap.plots.bar(shap_values_partition)
shap.plots.bar(shap_values_partition, cluster_threshold=2)
shap.plots.bar(shap_values_partition[0], cluster_threshold=2)
###Output
_____no_output_____ |
time_series_data_analysis.ipynb | ###Markdown
SHARP Time Series Data Analysis BackgroundOver the summer, I have been working on analyzing SHARP time series data to understand what conditions lead to solar flares, and why this occurs on a physical level. There is much previous literature about flare prediction, but much of this literature fails to interpret the results in a physically meaningful manner. Furthermore, a time series approach has not been taken before to study this problem.The magnetic time series data used in this notebook is taken from the Helioseismic and Magnetic Imager (HMI) instrument on NASA's Solar Dynamics Observatory (SDO) satellite, which takes magnetic images of the sun at a 12-minute cadence. From these data, [SHARP variables](http://jsoc.stanford.edu/doc/data/hmi/sharp/sharp.htm) are extracted that describe magnetic conditions on the sun. Flaring data for the sun is provided by the [NOAA GOES](https://www.swpc.noaa.gov/products/goes-x-ray-flux) database, which keeps track of the soft x-ray flux on the sun, the metric used to determine whether an active region has flared. ---First let's import the general utility modules that we will need:
###Code
import csv
import json
import requests
import math
import random
from datetime import datetime
###Output
_____no_output_____
###Markdown
And load scientific packages [scipy](http://www.scipy.org), [sunpy](https://sunpy.org), and [numpy](https://www.numpy.org).
###Code
import scipy.stats
import sunpy
import sunpy.instr.goes
import numpy as np
###Output
_____no_output_____
###Markdown
--- Downloading DataJSOC (Joint Science Operations Center) keeps an up-to-date catalog of all the active regions observed on the sun. This can be found here: `http://jsoc.stanford.edu/doc/data/hmi/harpnum_to_noaa/all_harps_with_noaa_ars.txt`. The code block below takes a file `./data/all_harps_with_noaa_ars.txt` (which is a downloaded version of the aforementioned link) and extracts the harp_ids, as well as a dictionary of harp_ids corresponding to noaa_ids.To download the newest version of the file, one could use a tool such as `wget`: `wget http://jsoc.stanford.edu/doc/data/hmi/harpnum_to_noaa/all_harps_with_noaa_ars.txt`I will first list the functions for downloading data, then have a cell that runs the functions and saves the relevant data output to variables that are accessible in other methods. Here are the functions:
###Code
def get_harp_ids_and_harp_noaa_dict(filename='./data/all_harps_with_noaa_ars.txt'):
'''This method requires there to be a file filename with two columns: HARP IDs
and NOAA IDs. This method returns a list of HARP IDs and a dictionary of HARP
IDs corresponding to a list of NOAA IDs.
'''
harp_ids = []
harp_noaa_dict = {}
with open(filename) as f:
content = f.readlines()[1:] # Disregard the header line
for line in content:
harp_id = line.split()[0]
noaa_ids = line.split()[1].split(',')
harp_ids.append(int(harp_id))
harp_noaa_dict[int(harp_id)] = noaa_ids
return harp_ids, harp_noaa_dict
###Output
_____no_output_____
###Markdown
These are the variables that we will query from the HMI database:
###Code
QUERY_VARIABLES = ('T_REC,USFLUX,MEANGAM,MEANGBT,MEANGBZ,MEANGBH,MEANJZD,TOTUSJZ,MEANJZH,'
'TOTUSJH,ABSNJZH,SAVNCPP,MEANPOT,TOTPOT,MEANSHR,SHRGT45,R_VALUE,AREA_ACR'
)
import pandas
def query_data(harp_id):
    '''This method grabs data from the JSOC database. It queries the observation time and
    all of the SHARP keywords listed in QUERY_VARIABLES. This method also makes sure
that the data received is high-quality and accurate.
'''
url_base = 'http://jsoc.stanford.edu/cgi-bin/ajax/jsoc_info?ds=hmi.sharp_cea_720s'
harp_id_string = '[' + str(harp_id) + ']'
param_string = '[? (abs(OBS_VR)< 3500) and (QUALITY<65536) ?]'
keys_string = '&op=rs_list&key=' + QUERY_VARIABLES + ',CRVAL1,CRLN_OBS'
url = url_base + harp_id_string + param_string + keys_string
r = requests.get(url)
assert r.status_code == 200
data = json.loads(r.text)
keys = pandas.DataFrame()
for keyword_data in data['keywords']:
keyword = keyword_data['name']
vals = keyword_data['values']
keys[keyword] = vals
return keys
def convert_tai_to_datetime(t_str):
'''Helper method to convert a JSOC T_REC object into a python datetime object.'''
year = int(t_str[:4])
month = int(t_str[5:7])
day = int(t_str[8:10])
hour = int(t_str[11:13])
minute = int(t_str[14:16])
return datetime(year, month, day, hour, minute)
def convert_datetime_to_tai(t_obj):
'''Helper method to convert a datetime object into a JSOC T_REC object.'''
return str(t_obj.year) + '.' + str(t_obj.month) + '.' + str(t_obj.day) + '_' \
+ str(t_obj.hour) + ':' + str(t_obj.minute) + '_TAI'
def get_time_delta(start_time, end_time):
'''This method returns the time difference between two given datetime objects in
hours.
'''
return (end_time - start_time).total_seconds() / (60 * 60) # Convert to hours
def get_time_data(keys):
'''This method takes a keys object returned from query_data and converts and returns
the time data from keys.T_REC into a list of relative times, such that the first time
is zero and the last time is the range of keys.T_REC in hours.
'''
start_time = convert_tai_to_datetime(keys.T_REC[0])
time_data = []
for i in range(keys.T_REC.size):
time = convert_tai_to_datetime(keys.T_REC[i])
time_data.append(get_time_delta(start_time, time))
return time_data
def create_csv(keys, time_data, harp_id):
'''Given a keys object from query_data, a time_data list, and a harp_id, this method
    creates a csv file in ./data/[harp_id].csv whose columns are the true time (keys.T_REC),
    the relative time, and one column for each SHARP variable in QUERY_VARIABLES.
This method will not write any data that occurs outside the range of +/- 70 degrees
longitude from the meridian.
The purpose of this method is to write local data so that it is easy and fast to
access data in the future, since GOES and SHARP data access take a long time, and
querying every test would be inefficient.
'''
data_dir = './data/'
filename = data_dir + str(harp_id) + '.csv'
with open(filename, 'w') as csv_file:
writer = csv.writer(csv_file, delimiter=',', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['TRUE_TIME', 'TIME'] + QUERY_VARIABLES.split(',')[1:])
for i in range(len(keys.USFLUX)):
if abs(float(keys.CRVAL1[i]) - float(keys.CRLN_OBS[i])) < 70.0:
writer.writerow([keys.T_REC[i], time_data[i], keys.USFLUX[i],
keys.MEANGAM[i], keys.MEANGBT[i], keys.MEANGBZ[i],
keys.MEANGBH[i], keys.MEANJZD[i], keys.TOTUSJZ[i],
keys.MEANJZH[i], keys.TOTUSJH[i], keys.ABSNJZH[i],
keys.SAVNCPP[i], keys.MEANPOT[i], keys.TOTPOT[i],
keys.MEANSHR[i], keys.SHRGT45[i], keys.R_VALUE[i],
keys.AREA_ACR[i]])
def create_all_csvs(harp_ids):
    '''This method creates a csv file of time and SHARP variable data for all the HARP IDs
in the inputted harp_ids.
'''
count = 0
for harp_id in harp_ids:
count += 1
print(count, harp_id)
if count % 100 == 0: print(count)
keys = query_data(harp_id)
time_data = get_time_data(keys)
create_csv(keys, time_data, harp_id)
def read_data(harp_id):
'''This method reads the data from ./data/[harp_id].csv, and returns a pandas
    DataFrame with the true time, the relative time since the beginning of the active
    region data, and the SHARP variable columns written by create_csv.
'''
filename = './data/' + str(harp_id) + '.csv'
df = pandas.read_csv(filename)
df.TRUE_TIME = df.TRUE_TIME.map(convert_tai_to_datetime)
for i, row in df.iterrows():
if 'MISSING' in row.values:
df = df.drop(i)
df = df.reset_index()
return df
def get_flare_data_from_database(t_start, t_end, min_event):
'''This helper method accesses data from the GOES database. It returns
the metadata associated with each flaring active region greater in event
size than min_event and between time t_start and t_end.
'''
time_range = sunpy.time.TimeRange(t_start, t_end)
results = sunpy.instr.goes.get_goes_event_list(time_range, min_event)
return results
def get_flare_data(harp_ids, min_event):
'''This method accesses the GOES database to get the flare data for the maximum
time range of the inputted harp ids.
'''
first_keys = query_data(harp_ids[0])
t_start = first_keys.T_REC[0]
last_keys = query_data(harp_ids[-1])
t_end = last_keys.T_REC[len(last_keys.T_REC) - 1]
print('Time range:', t_start, 'to', t_end)
return get_flare_data_from_database(t_start, t_end, min_event)
def write_noaa_data_to_csv(flare_data):
'''This method writes the NOAA flare data to "./data/noaa_data.csv". This makes
loading the flaring data fast for future runs.
'''
with open('./data/noaa_data.csv', 'w') as csv_file:
field_names = flare_data[0].keys()
writer = csv.DictWriter(csv_file, fieldnames=field_names)
writer.writeheader()
for flare in flare_data:
writer.writerow(flare)
def get_noaa_data_from_csv():
'''This method loads the NOAA data from "./data/noaa_data.csv".'''
noaa_flare_set = []
with open('./data/noaa_data.csv', 'r') as csv_file:
reader = csv.DictReader(csv_file)
for row in reader:
noaa_flare_set.append(dict(row))
return noaa_flare_set
###Output
_____no_output_____
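###Markdown
As a quick illustrative check of the TAI time helpers defined above (an addition to the original analysis), a round trip shows the expected string format; note that `convert_datetime_to_tai` does not zero-pad the month or day.
###Code
# Illustrative round trip with the TAI helpers defined above.
t = convert_tai_to_datetime('2010.05.03_16:36_TAI')
print(t)                           # 2010-05-03 16:36:00
print(convert_datetime_to_tai(t))  # no zero padding in the reconstructed string
###Output
_____no_output_____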
###Markdown
Now we will run the code from the functions above to create `harp_ids`, `harp_noaa_dict`, and `flare_data`:- `harp_ids`: a list of all HARP IDs- `harp_noaa_dict`: a dictionary mapping the HARP IDs to the NOAA IDs- `flare_data`: the flare data downloaded from GOES
###Code
# Set recreate_data to True if you want to redownload all the data (takes 30+ minutes)
recreate_data = False
harp_ids, harp_noaa_dict = get_harp_ids_and_harp_noaa_dict()
if recreate_data:
create_all_csvs(harp_ids)
flare_data = get_flare_data(harp_ids, 'C1.0')
write_noaa_data_to_csv(flare_data)
else:
flare_data = get_noaa_data_from_csv()
print('Number of active regions:', len(harp_ids))
print('Number of flares:', len(flare_data))
###Output
Number of active regions: 1335
Number of flares: 8029
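###Markdown
Before processing these structures, it helps to peek at their shape. The cell below is an illustrative addition (output omitted here); the flare entries are dictionaries, and the fields used later in this notebook are 'peak_time', 'noaa_active_region', and 'goes_class'.
###Code
# Quick illustrative peek at the structures built above.
print(harp_ids[:3])
print(harp_noaa_dict[harp_ids[0]])
print({k: flare_data[0][k] for k in ('peak_time', 'noaa_active_region', 'goes_class')})
###Output
_____no_output_____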
###Markdown
--- Data ProcessingIn the next blocks of code, we will process the data in various ways to extract information and relate the data described above to each other.
###Code
def get_flared_noaa_id_set(flare_data):
    '''This method returns a set of all the NOAA IDs that have flared, based
on the data passed in from flare_data.
'''
noaa_flare_set = set()
for flare in flare_data:
noaa_flare_set.add(int(flare['noaa_active_region']))
return noaa_flare_set
def has_flared(harp_id, harp_noaa_dict, noaa_flare_set):
'''This method returns a boolean corresponding to whether the active region
corresponding to the harp_id has flared or not within its lifespan.
'''
for noaa_id in harp_noaa_dict[harp_id]:
if int(noaa_id) in noaa_flare_set:
return True
return False
def get_harp_id_to_flaring_times_dict(harp_ids, harp_noaa_dict, flare_data):
'''This method returns a dictionary where the keys are HARP IDs and
the values are a list of peak times where the given active region flared.
Times are given in units of hours after the first time in the harp_id data.
If the active region corresponding to the HARP IDs did not flare, then
the list will be empty.
'''
# Make a dictionary of NOAA ids as keys and flare times as values
noaa_id_flare_time_dict = {}
for flare in flare_data:
time = flare['peak_time']
noaa_id = int(flare['noaa_active_region'])
if noaa_id in noaa_id_flare_time_dict.keys():
noaa_id_flare_time_dict[noaa_id] += [time]
else:
noaa_id_flare_time_dict[noaa_id] = [time]
# Make a dictionary with HARP ids as keys and flare times as values
flare_time_dict = {}
noaa_ids = noaa_id_flare_time_dict.keys()
for harp_id in harp_ids:
keys = read_data(harp_id)
if len(keys.TRUE_TIME) == 0:
flare_time_dict[harp_id] = []
continue
flare_time_dict[harp_id] = []
datetime_start = keys.TRUE_TIME[0]
hour_start = keys.TIME[0]
for noaa_id in harp_noaa_dict[harp_id]:
if int(noaa_id) not in noaa_ids: continue
time_array = []
for time in noaa_id_flare_time_dict[int(noaa_id)]:
time_array.append(hour_start +
get_time_delta(datetime_start,
convert_tai_to_datetime(str(time))))
flare_time_dict[int(harp_id)] += time_array
return flare_time_dict
def find_unlabeled_flares_above_minimum(flare_data, min_class='M5.0'):
    '''While looking at the NOAA data, I noticed that the NOAA IDs of some flares
were labeled as 0. This method finds and returns flare_data entries that have
an NOAA ID of 0, and have a GOES class above min_class. This is used to see
if any of the unlabeled flares interfere with the learning algorithm.
'''
unlabeled_flare_list = []
for flare in flare_data:
if flare['noaa_active_region'] == '0':
goes_class = flare['goes_class']
classes = ['c', 'm', 'x']
if (
classes.index(goes_class[0].lower()) > classes.index(min_class[0].lower()) or
(classes.index(goes_class[0].lower()) == classes.index(min_class[0].lower()) and
float(goes_class[1:]) > float(min_class[1:]))
):
unlabeled_flare_list.append(flare)
return unlabeled_flare_list
def count_flared_num(harp_ids, harp_noaa_dict, noaa_flare_set):
'''This method returns the number of active regions in the inputted
harp_ids that have flared.
'''
number_flared = 0
for harp_id in harp_ids:
if has_flared(harp_id, harp_noaa_dict, noaa_flare_set): number_flared += 1
return number_flared
def get_segmented_data(harp_ids, flare_data, flare_time_dict, n=None,
return_harp_ids=False, num_hours=24):
'''This method returns two arrays: x and y. The x array includes time series
data, while y represents whether the corresponding active region in x flared.
The x and y arrays are built according to the following rule:
- If a flare occurs within num_hours hours after sample time t, it is
considered to belong to the positive case (i.e. the corresponding y entry
will be True).
- If no flare occurs within num_hours hours, it is considered to belong to
the negative case.
The x array is an array of arrays, where each array represents a num_hours-hour
set of data corresponding to an active region. Each of these num_hours-hour
arrays are arrays of dictionaries representing the data at each recorded interval
within the num_hours hours.
    The n parameter controls how many negative data points are generated. If n is set to
None (default), then the number of negative data points = the number of positive
data points.
'''
num_flares = len(flare_data)
if n:
num_samples_per_datapoint = int(20 * n / num_flares)
else:
n = len(flare_data) * 5 # Pick a large number
num_samples_per_datapoint = 10 # Number of negative samples from each region
def get_data_point(keys, flare_time):
'''Given the keys data and a flare time, returns a dictionary with SHARP
variables as keys, mapping each to the values corresponding to the harp_id.
The data is given for all data points num_hours before the flare_time.
'''
data_point = []
for i, time in enumerate(keys.TIME):
if time <= flare_time and time >= flare_time - num_hours:
data_point.append(keys.iloc[i])
if not data_point or data_point[-1]['TIME'] - data_point[0]['TIME'] < num_hours - 1:
return None
return data_point
def contains_nonflaring_24hrs(time_data, flare_data):
'''Given flaring data flare_data for an active region, returns True if the
flare_data contains a 24 hour period without flares, and False otherwise.
'''
previous_flare_time = time_data[0]
        for flare_time in flare_data + [time_data[0]]:
if flare_time - previous_flare_time > num_hours:
return True
previous_flare_time = flare_time
return False
def get_random_flare_time(time_data, flare_data):
'''Returns a random valid flare time for the given time_data and flare_data.
This method ensures that there is no flaring in the num_hours before the
returned flare time.
'''
c = 0
while True:
c += 1
is_valid_before, does_flare = False, False
end_time = time_data[random.randrange(len(time_data))]
for flare_time in flare_data + [time_data[0]]:
if end_time - flare_time > num_hours: is_valid_before = True
if abs(end_time - flare_time) < num_hours: does_flare = True
if is_valid_before and not does_flare: break
if c > 200: return None
return end_time
x_data = []
y_data = []
harp_list = []
num_negative = 0
for harp_id in harp_ids:
keys = read_data(harp_id)
flare_data = flare_time_dict[harp_id]
if not flare_data: continue
# Positive samples
for flare_time in flare_data:
# Throw out flare data with less than num_hours hours of preceding data or
# data that has flare outside of the dataset since the data was cleaned in
# the downloading data section.
if flare_time - keys.TIME[0] < num_hours or flare_time > keys.TIME.iloc[-1]: continue
data_point = get_data_point(keys, flare_time)
if data_point:
harp_list.append(harp_id)
x_data.append(data_point)
y_data.append(True) # True => flare is present
# Negative samples
if num_negative >= n: continue
for _ in range(num_samples_per_datapoint):
if not contains_nonflaring_24hrs(keys.TIME, flare_data): break
flare_time = get_random_flare_time(keys.TIME, flare_data)
if not flare_time: break
data_point = get_data_point(keys, flare_time)
if not data_point: break
harp_list.append(harp_id)
x_data.append(data_point)
y_data.append(False) # False => flare is not present
num_negative += 1
if return_harp_ids:
return x_data, y_data, harp_list
else:
return x_data, y_data
flare_time_dict = get_harp_id_to_flaring_times_dict(harp_ids, harp_noaa_dict, flare_data)
seg_x, seg_y, harp_list = get_segmented_data(harp_ids, flare_data, flare_time_dict,
n=4500, return_harp_ids=True)
positive_count, negative_count = 0, 0
for has_flare in seg_y:
if has_flare: positive_count += 1
else: negative_count += 1
print('# Positive:', positive_count, '--- # Negative:', negative_count)
###Output
# Positive: 4550 --- # Negative: 4503
###Markdown
Let's print the first couple terms of the first element of `seg_x` to get a good understanding of what the data looks like:
###Code
print(seg_x[0][0:2])
###Output
[index 121
TRUE_TIME 2010-05-03 16:36:00
TIME 26
USFLUX 5.217e+20
MEANGAM 30.896
MEANGBT 118.462
MEANGBZ 119.494
MEANGBH 45.12
MEANJZD -0.0636397
TOTUSJZ 7.49646e+11
MEANJZH -0.00472015
TOTUSJH 33.182
ABSNJZH 3.861
SAVNCPP 1.14898e+11
MEANPOT 2051.63
TOTPOT 2.22877e+21
MEANSHR 24.309
SHRGT45 5.501
R_VALUE 2.358
AREA_ACR 19.6372
Name: 121, dtype: object, index 122
TRUE_TIME 2010-05-03 16:48:00
TIME 26.2
USFLUX 5.15725e+20
MEANGAM 34.583
MEANGBT 113.826
MEANGBZ 116.839
MEANGBH 45.51
MEANJZD -0.171338
TOTUSJZ 7.10778e+11
MEANJZH -0.00544223
TOTUSJH 30.977
ABSNJZH 4.577
SAVNCPP 1.76321e+11
MEANPOT 2167.05
TOTPOT 2.42035e+21
MEANSHR 26.45
SHRGT45 7.729
R_VALUE 2.363
AREA_ACR 18.2257
Name: 122, dtype: object]
###Markdown
--- Plotting Variables over TimeIt is useful to create graphs in order to visually understand the relationship between variables over time.Below are many methods for creating different types of graphs. Many of the functions are flexible, allowing one to manipulate the graphs.First, let's import `matplotlib` methods useful for graphing:
###Code
import matplotlib
import matplotlib.pyplot as plt
def plot_graph(x, y, x_label, y_label, title, clr=None, scatter=False,
line=None, vertical_lines=None, formula=None, label=None):
'''This method uses matplotlib to create a graph of x vs. y with many different
parameters to customize the graph. This method is a base method for many of the
other graphing methods.
'''
# Style elements
text_style = dict(fontsize=12, fontdict={'family': 'monospace'})
# Add data to graph
if scatter:
plt.scatter(x, y, color=clr, label=label, alpha=0.8, s=5)
else:
plot = plt.plot(x, y, '.', color=clr, linestyle=line, label=label)
if vertical_lines:
for x_val in vertical_lines:
plt.axvline(x=x_val, color=clr)
plt.axhline(y=0, color='black', linewidth=1)
if formula:
x_vals = np.array(x)
y_vals = formula(x_vals)
plt.plot(x, y_vals, color=clr)
# Label the axes and the plot
ax = plt.gca()
ax.tick_params(labelsize=12)
ax.set_xlabel(x_label, **text_style)
ax.set_ylabel(y_label, **text_style)
ax.set_title(title, **text_style)
if label: plt.legend()
def plot_segmented_graphs(seg_x, seg_y, variables=['USFLUX'], flare=True, n=5,
color=None, delta=True, scale=False):
'''This method plots n random graphs that correspond to flaring active regions
if flare is True, and non-flaring active regions if flare is False.
If delta is True, it normalizes the graph (variables at time=0 are set to 0).
If scale is True, it normalizes the graph to be in the range [-1, 1].
'''
for _ in range(n):
i = random.randrange(len(seg_y))
while seg_y[i] != flare:
i = random.randrange(len(seg_y))
seg_data = seg_x[i]
for variable in variables:
x_data, y_data = [], []
start_data = seg_data[0][variable]
var_data = []
for data_pt in seg_data:
                var_data.append(data_pt[variable])
            var_data = np.array(var_data)  # convert so the element-wise arithmetic below works
            if delta:
max_data = max(max(var_data - start_data),
abs(min(var_data - start_data))) / 1e22
else:
max_data = max(max(var_data), abs(min(var_data))) / 1e22
for data_pt in seg_data:
x_data.append(data_pt['TIME'])
y_pt = data_pt[variable] / 1e22
if delta:
y_pt -= start_data / 1e22
if scale:
y_pt /= max_data
y_data.append(y_pt)
variable_names = map(lambda x : x.title().replace('_', ' '), variables)
plot_graph(x_data, y_data, 'Hours Since Active Region Detected',
'Units relative to maximum value',
', '.join(variable_names) + ' vs. Time for Active Region',
clr=color, label=variable)
plt.show()
num_graphs = 2
plot_segmented_graphs(seg_x, seg_y, scale=True, flare=False, n=num_graphs,
variables=['USFLUX', 'TOTPOT', 'AREA_ACR', 'R_VALUE'])
###Output
_____no_output_____
###Markdown
--- Machine Learning (from [Wikipedia](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)): Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.
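As a quick illustrative check (an addition to the original analysis), `scipy.stats.spearmanr` returns exactly +1 for any perfectly monotone increasing relationship, even a strongly nonlinear one:
###Code
# A nonlinear but perfectly monotone relationship has Spearman correlation 1.0.
x_check = np.arange(1, 11)
y_check = x_check ** 3
print(scipy.stats.spearmanr(x_check, y_check).correlation)
###Output
_____no_output_____
###Markdown
The function below computes this coefficient between two SHARP variables for every time series segment and reports the mean and standard deviation across segments.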
###Code
def calculate_spearman(seg_x, first_var, second_var):
'''Calculate the Spearman coefficient between two variables. This method calculates
the coefficient between the two variables for every time series data series, then
returns the mean and standard deviation of the coefficients.
'''
s_coeff_list = []
for data in seg_x:
first_var_data = []
second_var_data = []
for data_pt in data:
if not data_pt[first_var] or not data_pt[second_var]: continue
first_var_data.append(data_pt[first_var])
second_var_data.append(data_pt[second_var])
s_coeff = scipy.stats.spearmanr(first_var_data, second_var_data).correlation
if not math.isnan(s_coeff): s_coeff_list.append(s_coeff)
return np.mean(s_coeff_list), np.std(s_coeff_list)
for var in ['TOTPOT', 'AREA_ACR']:
s_coeff, s_dev = calculate_spearman(seg_x, 'USFLUX', var)
print('S_coefficient for flux vs.', var + '. mean:', s_coeff, ' std:', s_dev)
import scipy.optimize
def regression_helper(function, time_data, variable_data):
popt, _ = scipy.optimize.curve_fit(function, time_data, variable_data)
residuals = variable_data - function(time_data, *popt)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((variable_data - np.mean(variable_data)) ** 2)
r_squared = 1 - (ss_res / ss_tot)
return popt, r_squared
###Output
_____no_output_____
###Markdown
The following methods take `time_data` and some `variable_data`, then return different kinds of features based on the data.
###Code
def linear_features(time_data, variable_data, feature_names=False):
def f_linear(x, a, b):
return a * x + b
popt, r_squared = regression_helper(f_linear, time_data, variable_data)
if feature_names:
return np.array([*popt, r_squared]), ['slope', 'intercept', 'r^2_linear']
return np.array([*popt, r_squared])
def exponential_features(time_data, variable_data, feature_names=False):
def f_exponential(x, a, b):
return a * b ** x
popt, r_squared = regression_helper(f_exponential, time_data, variable_data)
if feature_names:
return np.array([popt[1], r_squared]), ['exp_val', 'r^2_exp']
return np.array([popt[1], r_squared])
def quadratic_features(time_data, variable_data, feature_names=False):
def f_quad(x, a, b, c):
return a * x ** 2 + b * x + c
popt, r_squared = regression_helper(f_quad, time_data, variable_data)
if feature_names:
return np.array([*popt, r_squared]), ['quad_1', 'quad_2', 'quad_3', 'r^2_quad']
return np.array([*popt, r_squared])
def cubic_features(time_data, variable_data, feature_names=False):
def f_cubic(x, a, b, c, d):
return a * x ** 3 + b * x ** 2 + c * x + d
popt, r_squared = regression_helper(f_cubic, time_data, variable_data)
if feature_names:
return np.array([*popt, r_squared]), ['cube_1', 'cube_2', 'cube_3', 'cube_4', 'r^2_cube']
return np.array([*popt, r_squared])
from scipy.interpolate import make_lsq_spline
from scipy.interpolate import CubicSpline
def spline_features(time_data, variable_data, feature_names=False):
elapsed_time = time_data[-1] - time_data[0]
t = [time_data[0] + elapsed_time / 4, time_data[0] + elapsed_time * 2 / 4,
time_data[0] + elapsed_time * 3 / 4]
k = 3
t = np.r_[(time_data[0],)*(k+1), t, (time_data[-1],)*(k+1)]
try:
formula = make_lsq_spline(time_data, variable_data, t, k)
except np.linalg.LinAlgError: # Not enough time data in each quadrant of the data
if feature_names: return None, None
return None
if feature_names:
return np.array(formula.c.flatten()), ['spline_1', 'spline_2', 'spline_3', 'spline_4',
'spline_5', 'spline_6', 'spline_7']
return np.array(formula.c.flatten())
def discrete_features(time_data, variable_data, feature_names=False):
features = []
features.append(np.mean(variable_data))
features.append(np.std(variable_data))
if feature_names:
        return features, ['mean', 'std']
    return features
def extract_time_series_features(time_data, variable_data, features):
feature_list = np.array([])
feature_names = []
for feature in features:
# Each feature is a function
data, names = feature(time_data, variable_data, feature_names=True)
if data is None or not any(data): return [], []
feature_list = np.append(feature_list, data)
feature_names += names
return feature_list, feature_names
def create_learning_dataset(seg_x, seg_y, variable, features):
'''Creates learning dataset with time series data.
'''
x_data, y_data = [], []
for i, data in enumerate(seg_x):
if len(data) < 4: continue
time_data, variable_data = [], []
for data_pt in data:
time_data.append(data_pt['TIME'])
if variable in ['USFLUX', 'TOTPOT']:
variable_data.append(data_pt[variable] / 1e22)
else:
variable_data.append(data_pt[variable])
time_data = np.array(time_data)
variable_data = np.array(variable_data)
if not any(variable_data): continue
series_data, names = extract_time_series_features(time_data, variable_data, features)
if not any(series_data): continue
x_data.append(series_data)
y_data.append(seg_y[i])
names = list(map(lambda x : variable + ' ' + x, names))
return x_data, y_data, names
features = [linear_features]
raw_x_data = np.array([])
y_data = []
feature_names = []
variables = ['USFLUX']
for variable in variables:
x, y, names = create_learning_dataset(seg_x, seg_y, variable, features)
feature_names += names
if raw_x_data.size == 0: raw_x_data = np.array(x)
else: raw_x_data = np.hstack((raw_x_data, np.array(x)))
y_data = y
print('Features used:', feature_names)
from sklearn.preprocessing import MinMaxScaler
def scale_x_data(x):
'''Method to scale each feature in the inputted x data to a range of 0 to 1.
Returns the scaled data.
'''
scaler = MinMaxScaler()
return scaler.fit_transform(x)
x_data = scale_x_data(raw_x_data)
print(len(x_data), len(y_data))
###Output
9053 9053
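###Markdown
As a quick illustrative sanity check of the feature extractors above (synthetic data, not from the SHARP dataset), a perfectly linear series should give back its slope, its intercept, and an r^2 of essentially 1:
###Code
# Sanity check: linear_features on v = 3t + 2 should return roughly [3, 2, 1].
t_check = np.linspace(0, 24, 20)
v_check = 3.0 * t_check + 2.0
print(linear_features(t_check, v_check, feature_names=True))
###Output
_____no_output_____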
###Markdown
The following two methods are helper functions to help run machine learning algorithms.
###Code
from sklearn.model_selection import train_test_split
def fit_algorithm(clf, x, y, n=1):
'''This method will fit the given classifier clf to the input x, y data
and will return the training and test accuracy of the model.
This method will randomize the train/test split n number of times and will
return the average train/test accuracy.
'''
avg_train, avg_test = 0, 0
for _ in range(n):
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
clf.fit(x_train, y_train)
avg_train += clf.score(x_train, y_train)
avg_test += clf.score(x_test, y_test)
return avg_train / n, avg_test / n
def print_info(clf, x, y, algorithm_name, best_accuracy=0, best_algorithm=None):
'''This method streamlines the code required to fit the given clf to the
model, as well as print out important statistics. This method returns the new
best algorithm and best accuracy, based on the test accuracy.
'''
print(algorithm_name + ':')
train_accuracy, test_accuracy = fit_algorithm(clf, x, y, 50)
print('> Train accuracy:', train_accuracy)
print('> Test accuracy:', test_accuracy)
result_vals_dict = {'TP': 0, 'FP': 0, 'TN': 0, 'FN':0}
for i, data_pt in enumerate(x):
prediction = clf.predict([data_pt])
if prediction == y[i]:
if prediction == 1:
result_vals_dict['TP'] += 1
else:
result_vals_dict['TN'] += 1
else:
if prediction == 1:
result_vals_dict['FP'] += 1
else:
result_vals_dict['FN'] += 1
precision = result_vals_dict['TP'] / (result_vals_dict['TP'] + result_vals_dict['FP'] + 1)
recall = result_vals_dict['TP'] / (result_vals_dict['TP'] + result_vals_dict['FN'] + 1)
tss_score = recall - result_vals_dict['FP'] / (result_vals_dict['FP'] + result_vals_dict['TN'])
print('> Precision:', precision)
print('> Recall:', recall)
print('> TSS Score:', tss_score)
if test_accuracy > best_accuracy:
best_accuracy = test_accuracy
best_algorithm = algorithm_name
return best_algorithm, best_accuracy
###Output
_____no_output_____
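###Markdown
The TSS (True Skill Statistic) reported by `print_info` is the recall minus the false alarm rate, TSS = TP/(TP+FN) - FP/(FP+TN); the +1 terms in the code above appear to guard against division by zero. A short illustrative calculation with hypothetical confusion-matrix counts:
###Code
# Hypothetical confusion-matrix counts, for illustration only.
TP, FN, FP, TN = 80, 20, 10, 90
tss = TP / (TP + FN) - FP / (FP + TN)
print(tss)  # 0.8 - 0.1 = 0.7
###Output
_____no_output_____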
###Markdown
Different Classification Algorithms and Their Pros and Cons1. Support Vector Machines (SVMs) * SVMs work by constructing hyper-planes in higher dimensional space. This can be used for classification by maximizing the distance between the hyper-plane and the training data of any class. * This is a good choice because it is a versatile classification algorithm.2. Stochastic Gradient Descent * Creates a linear classifier to minimize loss. * Less versatile than SVMs (this should not be an issue for the binary classification, however). * scikit-learn has the following built-in loss functions: hinge loss, modified Huber, and logistic.3. Multi-layer Perceptron * Can learn non-linear models. * Doesn't necessarily find the global optimum: different initial weights can alter validation accuracy. * Needs tweaking of hyperparameters such as the number of hidden neurons, layers, and iterations to work well.4. AdaBoost (Boosting algorithm) * Principle is to combine many weak learners to create one strong model. * Each weak learner concentrates on the examples that are missed by the previous learners.5. Random Forest * Each tree is built from a random sample of the total data (with replacement). * This tends to reduce the overall variance of the model. Let's import all the learning algorithms we need from the scikit-learn library:
###Code
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
def run_learning_algorithms(x, y):
'''This method runs different machine learning (ML) algorithms and prints
    statements indicating the accuracy, finally printing the best overall algorithm
in terms of test accuracy.
Current ML algorithms:
Support Vector Machine
Stochastic Gradient Descent
Multi-layer Perceptron
AdaBoost
Random Forest
'''
best_accuracy = 0
best_algorithm = None
#algorithm_name = 'Support Vector Machine'
#clf = SVC(gamma='scale')
#best_algorithm, best_accuracy = print_info(clf, x, y, algorithm_name, best_accuracy, best_algorithm)
#print('>', clf.support_vectors_, '\n')
algorithm_name = 'Stochastic Gradient Descent'
clf = SGDClassifier(loss='hinge', penalty='l2')
best_algorithm, best_accuracy = print_info(clf, x, y, algorithm_name, best_accuracy, best_algorithm)
print('>', clf.coef_, '\n')
#algorithm_name = 'Multi-layer Perceptron'
#clf = MLPClassifier(max_iter=500)
#best_algorithm, best_accuracy = print_info(clf, x, y, algorithm_name, best_accuracy, best_algorithm)
#print('>', clf.loss_, '\n')
algorithm_name = 'AdaBoost'
clf = AdaBoostClassifier(n_estimators=25, random_state=0)
best_algorithm, best_accuracy = print_info(clf, x, y, algorithm_name, best_accuracy, best_algorithm)
print('>', clf.feature_importances_, '\n')
algorithm_name = 'Random Forest'
clf = RandomForestClassifier(n_estimators=25, max_depth=2, random_state=0)
best_algorithm, best_accuracy = print_info(clf, x, y, algorithm_name, best_accuracy, best_algorithm)
print('>', clf.feature_importances_, '\n')
print('The best algorithm is', best_algorithm, 'with a test accuracy of', best_accuracy)
run_learning_algorithms(x_data, y_data)
def graph_features(x, y, feature_names, max_num_graphs=float('inf')):
'''Given the feature data as x, this function will graph features versus each other.
Different outputs in y will be displayed in different colors. The function will graph
every combination of features, and print them.
'''
single_feature_vectors = [[] for _ in range(len(x[0]))]
colors = []
color_map = {True: 'r', False: 'b'}
for i, data_pt in enumerate(x):
colors.append(color_map[y[i]])
for j in range(len(data_pt)):
single_feature_vectors[j].append(data_pt[j])
count = 0
for i in range(len(x[0])):
for j in range(i + 1, len(x[0])):
count += 1
plot_graph(single_feature_vectors[i], single_feature_vectors[j],
feature_names[i], feature_names[j],
feature_names[i] + ' vs. ' + feature_names[j],
clr=colors, scatter=True)
plt.show()
if count >= max_num_graphs: break
if count >= max_num_graphs: break
graph_features(x_data, y_data, feature_names)
###Output
_____no_output_____
###Markdown
--- Plotting Metadata In this section, I will include a few functions for graphing the results output by the machine learning modeling. Specifically, there is a method to understand the relationship between lag time and accuracy, and a method to understand how the importance of the coefficients in the models changes with lag time.The following two functions run the algorithms and get the data ready to be plotted and analyzed:
###Code
def lag_vs_accuracy_data(harp_ids, flare_time_dict, seg_x, seg_y, hour_range=range(2, 25),
ada=False, tss=False, features=[spline_features],
variables=['USFLUX', 'TOTPOT', 'AREA_ACR', 'R_VALUE']):
'''This function outputs lag time vs coefficient data in the form of a dictionary.
The dictionary keys are the variables in the variables parameter, and the values are
a list of three-tuples (lag time, accuracy, accuracy error) for all lag times in the
hour_range parameter. Note: the model is trained on a single variable with the learning
algorithm, so there will be len(variables) separate data series.
This function normalizes the data before learning by ensuring that the data at the
beginning of the time series is set to zero. This makes sure that the algorithm learns
on time series instead of discrete features.
By default, the function will return accuracy data (i.e. accuracy over time). If tss is
set to true, it will return TSS data instead of accuracy data.
The default model used is stochastic gradient descent. If the ada parameter is set to
True, then an AdaBoost model will be used instead.
This function takes harp_ids, flare_time_dict, seg_x, and seg_y as inputs.
Note: The default range does not include hour 1. This is by design: for many of the
fitting features such as spline_features and cubic_features, it does not make sense to
fit on one hour (i.e. 5 data points) of data.
'''
data_dict = {}
for variable in variables: data_dict[variable] = [] # List of (time, accuracy, error)
# Preprocessing to ensure that all the values in new_seg_x are floats (not strings)
new_seg_x = []
for data in seg_x:
def map_to_float(series):
'''Function to map the elements of a series to floats.'''
def to_float(x):
'''Converts x to float unless x is a timestamp.'''
if type(x) is pandas.Timestamp: return x
return float(x)
return series.map(to_float)
new_seg_x.append(list(map(map_to_float, data)))
for lag in hour_range:
modified_seg_x = []
# Convert data into difference data
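# (i.e. shift each truncated series so that its first sample is zero, matching the
# normalization described in the docstring above)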
for data in new_seg_x:
end_time = data[-1]['TIME']
for i, point in enumerate(data):
if end_time - point['TIME'] < lag:
data_tail = data[i:]
data_tail = list(map(lambda x : x - data_tail[0], data_tail))
modified_seg_x.append(data_tail)
break
lag_time = round(modified_seg_x[0][-1]['TIME'] - modified_seg_x[0][0]['TIME'])
for variable in variables:
# Get data ready for model
x, y_data, feature_names = create_learning_dataset(modified_seg_x, seg_y, variable, features)
raw_x_data = np.array(x)
x_data = scale_x_data(raw_x_data)
assert(len(x_data) == len(y_data))
# Run model n times, and take the standard deviation to determine the error
n = 100
if ada: clf = AdaBoostClassifier(n_estimators=25, random_state=0)
else: clf = SGDClassifier(loss='hinge', penalty='l2')
accuracies = []
for _ in range(n):
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.25)
clf.fit(x_train, y_train)
TP, TN, FP, FN = 0, 0, 0, 0
for i, data_pt in enumerate(x_test):
prediction = clf.predict([data_pt])
if prediction == y_test[i]:
if prediction: TP += 1
else: TN += 1
else:
if prediction: FP += 1
else: FN += 1
if tss:
accuracies.append(TP/(TP+FN) - FP/(FP+TN))
else:
accuracies.append((TP + TN)/(TP + TN + FP + FN))
print(np.mean(accuracies))
mean_var = np.mean(accuracies)
var_error = np.std(accuracies)
data_dict[variable].append((lag_time, mean_var, var_error))
return data_dict
###Output
_____no_output_____
###Markdown
We want to create lag vs. accuracy graphs for every SHARP variable (except time; QUERY_VARIABLES includes a `time` variable as its first element):
###Code
print(QUERY_VARIABLES.split(',')[1:])
accuracy_data_dict = lag_vs_accuracy_data(harp_ids, flare_time_dict, seg_x, seg_y,
ada=False, variables=QUERY_VARIABLES.split(',')[1:])
def lag_vs_coefficient_data(harp_ids, flare_time_dict, seg_x, seg_y, hour_range=range(2, 25),
ada=False, features=[spline_features],
f_score=False, variables=['USFLUX']):
'''This function outputs data of lag time vs. coefficient values for a machine learning
fit. This allows one to see how the relative importance of coefficients changes over time.
The function returns two lists: coef_data, which is the values of the coefficients at each
timestep, and time_data, which specifies the timesteps.
This function also has a f_score parameter. When this is set to true, the coefficient data
will be the ANOVA F-value computed for each feature for the data. By default this is false,
and the function returns the parameters of the machine learning fit.
(The paragraphs below are identical to lag_vs_accuracy_data)
The default model used is stochastic gradient descent. If the ada parameter is set to
True, then an AdaBoost model will be used instead.
This function takes harp_ids, flare_time_dict, seg_x, and seg_y as inputs.
Note: The default range does not include hour 1. This is by design: for many of the
fitting features such as spline_features and cubic_features, it does not make sense to
fit on one hour (i.e. 5 data points) of data.
'''
coef_data = {}
time_data = []
for variable in variables: coef_data[variable] = []
# Preprocessing to ensure that all the values in new_seg_x are floats (not strings)
new_seg_x = []
for data in seg_x:
def map_to_float(series):
'''Function to map the elements of a series to floats.'''
def to_float(x):
'''Converts x to float unless x is a timestamp.'''
if type(x) is pandas.Timestamp: return x
return float(x)
return series.map(to_float)
new_seg_x.append(list(map(map_to_float, data)))
for lag in hour_range:
modified_seg_x = []
# Take time off of the start
for data in new_seg_x:
end_time = data[-1]['TIME']
for i, point in enumerate(data):
if end_time - point['TIME'] < lag:
data_tail = data[i:]
data_tail = list(map(lambda x : x - data_tail[0], data_tail))
modified_seg_x.append(data_tail)
break
lag_time = round(modified_seg_x[0][-1]['TIME'] - modified_seg_x[0][0]['TIME'])
time_data.append(lag_time)
for variable in variables:
x, y_data, feature_names = create_learning_dataset(modified_seg_x, seg_y, variable, features)
raw_x_data = np.array(x)
x_data = scale_x_data(raw_x_data)
assert(len(x_data) == len(y_data))
# ANOVA F-value does not depend on a machine learning algorithm, so we can save
# time by not running the ML fit if f_score is True
if f_score:
selector = SelectKBest(f_classif, k='all')
selector.fit(x_data, y_data)
scores = selector.scores_
order = np.argsort(selector.scores_)
ordered_scores = list(map(lambda x : scores[x], order))
coef_data[variable].append(ordered_scores)
continue
# Run model n times, and take the standard deviation to determine the error
n = 10
if ada: clf = AdaBoostClassifier(n_estimators=25, random_state=0)
else: clf = SGDClassifier(loss='hinge', penalty='l2')
coefs = []
for _ in range(n):
_, test_accuracy = fit_algorithm(clf, x_data, y_data, 1)
if ada: coefs.append(clf.feature_importances_)
else: coefs.append(clf.coef_[0])
coef_data[variable].append(sum(coefs) / len(coefs)) # Average coefficients
return coef_data, time_data
coef_data, time_data = lag_vs_coefficient_data(harp_ids, flare_time_dict, seg_x, seg_y,
variables=QUERY_VARIABLES.split(',')[1:])
###Output
_____no_output_____
###Markdown
First, let's import methods from the `bokeh` graphing module that we will use to plot data.
###Code
from bokeh.plotting import figure, show, ColumnDataSource
from bokeh.models import HoverTool, Legend, Band, Range1d
from bokeh.io import output_notebook
output_notebook()
###Output
_____no_output_____
###Markdown
The next functions are used to plot the data:
###Code
# Colors taken from colorbrewer
COLORS = ['#a6cee3', '#1f78b4', '#b2df8a', '#33a02c', '#fb9a99', '#e31a1c', '#fdbf6f',
'#ff7f00', '#cab2d6', '#6a3d9a', '#ffff99', '#b15928', '#8dd3c7', '#fdb462',
'#d9d9d9', '#ffed6f', '#e31a1c']
def plot_variable_data(data_dict, variables=None, parameter='accuracy'):
'''This function plots the variable vs. lag time data of the given input data_dict
using the bokeh plotting library.
If variables is set to None, this function will plot all variables in data_dict. Else
it will plot all the variables in variables.
The parameter input is for labeling the graph. By default it is accuracy, and will
include this word in the title, y axis, and on the tooltips.
'''
variable_data, error_data = {}, {}
time_data, items = [], []
for var in data_dict:
# Parse tuples in data_dict
time_data, variable_subset, error_subset= [], [], []
for tup in data_dict[var]:
time_data.append(tup[0])
variable_subset.append(tup[1])
error_subset.append(tup[2])
variable_data[var] = variable_subset
error_data[var] = error_subset
# Basic plot setup
plot = figure(plot_width=800, plot_height=600, tools='',
toolbar_location=None, title='Lag time vs. ' + parameter,
x_axis_label='Lag time (hours)', y_axis_label=parameter.capitalize())
circles = []
min_val = 1
max_val = 0
for i, var in enumerate(variable_data):
if variables:
if var not in variables: continue
source = ColumnDataSource(data=dict(
x_data = time_data,
y_data = variable_data[var],
))
item = plot.line('x_data', 'y_data', line_width=1, line_alpha=0.5,
color=COLORS[i], source=source)
items.append((var, [item]))
circles.append(plot.circle('x_data', 'y_data', size=10, source=source,
fill_color=COLORS[i], hover_fill_color=COLORS[i],
fill_alpha=0.25, hover_alpha=0.5,
line_color=None, hover_line_color='white'))
# Used for creating error bands
err_xs, err_ys = [], []
for x, y, y_err in zip(time_data, variable_data[var], error_data[var]):
if y + y_err / 2 > max_val: max_val = y + y_err / 2
if y - y_err / 2 < min_val: min_val = y - y_err / 2
err_xs.append((x, x))
err_ys.append((y - y_err / 2, y + y_err / 2))
source = ColumnDataSource({
'base': time_data,
'lower': list(map(lambda x : x[0], err_ys)),
'upper': list(map(lambda x : x[1], err_ys))
})
band = Band(base='base', lower='lower', upper='upper', source=source,
level='underlay', fill_alpha=.5, line_width=1,
line_color='black', fill_color=COLORS[i])
plot.add_layout(band)
plot.add_tools(HoverTool(tooltips=[(parameter.capitalize(), '@y_data %')], renderers=circles,
mode='vline'))
plot.y_range = Range1d(min_val - (max_val - min_val) / 10,
max_val + (max_val - min_val) / 10)
plot.x_range = Range1d(0, 25)
plot.y_range = Range1d(0.45, 0.65)
legend = Legend(items=items)
legend.click_policy='hide'
plot.add_layout(legend, 'right')
plot.title.text_font_size = '16pt'
plot.xaxis.axis_label_text_font_size = "16pt"
plot.yaxis.axis_label_text_font_size = "16pt"
show(plot)
#plot_variable_data(accuracy_data_dict, variables=['TOTUSJH', 'TOTUSJZ', 'MEANJZD', 'R_VALUE', 'USFLUX', 'TOTPOT'])
def plot_coef_data(coef_data, time_data):
'''This function plots the coefficient data vs. lag time with the bokeh plotting
library. Each coefficient is displayed as a separate color.
'''
coef_data = np.array(coef_data)
transposed_data = coef_data.transpose()
sums = []
for var in coef_data:
sums.append(sum(list(map(lambda x : abs(x), var))) + 0.01)
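# The +0.01 offset keeps the normalization below well-defined even when every
# coefficient at a given lag is zero.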
normalized_data = []
for var in transposed_data:
normalized_data.append([abs(x) / sums[i] for i, x in enumerate(var)])
# Basic plot setup
plot = figure(plot_width=600, plot_height=300, tools='',
toolbar_location=None, title='Lag time vs. feature importances',
x_axis_label='Lag time (hr)', y_axis_label='Importance')
circles = []
items = []
for i, var in enumerate(normalized_data):
source = ColumnDataSource(data=dict(
x_data = time_data,
y_data = var
))
item = plot.line('x_data', 'y_data', line_width=1, color=COLORS[i], source=source)
items.append(('coef ' + str(i + 1), [item]))
circles.append(plot.circle('x_data', 'y_data', size=10, source=source,
fill_color=COLORS[i], hover_fill_color=COLORS[i],
fill_alpha=0.25, hover_alpha=0.5,
line_color=None, hover_line_color='white'))
plot.add_tools(HoverTool(tooltips=[('Importance', '@y_data')], renderers=circles,
mode='vline'))
plot.x_range = Range1d(0, 25)
legend = Legend(items=items)
plot.add_layout(legend, 'right')
plot.legend.click_policy='hide'
show(plot)
plot_coef_data(coef_data['USFLUX'], time_data)
###Output
_____no_output_____
###Markdown
The plot above is confusing. If we want to plot only specific features, we can manipulate the `coef_data` before passing it into `plot_coef_data`.We will splice the data such that we only plot `coef 2` and `coef 6`. These specific variables are meaningful because `coef 2` corresponds to the first half of the lag time and `coef 6` corresponds to the last half of the lag time. This is due to the properties of B-splines. Note: this is only true if we are plotting coefficients for `spline_features`.
###Code
spliced_data = list(map(lambda x : [x[1], x[-2]], coef_data['USFLUX']))
plot_coef_data(spliced_data, time_data)
###Output
_____no_output_____
###Markdown
Lastly, we have a function that plots the importance of the feature for spline fitting over time. This is built by using the ratios of the two variables above. Since the ratio between the two coefficients corresponds to the relative importance of the first and second half of the lag time, we can make a plot that reflects this.
###Code
from sklearn.linear_model import LinearRegression
def plot_spline_feature_importance(coef_data, time_data):
'''This method takes coefficient data and time data, and creates a plot of the
importance of the time series data for the spline model over time.
'''
first_and_last = list(map(lambda x : [abs(x[1]), abs(x[-2])], coef_data))
def normalize_points(point):
'''This method takes a list of two values and returns the normalized list.
It is normalized such that both numbers sum to 1.
'''
if point[0] == 0 and point[1] == 0: return [0.5, 0.5] # Inconclusive
else:
point_sum = point[0] + point[1]
return [point[0] / point_sum, point[1] / point_sum]
normalized_data = list(map(normalize_points, first_and_last))
time_dict = {}
for i, t in enumerate(time_data):
contains_nan = False
for coef in normalized_data[i]:
if np.isnan(coef): contains_nan = True
if contains_nan: continue
time_dict[t] = normalized_data[i]
time_points, data_points, data_point_ranges = [], [], []
for i, lag_time in enumerate(time_dict.keys()):
if i == 0:
time_points += [24 - lag_time * 3/4, 24 - lag_time/4]
data_points += time_dict[lag_time]
data_point_ranges += [(24 - lag_time, 24 - lag_time/2),
(24 - lag_time/2, 24)]
else:
# Keep track of areas since many areas overlap
second_half_area, first_half_area = 0, 0
second_half_range = (24 - lag_time/2, 24)
first_half_range = (24 - lag_time, 24 - lag_time/2)
for j, d in enumerate(data_point_ranges):
second_overlap, first_overlap = 0, 0
if second_half_range[1] > d[0]:
second_overlap = (min(second_half_range[1], d[1]) -
max(second_half_range[0], d[0]))
if second_overlap < 0: second_overlap = 0
second_half_area += second_overlap * data_points[j]
if first_half_range[1] > d[0]:
first_overlap = min(first_half_range[1], d[1]) - d[0]
first_half_area += first_overlap * data_points[j]
width = 1
# Adding 0.1 smooths the ratios
ratio = (time_dict[lag_time][0] + 0.1) / (time_dict[lag_time][1] + 0.1)
if ratio * second_half_area - first_half_area < 0:
average_ratio = (first_half_area / second_half_area + ratio) / 2
factor = average_ratio / (first_half_area / second_half_area)
for k, d in enumerate(data_point_ranges):
if first_half_range[1] > d[0]:
data_points[k] *= factor
data_points.append(0)
else:
data_points.append((ratio * second_half_area - first_half_area) / width)
data_point_ranges.append((24 - lag_time, 24 - lag_time + width))
time_points.append(24 - lag_time * 3/4)
areas = ([x * (data_point_ranges[i][1] - data_point_ranges[i][0])
for i, x in enumerate(data_points)])
total_area = sum(areas)
data_points = list(map(lambda x : x / total_area, data_points))
# Create plot
plot = figure(plot_width=600, plot_height=300, tools='', x_range=[0,24],
toolbar_location=None, title='Feature importance over time',
x_axis_label='Time', y_axis_label='Importance')
source = ColumnDataSource(data=dict(
x_data = time_points,
y_data = data_points
))
plot.circle('x_data', 'y_data', size=10, source=source,
fill_color='red', fill_alpha=1, line_color=None)
# To avoid division by 0, replace all 0s with 0.01
data_points = list(map(lambda x : x + 0.01, data_points))
reg = LinearRegression().fit(np.array(time_points).reshape(-1, 1), data_points)
plot.line([time_data[0], time_data[-1]],
[reg.predict([[time_data[0]]])[0],
reg.predict([[time_data[-1]]])[0]], line_width=2)
show(plot)
plot_spline_feature_importance(coef_data['USFLUX'], time_data)
###Output
_____no_output_____
###Markdown
Lastly, we can plot the difference in the importance of the first half of the lag time (coefficient 2) versus the importance of the last half of the lag time (coefficient 6)
###Code
def plot_difference_data(coef_data, time_data):
'''Plot, for each variable, the normalized difference between the importance of the
last half of the lag time (coefficient 6) and the first half (coefficient 2),
as a function of lag time.
'''
normalized_coef_data = {}
def normalize_points(point):
'''This method takes a list of two values and returns the ratio of the
second data point to the first data point.
'''
if point[0] == 0 and point[1] == 0: return 1 # Inconclusive
else:
point_sum = point[0] + point[1]
return (point[1] - point[0]) / point_sum
for coef in coef_data:
normalized_coef_data[coef] = list(map(lambda x : [abs(x[1]), abs(x[-2])], coef_data[coef]))
normalized_coef_data[coef] = list(map(normalize_points, normalized_coef_data[coef]))
# Basic plot setup
plot = figure(plot_width=600, plot_height=400, tools='',
toolbar_location=None, title='Lag time vs. ratios',
x_axis_label='Lag time (hr)', y_axis_label='Difference')
circles = []
items = []
for i, var in enumerate(normalized_coef_data):
source = ColumnDataSource(data=dict(
x_data = time_data,
y_data = normalized_coef_data[var]
))
item = plot.line('x_data', 'y_data', line_width=1, color=COLORS[i], source=source)
items.append((var + ' ratio', [item]))
circles.append(plot.circle('x_data', 'y_data', size=10, source=source,
fill_color=COLORS[i], hover_fill_color=COLORS[i],
fill_alpha=0.25, hover_alpha=0.5,
line_color=None, hover_line_color='white'))
plot.add_tools(HoverTool(tooltips=[('Ratio', '@y_data')], renderers=circles,
mode='vline'))
plot.x_range = Range1d(0, 25)
legend = Legend(items=items)
plot.add_layout(legend, 'right')
plot.legend.click_policy='hide'
show(plot)
plot_difference_data(coef_data, time_data)
###Output
_____no_output_____ |
sagemaker/sm-special-webinar/lab_1_training/1.2.SageMaker-Training+Experiments.ipynb | ###Markdown
1.2 SageMaker Training with Experiments — Running the Training Job Notebook overview- By adding SageMaker Experiments to SageMaker Training, you can compare the results of multiple experiments. - [Import the libraries required to run the job](Import-the-libraries-required-to-run-the-job) - [Define the SageMaker session, role, and bucket](Define-the-SageMaker-session,-role,-and-bucket) - [Define hyperparameters](Define-hyperparameters) - [Define the training job](Define-the-training-job) - training script name - training source directory - framework type and version used by the training code - training instance type and count - SageMaker session - training job hyperparameters - S3 bucket settings for the training job artifacts, etc. - [Specify the training dataset](Specify-the-training-dataset) - S3 URI of the dataset used for training - [Set up the SageMaker experiment](Set-up-the-SageMaker-experiment) - [Run the training job](Run-the-training-job) - [Describe the dataset](Describe-the-dataset) - [View the experiment results](View-the-experiment-results) Import the libraries required to run the job
###Code
import boto3
import sagemaker
###Output
_____no_output_____
###Markdown
Define the SageMaker session, role, and bucket
###Code
sagemaker_session = sagemaker.session.Session()
role = sagemaker.get_execution_role()
bucket = sagemaker_session.default_bucket()
code_location = f's3://{bucket}/xgboost/code'
output_path = f's3://{bucket}/xgboost/output'
###Output
_____no_output_____
###Markdown
Define hyperparameters
###Code
hyperparameters = {
"scale_pos_weight" : "29",
"max_depth": "3",
"eta": "0.2",
"objective": "binary:logistic",
"num_round": "100",
}
###Output
_____no_output_____
###Markdown
Define the training job
###Code
instance_count = 1
instance_type = "ml.m5.xlarge"
use_spot_instances = True
max_run=1*60*60
max_wait=1*60*60
from sagemaker.xgboost.estimator import XGBoost
xgb_estimator = XGBoost(
entry_point="xgboost_starter_script.py",
source_dir="src",
output_path=output_path,
code_location=code_location,
hyperparameters=hyperparameters,
role=role,
sagemaker_session=sagemaker_session,
instance_count=instance_count,
instance_type=instance_type,
framework_version="1.3-1",
max_run=max_run,
use_spot_instances=use_spot_instances, # use spot instances
max_wait=max_wait,
)
###Output
_____no_output_____
###Markdown
Specify the training dataset
###Code
data_path=f's3://{bucket}/xgboost/dataset'
!aws s3 sync ../data/dataset/ $data_path
###Output
_____no_output_____
###Markdown
Set up the SageMaker experiment
###Code
# !pip install -U sagemaker-experiments
experiment_name='xgboost-poc-1'
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
from time import strftime
def create_experiment(experiment_name):
try:
sm_experiment = Experiment.load(experiment_name)
except:
sm_experiment = Experiment.create(experiment_name=experiment_name,
tags=[
{
'Key': 'model_name',
'Value': 'xgboost'
}
])
def create_trial(experiment_name, i_type, i_cnt, spot=False):
create_date = strftime("%m%d-%H%M%s")
algo = 'xgboost'
spot = 's' if spot else 'd'
i_type = i_type[3:9].replace('.','-')
trial = "-".join([i_type,str(i_cnt),algo, spot])
sm_trial = Trial.create(trial_name=f'{experiment_name}-{trial}-{create_date}',
experiment_name=experiment_name)
job_name = f'{sm_trial.trial_name}'
return job_name
###Output
_____no_output_____
###Markdown
Run the training job
###Code
create_experiment(experiment_name)
job_name = create_trial(experiment_name, instance_type, instance_count, use_spot_instances)
xgb_estimator.fit(inputs = {'train': data_path},
job_name = job_name,
experiment_config={
'TrialName': job_name,
'TrialComponentDisplayName': job_name,
},
wait=False)
xgb_estimator.logs()
###Output
_____no_output_____
###Markdown
View the experiment resultsHere we review the results of the experiments run above.- For each trial of a training job, you can inspect the training data used, the model input hyperparameters, the model evaluation metrics, the location of the model artifacts, and more.- **All of the information below can also be viewed intuitively in SageMaker Studio.**
###Code
from sagemaker.analytics import ExperimentAnalytics
import pandas as pd
pd.options.display.max_columns = 50
pd.options.display.max_rows = 5
pd.options.display.max_colwidth = 50
trial_component_training_analytics = ExperimentAnalytics(
sagemaker_session= sagemaker_session,
experiment_name= experiment_name,
sort_by="metrics.validation:auc.max",
sort_order="Descending",
metric_names=["validation:auc"]
)
trial_component_training_analytics.dataframe()[['Experiments', 'Trials', 'validation:auc - Min', 'validation:auc - Max',
'validation:auc - Avg', 'validation:auc - StdDev', 'validation:auc - Last',
'eta', 'max_depth', 'num_round', 'scale_pos_weight']]
###Output
_____no_output_____ |
source/notebooks/L5/static_maps.ipynb | ###Markdown
Static maps Download dataBefore we start you need to download (and then extract) the dataset zip-package used during this lesson [from this link](https://github.com/Automating-GIS-processes/Lesson-5-Making-Maps/raw/master/data/dataE5.zip).You should have the following Shapefiles in the `dataE5` folder: - addresses.shp - metro.shp - roads.shp - some.geojson - TravelTimes_to_5975375_RailwayStation.shp - Vaestotietoruudukko_2015.shpExtract the files into a folder called `data`:``` $ cd /home/jovyan/notebooks/L5 $ wget https://github.com/Automating-GIS-processes/Lesson-5-Making-Maps/raw/master/data/dataE5.zip $ unzip dataE5.zip -d data``` Static maps in GeopandasWe have already seen quite a few examples of how to create static maps using Geopandas during the previous lessons.Thus, we won't spend too much time repeating how to make such maps; instead, let's create one with more layers on it than the single-layer maps we have mostly made so far.- Let's create a static accessibility map with roads and the metro line on it.
###Code
import geopandas as gpd
import matplotlib.pyplot as plt
%matplotlib inline
# Filepaths
grid_fp = "data/TravelTimes_to_5975375_RailwayStation.shp"
roads_fp = "data/roads.shp"
metro_fp = "data/metro.shp"
# Read files
grid = gpd.read_file(grid_fp)
roads = gpd.read_file(roads_fp)
metro = gpd.read_file(metro_fp)
###Output
_____no_output_____
###Markdown
Next, we need to be sure that the files are in the same coordinate system. - Let's use the crs of our travel time grid as the basis:
###Code
# Get the CRS of the grid
CRS = grid.crs
# Reproject geometries using the crs of travel time grid
roads['geometry'] = roads['geometry'].to_crs(crs=CRS)
metro['geometry'] = metro['geometry'].to_crs(crs=CRS)
###Output
_____no_output_____
###Markdown
- Finally we can make a visualization using the `.plot()` -function in Geopandas.
###Code
# Visualize the travel times into 9 classes using "Quantiles" classification scheme
# Add also a little bit of transparency with `alpha` parameter
# (ranges from 0 to 1 where 0 is fully transparent and 1 has no transparency)
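# Note: the `scheme` classification option below assumes the mapclassify (PySAL) package is available.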
my_map = grid.plot(column="car_r_t", linewidth=0.03, cmap="Reds", scheme="quantiles", k=9, alpha=0.9)
# Add roads on top of the grid
# (use ax parameter to define the map on top of which the second items are plotted)
roads.plot(ax=my_map, color="grey", linewidth=1.5)
# Add metro on top of the previous map
metro.plot(ax=my_map, color="red", linewidth=2.5)
# Remove the empty white-space around the axes
plt.tight_layout()
# Save the figure as png file with resolution of 300 dpi
outfp = "static_map.png"
plt.savefig(outfp, dpi=300)
###Output
_____no_output_____
###Markdown
Adding a basemap from an external sourceIt is often useful to add a basemap to your visualization that shows e.g. streets and their names and other useful information directly underneath your visualization. This can be done easily by using ready-made background map tiles that are provided by different providers such as [OpenStreetMap](https://wiki.openstreetmap.org/wiki/Tiles) or [Stamen Design](http://maps.stamen.com). A Python library called [contextily](https://github.com/darribas/contextily) is a handy package that can be used to fetch geospatial raster files and add them to your maps. Map tiles are typically distributed in the [Web Mercator projection (EPSG:3857)](http://spatialreference.org/ref/sr-org/epsg3857-wgs84-web-mercator-auxiliary-sphere/), hence it is necessary to reproject all the spatial data into [Web Mercator](https://en.wikipedia.org/wiki/Web_Mercator_projection) before visualizing the data.In this tutorial, we will see how to add a basemap underneath our previous visualization.- First, we need to read the data and reproject it to the EPSG:3857 projection (Web Mercator)
###Code
import geopandas as gpd
import matplotlib.pyplot as plt
import contextily as ctx
%matplotlib inline
# Filepaths
grid_fp = "data/TravelTimes_to_5975375_RailwayStation.shp"
# Read data
grid = gpd.read_file(grid_fp)
# Reproject to EPSG 3857
data = grid.to_crs(epsg=3857)
print(data.crs)
print(data.head(2))
###Output
{'init': 'epsg:3857', 'no_defs': True}
car_m_d car_m_t car_r_d car_r_t from_id pt_m_d pt_m_t pt_m_tt \
0 32297 43 32260 48 5785640 32616 116 147
1 32508 43 32471 49 5785641 32822 119 145
pt_r_d pt_r_t pt_r_tt to_id walk_d walk_t \
0 32616 108 139 5975375 32164 459
1 32822 111 133 5975375 29547 422
geometry
0 POLYGON ((2767221.645887391 8489079.100944027,...
1 POLYGON ((2767726.328605124 8489095.52034446, ...
###Markdown
Now as we can see, the data has been projected to `epsg:3857`, which can be confirmed by looking at the coordinate values in the `geometry` column (they are now represented in a metric system).- Next, we can plot our data using Geopandas and add a basemap for our plot by using a function called `.add_basemap()` from contextily:
###Code
# Plot the data
ax = data.plot(column='pt_r_t', cmap='RdYlBu', linewidth=0, scheme="quantiles", k=9, alpha=0.6)
# Add basemap
ctx.add_basemap(ax)
###Output
_____no_output_____
###Markdown
As we can see, now the map has a background map that is by default using a style `ST_Terrain` fetched from [Stamen Design](http://maps.stamen.com/terrain). However, there are various other possible data sources and styles that can be used. - `tile_providers` contains some of the basic url-addresses for different providers and styles that can be used to control the appearance of your background map:
###Code
print(dir(ctx.tile_providers))
###Output
['OSM_A', 'OSM_B', 'OSM_C', 'ST_TERRAIN', 'ST_TERRAIN_BACKGROUND', 'ST_TERRAIN_LABELS', 'ST_TERRAIN_LINES', 'ST_TONER', 'ST_TONER_BACKGROUND', 'ST_TONER_HYBRID', 'ST_TONER_LABELS', 'ST_TONER_LINES', 'ST_TONER_LITE', 'ST_WATERCOLOR', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__']
###Markdown
Here, all the names written in capital letters are the ones that can be used as different basemap styles. All names starting with `ST_` are from Stamen Design, and `OSM_A` (B and C) is a basic map tile style provided by OpenStreetMap. Notice that the letters A, B, and C only direct to different tile servers, they do not change the style. It is also possible to use other tile providers by passing in the url of the tile provider that you are interested in.- The tile provider can be changed by passing the provider's web address to the `url` -parameter in `add_basemap()`. Let's see how we can change the style to `OSM_A` which gives us a background map based on OpenStreetMap:
###Code
# Plot the data
ax = data.plot(column='pt_r_t', cmap='RdYlBu', linewidth=0, scheme="quantiles", k=9, alpha=0.6)
# Add basemap with `ST_TONER` style
ctx.add_basemap(ax, url=ctx.tile_providers.OSM_A)
###Output
_____no_output_____
###Markdown
As we can see, now the background map changed a bit compared to the earlier one as it was fetched from OpenStreetMap. - Let's take a subset of our data to see the background map characteristics a bit better.
###Code
# Take only grid cells with `pt_r_t` travel time values ranging from 0-10
subset = data.loc[(data['pt_r_t']>=0) & (data['pt_r_t']<=10)]
# Plot the data from subset
ax = subset.plot(column='pt_r_t', cmap='RdYlBu', linewidth=0, scheme="quantiles", k=9, alpha=0.6)
# Add basemap with `OSM_A` style
ctx.add_basemap(ax, url=ctx.tile_providers.OSM_A)
###Output
_____no_output_____
###Markdown
As we can see now our map has much more detail in it as the zoom level of the background map is larger. By default `contextily` sets the zoom level automatically, but it is also possible to control it manually using the `zoom` parameter. The zoom level is by default specified as `auto`, but you can control it by passing in a [zoom level](https://wiki.openstreetmap.org/wiki/Zoom_levels) as a number typically ranging from 1 to 19 (the larger the number, the more detail your basemap will have).- Let's reduce the level of detail of our map by passing `zoom=11`:
###Code
# Plot the data from subset
ax = subset.plot(column='pt_r_t', cmap='RdYlBu', linewidth=0, scheme="quantiles", k=9, alpha=0.6)
# Add basemap with `OSM_A` style using zoom level of 11
ctx.add_basemap(ax, zoom=11, url=ctx.tile_providers.OSM_A)
###Output
_____no_output_____
###Markdown
As we can see, the map now has less detail, but it also zoomed the view out from our data.- We can use the `ax.set_xlim()` and `ax.set_ylim()` parameters to crop our map. They take as input the minimum and maximum coordinates on each axis (x and y). We can also change or remove the attribution text by using the `attribution` parameter:
###Code
# Plot the data from subset
ax = subset.plot(column='pt_r_t', cmap='RdYlBu', linewidth=0, scheme="quantiles", k=9, alpha=0.6)
# Add basemap with `OSM_A` style using zoom level of 11
# Remove the attribution info by passing empty string into it
ctx.add_basemap(ax, zoom=11, attribution="", url=ctx.tile_providers.OSM_A)
# Crop the figure
ax.set_xlim(2770000, 2785000)
ax.set_ylim(8435000, 8442500)
###Output
_____no_output_____
###Markdown
As we can see now the image was cropped to better fit the extent of our data. It is also possible to use many other map tiles from different providers as the background map. A good list of different available sources can be found [here](http://leaflet-extras.github.io/leaflet-providers/preview/). When using map tiles from different sources, it is necessary to construct a url address for the tile provider following a format defined by the provider. Next, we will see how to use map tiles provided by CartoDB. To do that we need to build the url address following their [definition](https://github.com/CartoDB/basemap-styles1-web-raster-basemaps) `'https://{s}.basemaps.cartocdn.com/{style}/{z}/{x}/{y}{r}.png'` where: - {s}: one of the available subdomains, either [a,b,c,d] - {z} : Zoom level. We support from 0 to 20 zoom levels in OSM tiling system. - {x},{y}: Tile coordinates in OSM tiling system - {scale}: OPTIONAL "@2x" for double resolution tiles - {style}: Map style, possible value is one of: - light_all, - dark_all, - light_nolabels, - light_only_labels, - dark_nolabels, - dark_only_labels, - rastertiles/voyager, - rastertiles/voyager_nolabels, - rastertiles/voyager_only_labels, - rastertiles/voyager_labels_under - We will use this information to build the url in the way that contextily wants it:
###Code
# The formatting should follow: 'https://{s}.basemaps.cartocdn.com/{style}/{z}/{x}/{y}{r}.png'
# Specify the style to use
style = "rastertiles/voyager"
cartodb_url = 'https://a.basemaps.cartocdn.com/%s/tileZ/tileX/tileY.png' % style
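# The tileZ/tileX/tileY tokens are placeholders that contextily substitutes with the
# actual tile coordinates when downloading tiles (assumption based on the url format used here).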
# Plot the data from subset
ax = subset.plot(column='pt_r_t', cmap='RdYlBu', linewidth=0, scheme="quantiles", k=9, alpha=0.6)
# Add basemap with `OSM_A` style using zoom level of 14
# Remove the attribution info by passing empty string into it
ctx.add_basemap(ax, zoom=14, attribution="", url=cartodb_url)
# Crop the figure
ax.set_xlim(2770000, 2785000)
ax.set_ylim(8435000, 8442500)
###Output
_____no_output_____
###Markdown
As we can see, now we have yet again a different kind of background map, this time coming from CartoDB. - Let's make a minor modification and change the style from `"rastertiles/voyager"` to `"dark_all"`:
###Code
# The formatting should follow: 'https://{s}.basemaps.cartocdn.com/{style}/{z}/{x}/{y}{r}.png'
# Specify the style to use
style = "dark_all"
cartodb_url = 'https://a.basemaps.cartocdn.com/%s/tileZ/tileX/tileY.png' % style
# Plot the data from subset
ax = subset.plot(column='pt_r_t', cmap='RdYlBu', linewidth=0, scheme="quantiles", k=9, alpha=0.6)
# Add basemap with `OSM_A` style using zoom level of 14
# Remove the attribution info by passing empty string into it
ctx.add_basemap(ax, zoom=13, attribution="", url=cartodb_url)
# Crop the figure
ax.set_xlim(2770000, 2785000)
ax.set_ylim(8435000, 8442500)
###Output
_____no_output_____ |
Double-Dueling-DQN.ipynb | ###Markdown
Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and BeyondIn this iPython notebook I implement a Deep Q-Network using both Double DQN and Dueling DQN. The agent learns to solve a navigation task in a basic grid world. To learn more, read here: https://medium.com/p/8438a3e2b8dfFor more reinforcement learning tutorials, see:https://github.com/awjuliani/DeepRL-Agents
###Code
from __future__ import division
import numpy as np
import random
import tensorflow as tf
import tensorflow.contrib.slim as slim
from helper2 import make_gif
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the game environment Feel free to adjust the size of the gridworld. Making it smaller provides an easier task for our DQN agent, while making the world larger increases the challenge.
###Code
from gridworld import gameEnv
env = gameEnv(partial=False,size=5)
print(env.actions)
###Output
4
###Markdown
Above is an example of a starting environment in our simple game. The agent controls the blue square, and can move up, down, left, or right. The goal is to move to the green square (for +1 reward) and avoid the red square (for -1 reward). The position of the three blocks is randomized every episode. Implementing the network itself
###Code
class Qnetwork():
def __init__(self,h_size):
#The network receives a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
self.scalarInput = tf.placeholder(shape=[None,21168],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,3])
self.conv1 = slim.conv2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = slim.conv2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = slim.conv2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = slim.conv2d( \
inputs=self.conv3,num_outputs=h_size,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
#We take the output from the final convolutional layer and split it into separate advantage and value streams.
self.streamAC,self.streamVC = tf.split(self.conv4,2,3)
self.streamA = slim.flatten(self.streamAC)
self.streamV = slim.flatten(self.streamVC)
xavier_init = tf.contrib.layers.xavier_initializer()
self.AW = tf.Variable(xavier_init([h_size//2,env.actions]))
self.VW = tf.Variable(xavier_init([h_size//2,1]))
self.Advantage = tf.matmul(self.streamA,self.AW)
self.Value = tf.matmul(self.streamV,self.VW)
#Then combine them together to get our final Q-values.
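#Dueling aggregation: Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a)); subtracting the mean advantage keeps the value and advantage streams identifiable.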
self.Qout = self.Value + tf.subtract(self.Advantage,tf.reduce_mean(self.Advantage,axis=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.actions,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.multiply(self.Qout, self.actions_onehot), axis=1)
self.td_error = tf.square(self.targetQ - self.Q)
self.loss = tf.reduce_mean(self.td_error)
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
###Output
_____no_output_____
###Markdown
Experience Replay This class allows us to store experiences and sample them randomly to train the network.
###Code
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
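# Illustrative usage (mirrors the training loop below): each experience is stored as
# a 5-tuple [s, a, r, s1, d] = state, action, reward, next state, done flag.
# buf = experience_buffer()
# buf.add(np.reshape(np.array([s, a, r, s1, d]), [1, 5]))
# batch = buf.sample(32)   # -> numpy array of shape [32, 5]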
###Output
_____no_output_____
###Markdown
This is a simple function to resize our game frames.
###Code
def processState(states):
return np.reshape(states,[21168])
###Output
_____no_output_____
###Markdown
These functions allow us to update the parameters of our target network with those of the primary network.
###Code
def updateTargetGraph(tfVars,tau):
#tfVars are all the trainable values of the computation graph, i.e. all the weights of the networks (main and target)
#tau is the the ratio to which we update the Target network with respect to the Main network
total_vars = len(tfVars)
op_holder = []
#Here we need to understand the structure of the tfVars array.
#The first half entries are the trainable values of the Main Network
#The last half entries are the trainable values of the Target Network
for idx,var in enumerate(tfVars[0:total_vars//2]):
#New_targetNet_values = tau * New_MainNet_values + (1 - tau) * Old_MainNet_values
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
#This function just runs the session to compute the above expression
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
###Output
_____no_output_____
###Markdown
Training the network Setting all the training parameters
###Code
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
annealing_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 10000 #How many episodes of game environment to train network with.
pre_train_steps = 10000 #How many steps of random actions before training begins.
max_epLength = 50 #The max allowed length of our episode.
load_model = False #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
saveframes = True
if saveframes == True:
if not os.path.exists('./DDDQNframes'):
os.makedirs('./DDDQNframes')
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
#trainable variables of the Main Network and the Target Network
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/annealing_steps
#create lists to contain total rewards and steps per episodes
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
sess.run(init)
if load_model == True:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
for i in range(num_episodes):
#Gather episode frames to eventually create gifs
episode_frames = []
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while j < max_epLength: #If the agent takes more than max_epLength moves to reach either of the blocks, end the trial.
j+=1
#Choose an action by greedily (with e chance of random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0] #Feed through main network to predict action
s1,r,d = env.step(a)
episode_frames.append(s1)
s1 = processState(s1)
total_steps += 1
episodeBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) #Save the experience to our episode buffer.
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop #epsilon annealing
if total_steps % (update_freq) == 0:
#Get a random batch of experiences from the episode buffer
trainBatch = myBuffer.sample(batch_size)
#Below we perform the Double-DQN update to the target Q-values
#First we calculate the best actions for state s1 in each experience of the batch using our Main Network
A = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,3])})
#Then we calculate the Qvalues for every selected experiences in the batch using our Target Network.
#So Q2 is a 2D-array containing a vector of Q-values for each randomly selected experiences in the batch
Q2 = sess.run(targetQN.Qout,feed_dict={targetQN.scalarInput:np.vstack(trainBatch[:,3])})
#the end multiplier goes to zero if the experience is an "end of game", so the target Q-value = the reward
end_multiplier = -(trainBatch[:,4] - 1)
#doubleQs are the Q-values estimated from Q2 at state s1 given action A = argmax(Q1(s1,:)).
#So doubleQ is a vector containing the estimated Q-values for the best action possible
#for each randomly selected experience in the batch.
doubleQ = Q2[range(batch_size),A]
# Target-Q = r + gamma*(doubleQ) for non "end-of-game" experiences. Otherwise Target-Q = Reward
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
#Update the Main network with our target Q-values.
_ = sess.run(mainQN.updateModel, \
#loss function to optimize
feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,0]),mainQN.targetQ:targetQ, mainQN.actions:trainBatch[:,1]})
#Update the Target network toward the main network but slowly (with a tau rate)
updateTarget(targetOps,sess)
rAll += r
s = s1
if d == True:
break
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.ckpt')
print("Saved Model")
time_per_step = 0.05
images = np.array(episode_frames)
make_gif(images,'./DDDQNframes/image'+str(i)+'.gif',
duration=len(images)*time_per_step,true_image=True,salience=False)
if len(rList) % 10 == 0:
print(total_steps,np.mean(rList[-10:]), e)
saver.save(sess,path+'/model-'+str(i)+'.ckpt')
print("Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%")
###Output
Saved Model
###Markdown
Checking network learning Mean reward over time
###Code
rMat = np.resize(np.array(rList),[len(rList)//100,100])
rMean = np.average(rMat,1)
plt.plot(rMean)
###Output
_____no_output_____
###Markdown
Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and BeyondIn this iPython notebook I implement a Deep Q-Network using both Double DQN and Dueling DQN. The agent learns to solve a navigation task in a basic grid world. To learn more, read here: https://medium.com/p/8438a3e2b8dfFor more reinforcement learning tutorials, see:https://github.com/awjuliani/DeepRL-Agents
###Code
from __future__ import division
import gym
import numpy as np
import random
import tensorflow as tf
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the game environment Feel free to adjust the size of the gridworld. Making it smaller provides an easier task for our DQN agent, while making the world larger increases the challenge.
###Code
from gridworld import gameEnv
env = gameEnv(partial=False,size=5)
###Output
_____no_output_____
###Markdown
Above is an example of a starting environment in our simple game. The agent controls the blue square, and can move up, down, left, or right. The goal is to move to the green square (for +1 reward) and avoid the red square (for -1 reward). The position of the three blocks is randomized every episode. Implementing the network itself
###Code
class Qnetwork():
def __init__(self,h_size):
#The network receives a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
self.scalarInput = tf.placeholder(shape=[None,21168],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,3])
self.conv1 = slim.conv2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = slim.conv2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = slim.conv2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = slim.conv2d( \
inputs=self.conv3,num_outputs=h_size,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
#We take the output from the final convolutional layer and split it into separate advantage and value streams.
self.streamAC,self.streamVC = tf.split(self.conv4,2,3)
self.streamA = slim.flatten(self.streamAC)
self.streamV = slim.flatten(self.streamVC)
self.AW = tf.Variable(tf.random_normal([h_size//2,env.actions]))
self.VW = tf.Variable(tf.random_normal([h_size//2,1]))
self.Advantage = tf.matmul(self.streamA,self.AW)
self.Value = tf.matmul(self.streamV,self.VW)
#Then combine them together to get our final Q-values.
self.Qout = self.Value + tf.subtract(self.Advantage,tf.reduce_mean(self.Advantage,axis=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.actions,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.multiply(self.Qout, self.actions_onehot), axis=1)
self.td_error = tf.square(self.targetQ - self.Q)
self.loss = tf.reduce_mean(self.td_error)
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
###Output
_____no_output_____
###Markdown
Experience Replay This class allows us to store experiences and sample them randomly to train the network.
###Code
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
###Output
_____no_output_____
###Markdown
This is a simple function to resize our game frames.
###Code
def processState(states):
return np.reshape(states,[21168])
###Output
_____no_output_____
###Markdown
These functions allow us to update the parameters of our target network with those of the primary network.
###Code
def updateTargetGraph(tfVars,tau):
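#Soft update: new_target_params = tau * main_params + (1 - tau) * old_target_params.
#The first half of tfVars holds the main network's variables, the second half the target network's.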
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars//2]):
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
###Output
_____no_output_____
###Markdown
Training the network Setting all the training parameters
###Code
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
annealing_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 10000 #How many episodes of game environment to train network with.
pre_train_steps = 10000 #How many steps of random actions before training begins.
max_epLength = 50 #The max allowed length of our episode.
load_model = False #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/annealing_steps
#create lists to contain total rewards and steps per episode
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
sess.run(init)
if load_model == True:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
updateTarget(targetOps,sess) #Set the target network to be equal to the primary network.
for i in range(num_episodes):
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while j < max_epLength: #If the agent takes more than max_epLength moves to reach either of the blocks, end the trial.
j+=1
#Choose an action by greedily (with e chance of random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0]
s1,r,d = env.step(a)
s1 = processState(s1)
total_steps += 1
episodeBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) #Save the experience to our episode buffer.
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop
if total_steps % (update_freq) == 0:
trainBatch = myBuffer.sample(batch_size) #Get a random batch of experiences.
#Below we perform the Double-DQN update to the target Q-values
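#Q1 holds the greedy actions at s1 chosen by the main network; Q2 holds the target network's Q-values at s1.
#doubleQ evaluates the main network's chosen actions with the target network, and
#end_multiplier zeroes out the bootstrapped term for terminal transitions.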
Q1 = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,3])})
Q2 = sess.run(targetQN.Qout,feed_dict={targetQN.scalarInput:np.vstack(trainBatch[:,3])})
end_multiplier = -(trainBatch[:,4] - 1)
doubleQ = Q2[range(batch_size),Q1]
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
#Update the network with our target values.
_ = sess.run(mainQN.updateModel, \
feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,0]),mainQN.targetQ:targetQ, mainQN.actions:trainBatch[:,1]})
updateTarget(targetOps,sess) #Set the target network to be equal to the primary network.
rAll += r
s = s1
if d == True:
break
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print("Saved Model")
if len(rList) % 10 == 0:
print(total_steps,np.mean(rList[-10:]), e)
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print("Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%")
###Output
Saved Model
(500, 2.2999999999999998, 1)
(1000, 1.8, 1)
(1500, 1.6000000000000001, 1)
(2000, 2.6000000000000001, 1)
(2500, 1.8999999999999999, 1)
(3000, 2.0, 1)
(3500, 1.3, 1)
(4000, 4.0999999999999996, 1)
(4500, 2.6000000000000001, 1)
(5000, 2.8999999999999999, 1)
(5500, 2.5, 1)
(6000, 1.3, 1)
(6500, 2.6000000000000001, 1)
(7000, 1.8, 1)
(7500, 1.8, 1)
(8000, 1.8999999999999999, 1)
(8500, 2.0, 1)
(9000, 2.1000000000000001, 1)
(9500, 1.8, 1)
(10000, 1.8999999999999999, 1)
(10500, 2.5, 0.9549999999999828)
(11000, 2.3999999999999999, 0.9099999999999655)
(11500, 0.69999999999999996, 0.8649999999999483)
###Markdown
Checking network learning Mean reward over time
###Code
rMat = np.resize(np.array(rList),[len(rList)//100,100])
rMean = np.average(rMat,1)
plt.plot(rMean)
###Output
_____no_output_____
###Markdown
Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and BeyondIn this iPython notebook I implement a Deep Q-Network using both Double DQN and Dueling DQN. The agent learns to solve a navigation task in a basic grid world. To learn more, read here: https://medium.com/p/8438a3e2b8dfFor more reinforcement learning tutorials, see:https://github.com/awjuliani/DeepRL-Agents
###Code
import gym
import numpy as np
import random
import tensorflow as tf
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the game environment Feel free to adjust the size of the gridworld. Making it smaller provides an easier task for our DQN agent, while making the world larger increases the challenge.
###Code
from gridworld import gameEnv
env = gameEnv(partial=False,size=5)
###Output
_____no_output_____
###Markdown
Above is an example of a starting environment in our simple game. The agent controls the blue square, and can move up, down, left, or right. The goal is to move to the green square (for +1 reward) and avoid the red square (for -1 reward). The position of the three blocks is randomized every episode. Implementing the network itself
###Code
class Qnetwork():
def __init__(self,h_size):
#The network receives a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
self.scalarInput = tf.placeholder(shape=[None,21168],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,3])
self.conv1 = slim.conv2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = slim.conv2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = slim.conv2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = slim.conv2d( \
inputs=self.conv3,num_outputs=512,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
#We take the output from the final convolutional layer and split it into separate advantage and value streams.
self.streamAC,self.streamVC = tf.split(self.conv4,2,3)
self.streamA = slim.flatten(self.streamAC)
self.streamV = slim.flatten(self.streamVC)
self.AW = tf.Variable(tf.random_normal([h_size//2,env.actions]))
self.VW = tf.Variable(tf.random_normal([h_size//2,1]))
self.Advantage = tf.matmul(self.streamA,self.AW)
self.Value = tf.matmul(self.streamV,self.VW)
#Then combine them together to get our final Q-values.
self.Qout = self.Value + tf.subtract(self.Advantage,tf.reduce_mean(self.Advantage,axis=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.actions,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.multiply(self.Qout, self.actions_onehot), axis=1)
self.td_error = tf.square(self.targetQ - self.Q)
self.loss = tf.reduce_mean(self.td_error)
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
###Output
_____no_output_____
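###Markdown
To build intuition for how the two streams above are combined, here is a small NumPy illustration of the same aggregation rule, Q = V + (A - mean(A)); the numbers are made up purely for demonstration.
###Code
# Toy illustration of the dueling aggregation used in Qnetwork:
# Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a))
V = np.array([[1.0]])                      # state value for a batch of one state
A = np.array([[0.5, -0.5, 2.0, 0.0]])      # advantages for the four actions
Q = V + (A - A.mean(axis=1, keepdims=True))
print(Q)            # expected: [[1.   0.   2.5  0.5]]
print(Q.argmax(1))  # the greedy action index (here: 2)
###Output
_____no_output_____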
###Markdown
Experience Replay This class allows us to store experiences and sample them randomly to train the network.
###Code
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
###Output
_____no_output_____
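###Markdown
As a quick illustration of how the buffer defined above is used (the training loop later does exactly this), here is a toy example that stores a few fake `(s, a, r, s1, d)` tuples and samples a small batch from them.
###Code
# Toy usage of experience_buffer with fake experience tuples.
toy_buffer = experience_buffer(buffer_size=100)
for step in range(8):
    fake_experience = np.reshape(np.array([step, 0, 1.0, step + 1, False]), [1, 5])
    toy_buffer.add(fake_experience)
batch = toy_buffer.sample(4)   # a [4, 5] array whose columns are (s, a, r, s1, d)
print(batch.shape)
###Output
_____no_output_____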
###Markdown
This is a simple function to flatten our game frames into a single vector.
###Code
def processState(states):
return np.reshape(states,[21168])
###Output
_____no_output_____
###Markdown
These functions allow us to update the parameters of our target network with those of the primary network.
###Code
def updateTargetGraph(tfVars,tau):
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars//2]):
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
###Output
_____no_output_____
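###Markdown
The assign ops built above implement a "soft" update: every call moves each target-network variable a small step of size tau toward the corresponding primary-network variable, rather than copying it outright. A minimal NumPy sketch of the same rule with made-up weight values:
###Code
# Soft target update rule used in updateTargetGraph:
#   target <- tau * primary + (1 - tau) * target
tau_demo = 0.001
primary_var = np.array([1.0, 2.0, 3.0])   # pretend primary-network weights
target_var = np.array([0.0, 0.0, 0.0])    # pretend target-network weights
target_var = tau_demo * primary_var + (1 - tau_demo) * target_var
print(target_var)   # the target weights move only slightly toward the primary weights
###Output
_____no_output_____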
###Markdown
Training the network Setting all the training parameters
###Code
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
anneling_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 10000 #How many episodes of game environment to train network with.
pre_train_steps = 10000 #How many steps of random actions before training begins.
max_epLength = 50 #The max allowed length of our episode.
load_model = False #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/anneling_steps
#create lists to contain total rewards and steps per episode
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
sess.run(init)
if load_model == True:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
updateTarget(targetOps,sess) #Set the target network to be equal to the primary network.
for i in range(num_episodes):
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while j < max_epLength: #If the agent takes longer than 50 moves to reach either of the blocks, end the trial.
j+=1
#Choose an action greedily (with probability e of taking a random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0]
s1,r,d = env.step(a)
s1 = processState(s1)
total_steps += 1
episodeBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) #Save the experience to our episode buffer.
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop
if total_steps % (update_freq) == 0:
trainBatch = myBuffer.sample(batch_size) #Get a random batch of experiences.
#Below we perform the Double-DQN update to the target Q-values
Q1 = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,3])})
Q2 = sess.run(targetQN.Qout,feed_dict={targetQN.scalarInput:np.vstack(trainBatch[:,3])})
end_multiplier = -(trainBatch[:,4] - 1)
doubleQ = Q2[range(batch_size),Q1]
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
#Update the network with our target values.
_ = sess.run(mainQN.updateModel, \
feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,0]),mainQN.targetQ:targetQ, mainQN.actions:trainBatch[:,1]})
updateTarget(targetOps,sess) #Update the target network toward the primary network.
rAll += r
s = s1
if d == True:
break
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print "Saved Model"
if len(rList) % 10 == 0:
print(total_steps,np.mean(rList[-10:]), e)
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print "Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%"
###Output
_____no_output_____
###Markdown
Checking network learning Mean reward over time
###Code
rMat = np.resize(np.array(rList),[len(rList)//100,100])
rMean = np.average(rMat,1)
plt.plot(rMean)
###Output
_____no_output_____
###Markdown
Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and BeyondIn this iPython notebook I implement a Deep Q-Network using both Double DQN and Dueling DQN. The agent learns to solve a navigation task in a basic grid world. To learn more, read here: https://medium.com/p/8438a3e2b8dfFor more reinforcement learning tutorials, see:https://github.com/awjuliani/DeepRL-Agents
###Code
import gym
import numpy as np
import random
import tensorflow as tf
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the game environment Feel free to adjust the size of the gridworld. Making it smaller provides an easier task for our DQN agent, while making the world larger increases the challenge.
###Code
from gridworld import gameEnv
env = gameEnv(partial=False,size=5)
###Output
_____no_output_____
###Markdown
Above is an example of a starting environment in our simple game. The agent controls the blue square, and can move up, down, left, or right. The goal is to move to the green square (for +1 reward) and avoid the red square (for -1 reward). The position of the three blocks is randomized every episode. Implementing the network itself
###Code
class Qnetwork():
def __init__(self,h_size):
#The network receives a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
self.scalarInput = tf.placeholder(shape=[None,21168],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,3])
self.conv1 = tf.contrib.layers.convolution2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = tf.contrib.layers.convolution2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = tf.contrib.layers.convolution2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = tf.contrib.layers.convolution2d( \
inputs=self.conv3,num_outputs=512,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
#We take the output from the final convolutional layer and split it into separate advantage and value streams.
self.streamAC,self.streamVC = tf.split(3,2,self.conv4)
self.streamA = tf.contrib.layers.flatten(self.streamAC)
self.streamV = tf.contrib.layers.flatten(self.streamVC)
self.AW = tf.Variable(tf.random_normal([h_size/2,env.actions]))
self.VW = tf.Variable(tf.random_normal([h_size/2,1]))
self.Advantage = tf.matmul(self.streamA,self.AW)
self.Value = tf.matmul(self.streamV,self.VW)
#Then combine them together to get our final Q-values.
self.Qout = self.Value + tf.sub(self.Advantage,tf.reduce_mean(self.Advantage,reduction_indices=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.actions,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.mul(self.Qout, self.actions_onehot), reduction_indices=1)
self.td_error = tf.square(self.targetQ - self.Q)
self.loss = tf.reduce_mean(self.td_error)
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
###Output
_____no_output_____
###Markdown
Experience Replay This class allows us to store experiences and sample them randomly to train the network.
###Code
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
###Output
_____no_output_____
###Markdown
This is a simple function to flatten our game frames into a single vector.
###Code
def processState(states):
return np.reshape(states,[21168])
###Output
_____no_output_____
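###Markdown
The constant 21168 that appears throughout this notebook is simply the size of one flattened 84x84 RGB frame. The quick check below confirms it using the function defined above.
###Code
# 84 pixels x 84 pixels x 3 color channels = 21168 values per flattened frame
print(84 * 84 * 3)                       # expected: 21168
dummy_frame = np.zeros([84, 84, 3])
print(processState(dummy_frame).shape)   # expected: (21168,)
###Output
_____no_output_____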
###Markdown
These functions allow us to update the parameters of our target network with those of the primary network.
###Code
def updateTargetGraph(tfVars,tau):
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars/2]):
op_holder.append(tfVars[idx+total_vars/2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars/2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
###Output
_____no_output_____
###Markdown
Training the network Setting all the training parameters
###Code
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
anneling_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 10000 #How many episodes of game environment to train network with.
pre_train_steps = 10000 #How many steps of random actions before training begins.
max_epLength = 50 #The max allowed length of our episode.
load_model = False #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.initialize_all_variables()
saver = tf.train.Saver()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/anneling_steps
#create lists to contain total rewards and steps per episode
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
if load_model == True:
print 'Loading Model...'
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
sess.run(init)
updateTarget(targetOps,sess) #Set the target network to be equal to the primary network.
for i in range(num_episodes):
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while j < max_epLength: #If the agent takes longer than 50 moves to reach either of the blocks, end the trial.
j+=1
#Choose an action greedily (with probability e of taking a random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0]
s1,r,d = env.step(a)
s1 = processState(s1)
total_steps += 1
episodeBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) #Save the experience to our episode buffer.
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop
if total_steps % (update_freq) == 0:
trainBatch = myBuffer.sample(batch_size) #Get a random batch of experiences.
#Below we perform the Double-DQN update to the target Q-values
Q1 = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,3])})
Q2 = sess.run(targetQN.Qout,feed_dict={targetQN.scalarInput:np.vstack(trainBatch[:,3])})
end_multiplier = -(trainBatch[:,4] - 1)
doubleQ = Q2[range(batch_size),Q1]
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
#Update the network with our target values.
_ = sess.run(mainQN.updateModel, \
feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,0]),mainQN.targetQ:targetQ, mainQN.actions:trainBatch[:,1]})
updateTarget(targetOps,sess) #Update the target network toward the primary network.
rAll += r
s = s1
if d == True:
break
#Add this episode's experiences to the main buffer.
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print "Saved Model"
if len(rList) % 10 == 0:
print total_steps,np.mean(rList[-10:]), e
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print "Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%"
###Output
_____no_output_____
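###Markdown
To make the Double-DQN target computed inside the training loop easier to follow, here is the same arithmetic on a tiny hand-made batch of two transitions, using NumPy only. All numbers are toy values chosen for illustration.
###Code
# Toy walk-through of the Double-DQN target used above:
#   target = r + y * Q_target(s', argmax_a Q_main(s', a)) * end_multiplier
y_demo = 0.99
r = np.array([0.0, 1.0])              # rewards for a batch of two transitions
d = np.array([0.0, 1.0])              # done flags (1.0 means the episode ended)
Q1 = np.array([2, 0])                 # actions picked by the main network at s'
Q2 = np.array([[0.1, 0.2, 0.3, 0.4],  # target-network Q-values at s'
               [1.0, 0.5, 0.0, 0.2]])
end_multiplier = -(d - 1)             # 1 for non-terminal, 0 for terminal transitions
doubleQ = Q2[range(2), Q1]            # evaluate the main network's choice with the target network
targetQ = r + y_demo * doubleQ * end_multiplier
print(targetQ)   # r + y*doubleQ for the first transition, just r for the terminal one
###Output
_____no_output_____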
###Markdown
Checking network learning Mean reward over time
###Code
rMat = np.resize(np.array(rList),[len(rList)/100,100])
rMean = np.average(rMat,1)
plt.plot(rMean)
###Output
_____no_output_____
###Markdown
Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and BeyondIn this iPython notebook I implement a Deep Q-Network using both Double DQN and Dueling DQN. The agent learns to solve a navigation task in a basic grid world. To learn more, read here: https://medium.com/p/8438a3e2b8dfFor more reinforcement learning tutorials, see:https://github.com/awjuliani/DeepRL-Agents
###Code
from __future__ import division
import gym
import numpy as np
import random
import tensorflow as tf
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the game environment Feel free to adjust the size of the gridworld. Making it smaller provides an easier task for our DQN agent, while making the world larger increases the challenge.
###Code
from gridworld import gameEnv
env = gameEnv(partial=False,size=5)
###Output
_____no_output_____
###Markdown
Above is an example of a starting environment in our simple game. The agent controls the blue square, and can move up, down, left, or right. The goal is to move to the green square (for +1 reward) and avoid the red square (for -1 reward). The position of the three blocks is randomized every episode. Implementing the network itself
###Code
class Qnetwork():
def __init__(self,h_size):
#The network receives a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
self.scalarInput = tf.placeholder(shape=[None,21168],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,3])
self.conv1 = slim.conv2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = slim.conv2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = slim.conv2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = slim.conv2d( \
inputs=self.conv3,num_outputs=h_size,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
#We take the output from the final convolutional layer and split it into separate advantage and value streams.
self.streamAC,self.streamVC = tf.split(self.conv4,2,3)
self.streamA = slim.flatten(self.streamAC)
self.streamV = slim.flatten(self.streamVC)
xavier_init = tf.contrib.layers.xavier_initializer()
self.AW = tf.Variable(xavier_init([h_size//2,env.actions]))
self.VW = tf.Variable(xavier_init([h_size//2,1]))
self.Advantage = tf.matmul(self.streamA,self.AW)
self.Value = tf.matmul(self.streamV,self.VW)
#Then combine them together to get our final Q-values.
self.Qout = self.Value + tf.subtract(self.Advantage,tf.reduce_mean(self.Advantage,axis=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.actions,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.multiply(self.Qout, self.actions_onehot), axis=1)
self.td_error = tf.square(self.targetQ - self.Q)
self.loss = tf.reduce_mean(self.td_error)
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
###Output
_____no_output_____
###Markdown
Experience Replay This class allows us to store experiences and sample them randomly to train the network.
###Code
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
###Output
_____no_output_____
###Markdown
This is a simple function to flatten our game frames into a single vector.
###Code
def processState(states):
return np.reshape(states,[21168])
###Output
_____no_output_____
###Markdown
These functions allow us to update the parameters of our target network with those of the primary network.
###Code
def updateTargetGraph(tfVars,tau):
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars//2]):
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
###Output
_____no_output_____
###Markdown
Training the network Setting all the training parameters
###Code
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
annealing_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 10000 #How many episodes of game environment to train network with.
pre_train_steps = 10000 #How many steps of random actions before training begins.
max_epLength = 50 #The max allowed length of our episode.
load_model = False #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/annealing_steps
#create lists to contain total rewards and steps per episode
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
sess.run(init)
if load_model == True:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
for i in range(num_episodes):
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while j < max_epLength: #If the agent takes longer than 50 moves to reach either of the blocks, end the trial.
j+=1
#Choose an action greedily (with probability e of taking a random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0]
s1,r,d = env.step(a)
s1 = processState(s1)
total_steps += 1
episodeBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) #Save the experience to our episode buffer.
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop
if total_steps % (update_freq) == 0:
trainBatch = myBuffer.sample(batch_size) #Get a random batch of experiences.
#Below we perform the Double-DQN update to the target Q-values
Q1 = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,3])})
Q2 = sess.run(targetQN.Qout,feed_dict={targetQN.scalarInput:np.vstack(trainBatch[:,3])})
end_multiplier = -(trainBatch[:,4] - 1)
doubleQ = Q2[range(batch_size),Q1]
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
#Update the network with our target values.
_ = sess.run(mainQN.updateModel, \
feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,0]),mainQN.targetQ:targetQ, mainQN.actions:trainBatch[:,1]})
updateTarget(targetOps,sess) #Update the target network toward the primary network.
rAll += r
s = s1
if d == True:
break
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.ckpt')
print("Saved Model")
if len(rList) % 10 == 0:
print(total_steps,np.mean(rList[-10:]), e)
saver.save(sess,path+'/model-'+str(i)+'.ckpt')
print("Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%")
###Output
Saved Model
(500, 2.2999999999999998, 1)
(1000, 1.8, 1)
(1500, 1.6000000000000001, 1)
(2000, 2.6000000000000001, 1)
(2500, 1.8999999999999999, 1)
(3000, 2.0, 1)
(3500, 1.3, 1)
(4000, 4.0999999999999996, 1)
(4500, 2.6000000000000001, 1)
(5000, 2.8999999999999999, 1)
(5500, 2.5, 1)
(6000, 1.3, 1)
(6500, 2.6000000000000001, 1)
(7000, 1.8, 1)
(7500, 1.8, 1)
(8000, 1.8999999999999999, 1)
(8500, 2.0, 1)
(9000, 2.1000000000000001, 1)
(9500, 1.8, 1)
(10000, 1.8999999999999999, 1)
(10500, 2.5, 0.9549999999999828)
(11000, 2.3999999999999999, 0.9099999999999655)
(11500, 0.69999999999999996, 0.8649999999999483)
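###Markdown
The exploration schedule used above is a simple linear anneal: e stays at startE until pre_train_steps random steps have been taken, then drops by stepDrop per environment step until it reaches endE. The short sketch below re-computes and plots that schedule from the parameters defined earlier, purely for illustration.
###Code
# Re-compute the linear epsilon schedule used during training (illustration only).
e_demo = startE
schedule = []
for step in range(30000):
    if step > pre_train_steps and e_demo > endE:
        e_demo -= stepDrop
    schedule.append(e_demo)
plt.plot(schedule)
plt.xlabel('environment step')
plt.ylabel('epsilon (chance of a random action)')
###Output
_____no_output_____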
###Markdown
Checking network learning Mean reward over time
###Code
rMat = np.resize(np.array(rList),[len(rList)//100,100])
rMean = np.average(rMat,1)
plt.plot(rMean)
###Output
_____no_output_____
###Markdown
Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and BeyondIn this iPython notebook I implement a Deep Q-Network using both Double DQN and Dueling DQN. The agent learns to solve a navigation task in a basic grid world. To learn more, read here: https://medium.com/p/8438a3e2b8dfFor more reinforcement learning tutorials, see:https://github.com/awjuliani/DeepRL-Agents
###Code
from __future__ import division
import gym
import numpy as np
import random
import tensorflow as tf
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the game environment Feel free to adjust the size of the gridworld. Making it smaller provides an easier task for our DQN agent, while making the world larger increases the challenge.
###Code
from gridworld import gameEnv
env = gameEnv(partial=False,size=5)
###Output
_____no_output_____
###Markdown
Above is an example of a starting environment in our simple game. The agent controls the blue square, and can move up, down, left, or right. The goal is to move to the green square (for +1 reward) and avoid the red square (for -1 reward). The position of the three blocks is randomized every episode. Implementing the network itself
###Code
class Qnetwork():
def __init__(self,h_size):
#The network receives a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
self.scalarInput = tf.placeholder(shape=[None,21168],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,3])
self.conv1 = slim.conv2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = slim.conv2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = slim.conv2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = slim.conv2d( \
inputs=self.conv3,num_outputs=h_size,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
#We take the output from the final convolutional layer and split it into separate advantage and value streams.
self.streamAC,self.streamVC = tf.split(self.conv4,2,3)
self.streamA = slim.flatten(self.streamAC)
self.streamV = slim.flatten(self.streamVC)
xavier_init = tf.contrib.layers.xavier_initializer()
self.AW = tf.Variable(xavier_init([h_size//2,env.actions]))
self.VW = tf.Variable(xavier_init([h_size//2,1]))
self.Advantage = tf.matmul(self.streamA,self.AW)
self.Value = tf.matmul(self.streamV,self.VW)
#Then combine them together to get our final Q-values.
self.Qout = self.Value + tf.subtract(self.Advantage,tf.reduce_mean(self.Advantage,axis=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.actions,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.multiply(self.Qout, self.actions_onehot), axis=1)
self.td_error = tf.square(self.targetQ - self.Q)
self.loss = tf.reduce_mean(self.td_error)
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
###Output
_____no_output_____
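###Markdown
One detail of the loss above worth unpacking: multiplying Qout by a one-hot encoding of the actions and summing over the action axis simply selects Q(s, a) for the action that was actually taken in each transition. The NumPy snippet below shows that selection with toy numbers.
###Code
# Selecting Q(s, a) for the taken actions via a one-hot mask (as in the loss above).
Qout_demo = np.array([[0.1, 0.5, 0.2, 0.0],   # Q-values for a batch of two states
                      [1.0, 0.3, 0.7, 0.4]])
actions_demo = np.array([1, 2])               # actions actually taken
actions_onehot_demo = np.eye(4)[actions_demo] # one-hot encoding, shape [2, 4]
Q_taken = np.sum(Qout_demo * actions_onehot_demo, axis=1)
print(Q_taken)   # expected: [0.5  0.7]
###Output
_____no_output_____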
###Markdown
Experience Replay This class allows us to store experiences and sample them randomly to train the network.
###Code
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
###Output
_____no_output_____
###Markdown
This is a simple function to flatten our game frames into a single vector.
###Code
def processState(states):
return np.reshape(states,[21168])
###Output
_____no_output_____
###Markdown
These functions allow us to update the parameters of our target network with those of the primary network.
###Code
def updateTargetGraph(tfVars,tau):
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars//2]):
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
###Output
_____no_output_____
###Markdown
Training the network Setting all the training parameters
###Code
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
anneling_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 10000 #How many episodes of game environment to train network with.
pre_train_steps = 10000 #How many steps of random actions before training begins.
max_epLength = 50 #The max allowed length of our episode.
load_model = False #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/anneling_steps
#create lists to contain total rewards and steps per episode
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
sess.run(init)
if load_model == True:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
updateTarget(targetOps,sess) #Set the target network to be equal to the primary network.
for i in range(num_episodes):
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while j < max_epLength: #If the agent takes longer than 50 moves to reach either of the blocks, end the trial.
j+=1
#Choose an action greedily (with probability e of taking a random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0]
s1,r,d = env.step(a)
s1 = processState(s1)
total_steps += 1
episodeBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) #Save the experience to our episode buffer.
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop
if total_steps % (update_freq) == 0:
trainBatch = myBuffer.sample(batch_size) #Get a random batch of experiences.
#Below we perform the Double-DQN update to the target Q-values
Q1 = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,3])})
Q2 = sess.run(targetQN.Qout,feed_dict={targetQN.scalarInput:np.vstack(trainBatch[:,3])})
end_multiplier = -(trainBatch[:,4] - 1)
doubleQ = Q2[range(batch_size),Q1]
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
#Update the network with our target values.
_ = sess.run(mainQN.updateModel, \
feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,0]),mainQN.targetQ:targetQ, mainQN.actions:trainBatch[:,1]})
updateTarget(targetOps,sess) #Update the target network toward the primary network.
rAll += r
s = s1
if d == True:
break
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print("Saved Model")
if len(rList) % 10 == 0:
print(total_steps,np.mean(rList[-10:]), e)
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print("Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%")
###Output
Saved Model
(500, 2.2999999999999998, 1)
(1000, 1.8, 1)
(1500, 1.6000000000000001, 1)
(2000, 2.6000000000000001, 1)
(2500, 1.8999999999999999, 1)
(3000, 2.0, 1)
(3500, 1.3, 1)
(4000, 4.0999999999999996, 1)
(4500, 2.6000000000000001, 1)
(5000, 2.8999999999999999, 1)
(5500, 2.5, 1)
(6000, 1.3, 1)
(6500, 2.6000000000000001, 1)
(7000, 1.8, 1)
(7500, 1.8, 1)
(8000, 1.8999999999999999, 1)
(8500, 2.0, 1)
(9000, 2.1000000000000001, 1)
(9500, 1.8, 1)
(10000, 1.8999999999999999, 1)
(10500, 2.5, 0.9549999999999828)
(11000, 2.3999999999999999, 0.9099999999999655)
(11500, 0.69999999999999996, 0.8649999999999483)
###Markdown
Checking network learning Mean reward over time
###Code
rMat = np.resize(np.array(rList),[len(rList)//100,100])
rMean = np.average(rMat,1)
plt.plot(rMean)
###Output
_____no_output_____
###Markdown
Simple Reinforcement Learning with Tensorflow Part 4: Deep Q-Networks and BeyondIn this iPython notebook I implement a Deep Q-Network using both Double DQN and Dueling DQN. The agent learns to solve a navigation task in a basic grid world. To learn more, read here: https://medium.com/p/8438a3e2b8dfFor more reinforcement learning tutorials, see:https://github.com/awjuliani/DeepRL-Agents
###Code
from __future__ import division
import gym
import numpy as np
import random
import tensorflow as tf
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the game environment Feel free to adjust the size of the gridworld. Making it smaller provides an easier task for our DQN agent, while making the world larger increases the challenge.
###Code
from gridworld import gameEnv
env = gameEnv(partial=False,size=5)
###Output
_____no_output_____
###Markdown
Above is an example of a starting environment in our simple game. The agent controls the blue square, and can move up, down, left, or right. The goal is to move to the green square (for +1 reward) and avoid the red square (for -1 reward). The position of the three blocks is randomized every episode. Implementing the network itself
###Code
class Qnetwork():
def __init__(self,h_size):
#The network receives a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
self.scalarInput = tf.placeholder(shape=[None,21168],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,3])
self.conv1 = slim.conv2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = slim.conv2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = slim.conv2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = slim.conv2d( \
inputs=self.conv3,num_outputs=h_size,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
#We take the output from the final convolutional layer and split it into separate advantage and value streams.
self.streamAC,self.streamVC = tf.split(self.conv4,2,3)
self.streamA = slim.flatten(self.streamAC)
self.streamV = slim.flatten(self.streamVC)
xavier_init = tf.contrib.layers.xavier_initializer()
self.AW = tf.Variable(xavier_init([h_size//2,env.actions]))
self.VW = tf.Variable(xavier_init([h_size//2,1]))
self.Advantage = tf.matmul(self.streamA,self.AW)
self.Value = tf.matmul(self.streamV,self.VW)
#Then combine them together to get our final Q-values.
self.Qout = self.Value + tf.subtract(self.Advantage,tf.reduce_mean(self.Advantage,axis=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.actions,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.multiply(self.Qout, self.actions_onehot), axis=1)
self.td_error = tf.square(self.targetQ - self.Q)
self.loss = tf.reduce_mean(self.td_error)
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
###Output
_____no_output_____
###Markdown
Experience Replay This class allows us to store experiences and sample them randomly to train the network.
###Code
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
###Output
_____no_output_____
###Markdown
This is a simple function to flatten our game frames into a single vector.
###Code
def processState(states):
return np.reshape(states,[21168])
###Output
_____no_output_____
###Markdown
These functions allow us to update the parameters of our target network with those of the primary network.
###Code
def updateTargetGraph(tfVars,tau):
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars//2]):
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
###Output
_____no_output_____
###Markdown
Training the network Setting all the training parameters
###Code
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
annealing_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 10000 #How many episodes of game environment to train network with.
pre_train_steps = 10000 #How many steps of random actions before training begins.
max_epLength = 50 #The max allowed length of our episode.
load_model = False #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/annealing_steps
#create lists to contain total rewards and steps per episode
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
sess.run(init)
if load_model == True:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
for i in range(num_episodes):
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while j < max_epLength: #If the agent takes longer than 50 moves to reach either of the blocks, end the trial.
j+=1
#Choose an action greedily (with probability e of taking a random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0]
s1,r,d = env.step(a)
s1 = processState(s1)
total_steps += 1
episodeBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) #Save the experience to our episode buffer.
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop
if total_steps % (update_freq) == 0:
trainBatch = myBuffer.sample(batch_size) #Get a random batch of experiences.
#Below we perform the Double-DQN update to the target Q-values
Q1 = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,3])})
Q2 = sess.run(targetQN.Qout,feed_dict={targetQN.scalarInput:np.vstack(trainBatch[:,3])})
end_multiplier = -(trainBatch[:,4] - 1)
doubleQ = Q2[range(batch_size),Q1]
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
#Update the network with our target values.
_ = sess.run(mainQN.updateModel, \
feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,0]),mainQN.targetQ:targetQ, mainQN.actions:trainBatch[:,1]})
updateTarget(targetOps,sess) #Update the target network toward the primary network.
rAll += r
s = s1
if d == True:
break
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.ckpt')
print("Saved Model")
if len(rList) % 10 == 0:
print(total_steps,np.mean(rList[-10:]), e)
saver.save(sess,path+'/model-'+str(i)+'.ckpt')
print("Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%")
###Output
Saved Model
(500, 2.2999999999999998, 1)
(1000, 1.8, 1)
(1500, 1.6000000000000001, 1)
(2000, 2.6000000000000001, 1)
(2500, 1.8999999999999999, 1)
(3000, 2.0, 1)
(3500, 1.3, 1)
(4000, 4.0999999999999996, 1)
(4500, 2.6000000000000001, 1)
(5000, 2.8999999999999999, 1)
(5500, 2.5, 1)
(6000, 1.3, 1)
(6500, 2.6000000000000001, 1)
(7000, 1.8, 1)
(7500, 1.8, 1)
(8000, 1.8999999999999999, 1)
(8500, 2.0, 1)
(9000, 2.1000000000000001, 1)
(9500, 1.8, 1)
(10000, 1.8999999999999999, 1)
(10500, 2.5, 0.9549999999999828)
(11000, 2.3999999999999999, 0.9099999999999655)
(11500, 0.69999999999999996, 0.8649999999999483)
###Markdown
Checking network learning Mean reward over time
###Code
rMat = np.resize(np.array(rList),[len(rList)//100,100])
rMean = np.average(rMat,1)
plt.plot(rMean)
###Output
_____no_output_____ |
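###Markdown
The plot above is produced by grouping the per-episode rewards into consecutive chunks of 100 episodes and averaging each chunk; np.resize truncates the list to a multiple of 100 before the reshape, so any leftover episodes at the end are dropped. The same computation on a made-up reward history, for illustration:
###Code
# The binning used above, applied to a fake reward history.
fake_rewards = np.arange(250) * 1.0                 # pretend per-episode rewards
binned = np.resize(fake_rewards, [len(fake_rewards) // 100, 100])
print(binned.shape)                                 # expected: (2, 100) - the last 50 episodes are dropped
print(np.average(binned, 1))                        # mean reward of each 100-episode chunk
###Output
_____no_output_____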
5. Unsupervised Learning.ipynb | ###Markdown
Unsupervised Learning: k-Means ClusteringWe hope you enjoy the tutorial! Before we start diving into the material, let's make sure that you have your environment up and running. Simply run the code below -- if things break, you can install the dependencies using pip or conda. 0. Setup
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
from sklearn.cluster import KMeans
from sklearn.utils import shuffle
from time import time
###Output
_____no_output_____
###Markdown
1. What's Unsupervised Learning?The basic notion behind machine learning is that you're given a dataset with an interesting backstory, and it's up to you to figure out what that story is. Maybe you want to predict the next big thing that will break the stock market, or understand the relationship between students' stress levels and pounds of chocolate consumption. In both cases, you're looking at the interactions of several different things and uncovering the hidden patterns that allow you to draw insightful conclusions from this data.We can break down such problems into two categories: supervised and unsupervised.- Supervised learning is when your explanatory variables X come with an associated reponse variable Y. You get a sneak peak at the true "labels": for example, for all the participants in a clinical trial, you're told whether their treatments were successful or not.- In *unsupervised learning*, sorry -- no cheating. You get a bunch of X's without the Y's. There's some ground truth we don't have access to. So we have to do our best to extract some meaning out of the data's underlying structure and check to make sure that our methods are robust. One example of an unsupervised learning algorithm is clustering, which we'll practice today! 2. ClusteringClustering is what it sounds like: grouping “similar” data points into *clusters* or *subgroups*, while keeping each group as distinct as possible. The data points belonging to different clusters should be different from each other, too. Often, we'll come across datasets that exhibit this kind of grouped structure. **k-Means** is one of many ways to perform clustering on your data.But wait -- these are vague concepts. What does it mean for two data points to be "similar?" And are we actually moving the points around physically when we group them together? These are all good questions, so let’s walk through some vocab before we walk through the steps of the k-means clustering algorithm:> 2a. SimilarityIntuitively, it makes sense that similar things should be close to each other, while different things should be far apart. To formalize the notion of **similarity**, we choose a **distance metric** (see below) that quantifies how "close" two points are to each other. The most commonly used distance metric is Euclidean distance (think: distance formula from middle school), and that's what we'll use in our example today. We'll introduce other distance metrics towards the end of the notebook. > 2b. Cluster centroidThe **cluster centroid** is the most representative feature of an entire cluster. We say "feature" instead of "point" because the centroid may not necessarily be an existing data point of the cluster. To find a cluster's centroid, average the values of all the points belonging to that cluster. Thus, the cluster centroid gives nice summary information about all the points in its cluster. Think of it as the cluster's (democratic) president. 3. The k-means Algorithm---The k-means algorithm has a simple objective: given a set of data points, it tries to separate them into *k* distinct clusters. It uses the same principle that we mentioned earlier: keep the data points within each cluster as similar as possible. You have to provide the value of **k** to the algorithm, so you should have a general idea of how many clusters you expect to see in the data. Let’s start by tossing all of our data points onto the screen to see what the data actually looks like. 
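The assignment and update steps described above can be written in just a few lines of NumPy. Here is a minimal sketch of a single k-means iteration; the 2-D data points and starting centroids are made up purely for illustration.
###Code
# A minimal sketch of one k-means iteration on made-up 2-D data (k = 2).
points = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 9.5], [0.5, 1.5]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])   # two starting centroids
# Assignment step: each point joins the cluster of its closest centroid (Euclidean distance).
distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
labels = distances.argmin(axis=1)
# Update step: each centroid becomes the mean of the points assigned to it.
centroids = np.array([points[labels == k].mean(axis=0) for k in range(2)])
print(labels)      # expected: [0 0 1 1 0]
print(centroids)   # the new "representative points" of the two clusters
###Output
_____no_output_____
###Markdown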
This kind of exploratory data visualization can provide a rough guide as to how to start clustering our data. Remember that clustering is an *unsupervised learning method*, so we’re never going to have a perfect answer for our final clusters. But let’s do our best to get results that are **reasonable** and **replicable**.- Replicable: someone else can arrive at our results from a different starting point- Reasonable: our results show some correlation with what we expect to encounter in real lifeLet's take a look at a toy example:> (a) Our data seems to have some sort of underlying structure. Let’s use this information to initialize our k-means algorithm with k = 3 clusters. For now we assume that we know how many clusters we want, but we’ll go into more detail later about relaxing this assumption and how to choose “the best possible k”. > (b) We want 3 clusters, so first we randomly “throw down” three random cluster centroids. Every iteration of k-means will eventually "correct" them towards the right clusters. Since we are heading to a correct answer anyway, we don't care about where we start. > These centroids are our “representative points” -- they contain all the information that we need about other points in the same cluster. It makes sense to think about centroids as being the physical center of each cluster. So let’s pretend that our randomly initialized cluster centers are the actual centroids, and group our points accordingly. Here we use our distance metric of choice -- Euclidean distance. For every data point, we compute its distance to each centroid, and assign the data point whichever centroid is closest (smallest distance).> (c)Now we have something that’s starting to resemble three distinct clusters! But we need to update the centroids that we started with -- we’ve just added in a bunch of new data points to each cluster, so we need our “representative point,” the centroid, to reflect that. > (d)-(e)-(f)-(g)Let's average all the values within each cluster and call that our new centroid. These new centroids are further "within" the data than the older centroids. > Notice that we’re not quite done yet -- we have some straggling points which don’t seem to belong to any cluster. Let’s run another iteration of k-means and see if that separates out the clusters better. This means that we’re computing the distances from each data point to every centroid, and re-assign those that are closer to centroids of another cluster.> (h)We keep computing the centroids for every iteration using the steps (c) and (d). After a few iterations, maybe you notice that the clusters don’t change after a certain point. This actually turns out to be a good criterion for stopping the cluster iterations!> There’s no need to keep running the algorithm if our answer doesn't change after a certain point in time. That's just wasting time and computational resources. We can formalize this idea of a “stopping criterion.” We define a small value, call it “epsilon”, and terminate the algorithm when the change in cluster centroids is less than epsilon. This way, epsilon serves as a measure of how much error we can tolerate. 4. Image Segmentation ExampleLet's move on to a real-life example. You can access images in the `datasets/kmeans/imgs` folder. 
- We know that images often have a few dominant colors -- for example, the bulk of an image is often made up of the foreground color and background color.- In this example, we'll write some code that uses `scikit-learn`'s k-means clustering implementation to find what these dominant colors may be. `scikit-learn`, or `sklearn` for short, is a package of built-in machine learning algorithms all coded up and ready to use. - Once we know what the most important colors are in an image, we can compress (or "quantize") the image by re-expressing the image using only the set of k colors that we get from the algorithm. Let's try it!
###Code
# let's list what images we have to work with
imgs = os.listdir('datasets/kmeans/imgs/')
print(imgs)
###Output
['leo_bb.png', 'mario.png']
###Markdown
Let's use an image of Leo's beautiful, brooding face for our code example.
###Code
img_path = os.path.join('datasets/kmeans/imgs/', imgs[0])
print('Using image 0: path {}'.format(img_path))
img = mpimg.imread(img_path)
# normalize the image values
img = img * 1.0 / img.max()
imgplot = plt.imshow(img)
###Output
Using image 0: path datasets/kmeans/imgs/leo_bb.png
###Markdown
An image is represented here as a three-dimensional array of floating-point numbers, which can take values from 0 to 1. If we look at ``img.shape``, we'll find that the first two dimensions are x and y, and then the last dimension is the color channel. There are three color channels (one each for red, green, and blue). A set of three channel values at a single (x, y)-coordinate is referred to as a "pixel".We're going to use a small random sample of 10% of the image to find our clusters.
###Code
print('Image shape: {}'.format(img.shape))
width, height, num_channels = img.shape
num_pixels = width * height
num_sample_pixels = num_pixels // 10
print('Sampling {} out of {} pixels'.format(num_sample_pixels, num_pixels))
###Output
Image shape: (462, 621, 3)
Sampling 28690 out of 286902 pixels
###Markdown
Next we need to reshape the image data into a single long array of pixels (instead of a two-dimensional array of pixels) in order to take our sample.
###Code
img_reshaped = np.reshape(img, (num_pixels, num_channels))
img_sample = shuffle(img_reshaped, random_state=0)[:num_sample_pixels]  # keep only the 10% sample described above
###Output
_____no_output_____
###Markdown
Now that we have some data, let's construct our k-means object and feed it some data. It will find the best k clusters, as determined by a distance function.
###Code
# We're going to try to find the 20 colors which best represent the colors in the picture.
K = 20
t0 = time()
kmeans = KMeans(n_clusters=K, random_state=0)
# actually running kmeans is super simple!
kmeans.fit(img_sample)
print("K-means clustering complete. Elapsed time: {} seconds".format(time() - t0))
###Output
_____no_output_____
###Markdown
The center of each cluster represents a color that is significant in the image. We can grab the values of these colors from `kmeans.cluster_centers_`. We can also call `kmeans.predict()` to match each pixel in the image to the closest color, which will let us know the size of each cluster (and also serve as a way to quantize the image)
###Code
# There are K cluster centers, each of which is an RGB color
kmeans.cluster_centers_
t0 = time()
labels = kmeans.predict(img_reshaped)
print("k-means labeling complete. Elapsed time: {} seconds".format(time() - t0))
# construct a histogram of the points in each cluster
n, bins, patches = plt.hist(labels, bins=range(K+1))
# a bit of magic to color the bins the right color
for p, color in zip(patches, kmeans.cluster_centers_):
plt.setp(p, 'facecolor', color)
###Output
_____no_output_____
###Markdown
As you can tell from the above histogram, the most dominant color in the scene is the background color, followed by a large drop down to the foreground colors. This isn't that surprising, since visually we can see that the space is mostly filled with the background color -- that's why it's called the "background"!Now, let's redraw the scene using only the cluster centers! This can be used for image compression, since we only need to store the index into the list of cluster centers and the colors corresponding to each center, rather than the colors corresponding to each pixel in the original image.
###Code
quantized_img = np.zeros(img.shape)
for i in range(width):
for j in range(height):
# We need to do some math here to get the correct
# index position in the labels array
index = i * height + j
quantized_img[i][j] = kmeans.cluster_centers_[labels[index]]
quantized_imgplot = plt.imshow(quantized_img)
###Output
_____no_output_____ |
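###Markdown
The nested loop above is easy to read but slow in pure Python. Since `labels` already holds one cluster index per pixel, the same quantized image can be produced with a single NumPy indexing operation; the sketch below is an equivalent, faster version and checks that it matches the loop-based result.
###Code
# Equivalent, vectorized version of the quantization loop above.
quantized_img_fast = kmeans.cluster_centers_[labels].reshape(width, height, num_channels)
print(np.allclose(quantized_img_fast, quantized_img))   # expected: True
plt.imshow(quantized_img_fast)
###Output
_____no_output_____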
demo/restorer_basic_tutorial_zh-CN.ipynb | ###Markdown
MMEditing Basic TutorialWelcome to MMEditing! This is the official Colab tutorial of MMEditing. In this tutorial you will learn how to train and test a restorer using the APIs provided in MMEditing. This is a quick guide to training and testing existing models. If you want to develop your own model based on MMEditing and learn more about the code structure, please refer to our [comprehensive tutorial]().[](https://colab.research.google.com/github/open-mmlab/mmedit/blob/main/demo/restorer_basic_tutorial.ipynb) Installing MMEditingMMEditing can be installed in three steps:1. Install a compatible PyTorch version (you need to check your CUDA version with `nvcc -V`).2. Install the pre-compiled MMCV.3. Clone and install MMEditing.The steps are shown below:
###Code
# Check nvcc version
!nvcc -V
# Check GCC version (MMEditing needs gcc >= 5.0)
!gcc --version
# Install dependencies: (use cu110 because Colab has CUDA 11.0)
!pip install -U torch==1.7.0+cu110 torchvision==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
# Install mmcv-full so that we can use CUDA operators
!pip install mmcv-full==1.3.5 -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html
# Clone MMEditing
!rm -rf mmediting
!git clone https://github.com/open-mmlab/mmediting.git
%cd mmediting
# Install MMEditing
!pip install -r requirements.txt
!pip install -v -e .
###Output
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.7.0+cu110
[?25l Downloading https://download.pytorch.org/whl/cu110/torch-1.7.0%2Bcu110-cp37-cp37m-linux_x86_64.whl (1137.1MB)
[K |███████████████████████▌ | 834.1MB 1.3MB/s eta 0:03:50tcmalloc: large alloc 1147494400 bytes == 0x56458d07a000 @ 0x7fce190c6615 0x5645535bfcdc 0x56455369f52a 0x5645535c2afd 0x5645536b3fed 0x564553636988 0x5645536314ae 0x5645535c43ea 0x5645536367f0 0x5645536314ae 0x5645535c43ea 0x56455363332a 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x5645537373e1 0x5645536976a9 0x564553602cc4 0x5645535c3559 0x5645536374f8 0x5645535c430a 0x5645536323b5 0x5645536317ad 0x5645535c43ea 0x5645536323b5 0x5645535c430a 0x5645536323b5
[K |█████████████████████████████▊ | 1055.7MB 1.2MB/s eta 0:01:07tcmalloc: large alloc 1434370048 bytes == 0x5645d16d0000 @ 0x7fce190c6615 0x5645535bfcdc 0x56455369f52a 0x5645535c2afd 0x5645536b3fed 0x564553636988 0x5645536314ae 0x5645535c43ea 0x5645536367f0 0x5645536314ae 0x5645535c43ea 0x56455363332a 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x5645537373e1 0x5645536976a9 0x564553602cc4 0x5645535c3559 0x5645536374f8 0x5645535c430a 0x5645536323b5 0x5645536317ad 0x5645535c43ea 0x5645536323b5 0x5645535c430a 0x5645536323b5
[K |████████████████████████████████| 1137.1MB 1.1MB/s eta 0:00:01tcmalloc: large alloc 1421369344 bytes == 0x564626ebc000 @ 0x7fce190c6615 0x5645535bfcdc 0x56455369f52a 0x5645535c2afd 0x5645536b3fed 0x564553636988 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645535c430a 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363332a 0x5645536314ae 0x5645535c43ea 0x56455363332a 0x5645536314ae 0x5645535c4a81
[K |████████████████████████████████| 1137.1MB 16kB/s
[?25hCollecting torchvision==0.8.0
[?25l Downloading https://files.pythonhosted.org/packages/1d/3f/4f45249458a0dee85bff7acf4a2ac6177708253f1f318fcf6ee230fb864f/torchvision-0.8.0-cp37-cp37m-manylinux1_x86_64.whl (11.8MB)
[K |████████████████████████████████| 11.8MB 254kB/s
[?25hRequirement already satisfied, skipping upgrade: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0+cu110) (3.7.4.3)
Collecting dataclasses
Downloading https://files.pythonhosted.org/packages/26/2f/1095cdc2868052dd1e64520f7c0d5c8c550ad297e944e641dbf1ffbb9a5d/dataclasses-0.6-py3-none-any.whl
Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0+cu110) (1.19.5)
Requirement already satisfied, skipping upgrade: future in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0+cu110) (0.16.0)
Requirement already satisfied, skipping upgrade: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.8.0) (7.1.2)
[31mERROR: torchtext 0.10.0 has requirement torch==1.9.0, but you'll have torch 1.7.0+cu110 which is incompatible.[0m
Installing collected packages: dataclasses, torch, torchvision
Found existing installation: torch 1.9.0+cu102
Uninstalling torch-1.9.0+cu102:
Successfully uninstalled torch-1.9.0+cu102
Found existing installation: torchvision 0.10.0+cu102
Uninstalling torchvision-0.10.0+cu102:
Successfully uninstalled torchvision-0.10.0+cu102
Successfully installed dataclasses-0.6 torch-1.7.0+cu110 torchvision-0.8.0
Looking in links: https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html
Collecting mmcv-full==1.3.5
[?25l Downloading https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/mmcv_full-1.3.5-cp37-cp37m-manylinux1_x86_64.whl (31.1MB)
[K |████████████████████████████████| 31.1MB 107kB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmcv-full==1.3.5) (1.19.5)
Collecting addict
Downloading https://files.pythonhosted.org/packages/6a/00/b08f23b7d7e1e14ce01419a467b583edbb93c6cdb8654e54a9cc579cd61f/addict-2.4.0-py3-none-any.whl
Collecting yapf
[?25l Downloading https://files.pythonhosted.org/packages/5f/0d/8814e79eb865eab42d95023b58b650d01dec6f8ea87fc9260978b1bf2167/yapf-0.31.0-py2.py3-none-any.whl (185kB)
[K |████████████████████████████████| 194kB 33.0MB/s
[?25hRequirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from mmcv-full==1.3.5) (7.1.2)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from mmcv-full==1.3.5) (3.13)
Requirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.7/dist-packages (from mmcv-full==1.3.5) (4.1.2.30)
Installing collected packages: addict, yapf, mmcv-full
Successfully installed addict-2.4.0 mmcv-full-1.3.5 yapf-0.31.0
Cloning into 'mmediting'...
remote: Enumerating objects: 7162, done.[K
remote: Counting objects: 100% (1367/1367), done.[K
remote: Compressing objects: 100% (793/793), done.[K
remote: Total 7162 (delta 827), reused 928 (delta 554), pack-reused 5795[K
Receiving objects: 100% (7162/7162), 5.02 MiB | 32.14 MiB/s, done.
Resolving deltas: 100% (4826/4826), done.
/content/mmediting
Requirement already satisfied: lmdb in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 1)) (0.99)
Requirement already satisfied: mmcv-full>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 2)) (1.3.5)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 3)) (0.16.2)
Requirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 4)) (2.5.0)
Requirement already satisfied: yapf in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 5)) (0.31.0)
Collecting codecov
Downloading https://files.pythonhosted.org/packages/93/9f/bbea5b6231308458963cb5c067bc5643da9949689702fa5a382714b59699/codecov-2.1.11-py2.py3-none-any.whl
Collecting flake8
[?25l Downloading https://files.pythonhosted.org/packages/fc/80/35a0716e5d5101e643404dabd20f07f5528a21f3ef4032d31a49c913237b/flake8-3.9.2-py2.py3-none-any.whl (73kB)
[K |████████████████████████████████| 81kB 9.7MB/s
[?25hCollecting interrogate
Downloading https://files.pythonhosted.org/packages/cd/6d/ce3ac440b13c1b36b323a0eab191499a902adade3cc11b18078c07af3e6e/interrogate-1.4.0-py3-none-any.whl
Collecting isort==4.3.21
[?25l Downloading https://files.pythonhosted.org/packages/e5/b0/c121fd1fa3419ea9bfd55c7f9c4fedfec5143208d8c7ad3ce3db6c623c21/isort-4.3.21-py2.py3-none-any.whl (42kB)
[K |████████████████████████████████| 51kB 7.5MB/s
[?25hCollecting onnxruntime
[?25l Downloading https://files.pythonhosted.org/packages/f9/76/3d0f8bb2776961c7335693df06eccf8d099e48fa6fb552c7546867192603/onnxruntime-1.8.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.5MB)
[K |████████████████████████████████| 4.5MB 37.4MB/s
[?25hRequirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 6)) (3.6.4)
Collecting pytest-runner
Downloading https://files.pythonhosted.org/packages/f4/f5/6605d73bf3f4c198915872111b10c4b3c2dccd8485f47b7290ceef037190/pytest_runner-5.3.1-py3-none-any.whl
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (1.19.5)
Requirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (7.1.2)
Requirement already satisfied: addict in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (2.4.0)
Requirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (4.1.2.30)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (3.13)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (2.5.1)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (2.4.1)
Requirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (1.4.1)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (1.1.1)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (3.2.2)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (0.36.2)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (1.34.1)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (0.6.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (1.8.0)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (2.23.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (0.4.4)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (57.0.0)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (0.12.0)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (3.12.4)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (3.3.4)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (1.31.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (1.0.1)
Requirement already satisfied: coverage in /usr/local/lib/python3.7/dist-packages (from codecov->-r requirements/tests.txt (line 1)) (3.7.1)
Collecting pyflakes<2.4.0,>=2.3.0
[?25l Downloading https://files.pythonhosted.org/packages/6c/11/2a745612f1d3cbbd9c69ba14b1b43a35a2f5c3c81cd0124508c52c64307f/pyflakes-2.3.1-py2.py3-none-any.whl (68kB)
[K |████████████████████████████████| 71kB 9.8MB/s
[?25hCollecting pycodestyle<2.8.0,>=2.7.0
[?25l Downloading https://files.pythonhosted.org/packages/de/cc/227251b1471f129bc35e966bb0fceb005969023926d744139642d847b7ae/pycodestyle-2.7.0-py2.py3-none-any.whl (41kB)
[K |████████████████████████████████| 51kB 8.7MB/s
[?25hCollecting mccabe<0.7.0,>=0.6.0
Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from flake8->-r requirements/tests.txt (line 2)) (4.5.0)
Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (0.8.9)
Collecting colorama
Downloading https://files.pythonhosted.org/packages/44/98/5b86278fbbf250d239ae0ecb724f8572af1c91f4a11edf4d36a206189440/colorama-0.4.4-py2.py3-none-any.whl
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (7.1.2)
Requirement already satisfied: toml in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (0.10.2)
Requirement already satisfied: py in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (1.10.0)
Requirement already satisfied: attrs in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (21.2.0)
Requirement already satisfied: flatbuffers in /usr/local/lib/python3.7/dist-packages (from onnxruntime->-r requirements/tests.txt (line 5)) (1.12)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 6)) (8.8.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 6)) (0.7.1)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 6)) (1.15.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 6)) (1.4.0)
Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.0->scikit-image->-r requirements/runtime.txt (line 3)) (4.4.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements/runtime.txt (line 3)) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements/runtime.txt (line 3)) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements/runtime.txt (line 3)) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements/runtime.txt (line 3)) (1.3.1)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements/runtime.txt (line 4)) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements/runtime.txt (line 4)) (2021.5.30)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements/runtime.txt (line 4)) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements/runtime.txt (line 4)) (2.10)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->-r requirements/runtime.txt (line 4)) (1.3.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements/runtime.txt (line 4)) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements/runtime.txt (line 4)) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements/runtime.txt (line 4)) (4.2.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->flake8->-r requirements/tests.txt (line 2)) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->flake8->-r requirements/tests.txt (line 2)) (3.7.4.3)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->-r requirements/runtime.txt (line 4)) (3.1.1)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.7/dist-packages (from rsa<5,>=3.1.4; python_version >= "3.6"->google-auth<2,>=1.6.3->tensorboard->-r requirements/runtime.txt (line 4)) (0.4.8)
Installing collected packages: codecov, pyflakes, pycodestyle, mccabe, flake8, colorama, interrogate, isort, onnxruntime, pytest-runner
Successfully installed codecov-2.1.11 colorama-0.4.4 flake8-3.9.2 interrogate-1.4.0 isort-4.3.21 mccabe-0.6.1 onnxruntime-1.8.0 pycodestyle-2.7.0 pyflakes-2.3.1 pytest-runner-5.3.1
Created temporary directory: /tmp/pip-ephem-wheel-cache-hu6xvjxh
Created temporary directory: /tmp/pip-req-tracker-zk5q0q3z
Created requirements tracker '/tmp/pip-req-tracker-zk5q0q3z'
Created temporary directory: /tmp/pip-install-vr_vpseo
Obtaining file:///content/mmediting
Added file:///content/mmediting to build tracker '/tmp/pip-req-tracker-zk5q0q3z'
Running setup.py (path:/content/mmediting/setup.py) egg_info for package from file:///content/mmediting
Running command python setup.py egg_info
running egg_info
creating mmedit.egg-info
writing mmedit.egg-info/PKG-INFO
writing dependency_links to mmedit.egg-info/dependency_links.txt
writing requirements to mmedit.egg-info/requires.txt
writing top-level names to mmedit.egg-info/top_level.txt
writing manifest file 'mmedit.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'mmedit/VERSION'
warning: no files found matching 'mmedit/model_zoo.yml'
warning: no files found matching '*.py' under directory 'mmedit/configs'
warning: no files found matching '*.yml' under directory 'mmedit/configs'
warning: no files found matching '*.sh' under directory 'mmedit/tools'
warning: no files found matching '*.py' under directory 'mmedit/tools'
adding license file 'LICENSE'
writing manifest file 'mmedit.egg-info/SOURCES.txt'
Source in /content/mmediting has version 0.8.0, which satisfies requirement mmedit==0.8.0 from file:///content/mmediting
Removed mmedit==0.8.0 from file:///content/mmediting from build tracker '/tmp/pip-req-tracker-zk5q0q3z'
Requirement already satisfied: lmdb in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (0.99)
Requirement already satisfied: mmcv-full>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (1.3.5)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (0.16.2)
Requirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (2.5.0)
Requirement already satisfied: yapf in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (0.31.0)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (3.13)
Requirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (4.1.2.30)
Requirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (7.1.2)
Requirement already satisfied: addict in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (2.4.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (1.19.5)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (2.5.1)
Requirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (1.4.1)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (2.4.1)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (3.2.2)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (1.1.1)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (1.34.1)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (3.12.4)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (1.0.1)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (0.36.2)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (1.31.0)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (0.12.0)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (2.23.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (1.8.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (3.3.4)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (0.4.4)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (0.6.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (57.0.0)
Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.0->scikit-image->mmedit==0.8.0) (4.4.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->mmedit==0.8.0) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->mmedit==0.8.0) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->mmedit==0.8.0) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->mmedit==0.8.0) (1.3.1)
Requirement already satisfied: six>=1.5.2 in /usr/local/lib/python3.7/dist-packages (from grpcio>=1.24.3->tensorboard->mmedit==0.8.0) (1.15.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->mmedit==0.8.0) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->mmedit==0.8.0) (4.2.2)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->mmedit==0.8.0) (4.7.2)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->mmedit==0.8.0) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->mmedit==0.8.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->mmedit==0.8.0) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->mmedit==0.8.0) (2021.5.30)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard->mmedit==0.8.0) (4.5.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->mmedit==0.8.0) (1.3.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard->mmedit==0.8.0) (0.4.8)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard->mmedit==0.8.0) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard->mmedit==0.8.0) (3.7.4.3)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->mmedit==0.8.0) (3.1.1)
Installing collected packages: mmedit
Running setup.py develop for mmedit
Running command /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/content/mmediting/setup.py'"'"'; __file__='"'"'/content/mmediting/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
running develop
running egg_info
writing mmedit.egg-info/PKG-INFO
writing dependency_links to mmedit.egg-info/dependency_links.txt
writing requirements to mmedit.egg-info/requires.txt
writing top-level names to mmedit.egg-info/top_level.txt
reading manifest template 'MANIFEST.in'
warning: no files found matching 'mmedit/VERSION'
warning: no files found matching 'mmedit/model_zoo.yml'
warning: no files found matching '*.py' under directory 'mmedit/configs'
warning: no files found matching '*.yml' under directory 'mmedit/configs'
warning: no files found matching '*.sh' under directory 'mmedit/tools'
warning: no files found matching '*.py' under directory 'mmedit/tools'
adding license file 'LICENSE'
writing manifest file 'mmedit.egg-info/SOURCES.txt'
running build_ext
Creating /usr/local/lib/python3.7/dist-packages/mmedit.egg-link (link to .)
Adding mmedit 0.8.0 to easy-install.pth file
Installed /content/mmediting
Successfully installed mmedit
Cleaning up...
Removed build tracker '/tmp/pip-req-tracker-zk5q0q3z'
###Markdown
Download the materials needed for this demo In this demo, we will need some data and config files. We will download them and put them into `./demo_files/`
###Code
!wget https://download.openmmlab.com/mmediting/demo_files.zip  # download the files
!unzip demo_files  # unzip
###Output
--2021-07-01 11:59:48-- https://download.openmmlab.com/mmediting/demo_files.zip
Resolving download.openmmlab.com (download.openmmlab.com)... 47.252.96.35
Connecting to download.openmmlab.com (download.openmmlab.com)|47.252.96.35|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19215781 (18M) [application/zip]
Saving to: ‘demo_files.zip’
demo_files.zip 100%[===================>] 18.33M 6.00MB/s in 3.1s
2021-07-01 11:59:52 (6.00 MB/s) - ‘demo_files.zip’ saved [19215781/19215781]
Archive: demo_files.zip
creating: demo_files/
inflating: demo_files/demo_config_EDVR.py
inflating: demo_files/demo_config_BasicVSR.py
creating: demo_files/lq_sequences/
creating: demo_files/lq_sequences/calendar/
inflating: demo_files/lq_sequences/calendar/00000006.png
inflating: demo_files/lq_sequences/calendar/00000007.png
inflating: demo_files/lq_sequences/calendar/00000010.png
inflating: demo_files/lq_sequences/calendar/00000004.png
inflating: demo_files/lq_sequences/calendar/00000003.png
inflating: demo_files/lq_sequences/calendar/00000001.png
inflating: demo_files/lq_sequences/calendar/00000000.png
inflating: demo_files/lq_sequences/calendar/00000009.png
inflating: demo_files/lq_sequences/calendar/00000008.png
inflating: demo_files/lq_sequences/calendar/00000002.png
inflating: demo_files/lq_sequences/calendar/00000005.png
creating: demo_files/lq_sequences/city/
inflating: demo_files/lq_sequences/city/00000006.png
inflating: demo_files/lq_sequences/city/00000007.png
inflating: demo_files/lq_sequences/city/00000010.png
inflating: demo_files/lq_sequences/city/00000004.png
inflating: demo_files/lq_sequences/city/00000003.png
inflating: demo_files/lq_sequences/city/00000001.png
inflating: demo_files/lq_sequences/city/00000000.png
inflating: demo_files/lq_sequences/city/00000009.png
inflating: demo_files/lq_sequences/city/00000008.png
inflating: demo_files/lq_sequences/city/00000002.png
inflating: demo_files/lq_sequences/city/00000005.png
creating: demo_files/lq_sequences/.ipynb_checkpoints/
creating: demo_files/gt_images/
inflating: demo_files/gt_images/bird.png
inflating: demo_files/gt_images/woman.png
inflating: demo_files/gt_images/head.png
inflating: demo_files/gt_images/baby.png
inflating: demo_files/gt_images/butterfly.png
inflating: demo_files/demo_config_SRCNN.py
creating: demo_files/lq_images/
extracting: demo_files/lq_images/bird.png
extracting: demo_files/lq_images/woman.png
extracting: demo_files/lq_images/head.png
extracting: demo_files/lq_images/baby.png
extracting: demo_files/lq_images/butterfly.png
creating: demo_files/gt_sequences/
creating: demo_files/gt_sequences/calendar/
inflating: demo_files/gt_sequences/calendar/00000006.png
inflating: demo_files/gt_sequences/calendar/00000007.png
inflating: demo_files/gt_sequences/calendar/00000010.png
inflating: demo_files/gt_sequences/calendar/00000004.png
inflating: demo_files/gt_sequences/calendar/00000003.png
inflating: demo_files/gt_sequences/calendar/00000001.png
inflating: demo_files/gt_sequences/calendar/00000000.png
inflating: demo_files/gt_sequences/calendar/00000009.png
inflating: demo_files/gt_sequences/calendar/00000008.png
inflating: demo_files/gt_sequences/calendar/00000002.png
inflating: demo_files/gt_sequences/calendar/00000005.png
creating: demo_files/gt_sequences/city/
inflating: demo_files/gt_sequences/city/00000006.png
inflating: demo_files/gt_sequences/city/00000007.png
inflating: demo_files/gt_sequences/city/00000010.png
inflating: demo_files/gt_sequences/city/00000004.png
inflating: demo_files/gt_sequences/city/00000003.png
inflating: demo_files/gt_sequences/city/00000001.png
inflating: demo_files/gt_sequences/city/00000000.png
inflating: demo_files/gt_sequences/city/00000009.png
inflating: demo_files/gt_sequences/city/00000008.png
inflating: demo_files/gt_sequences/city/00000002.png
inflating: demo_files/gt_sequences/city/00000005.png
creating: demo_files/gt_sequences/.ipynb_checkpoints/
creating: demo_files/.ipynb_checkpoints/
###Markdown
Inference with a pre-trained image restorer You can easily run inference on a single image with a pre-trained restorer using `restoration_demo.py`. What you need are:1. `CONFIG_FILE`: the config file corresponding to the restorer you want to use. It specifies the model you want to use.2. `CHECKPOINT_FILE`: the path to the pre-trained model weights file.3. `IMAGE_FILE`: the path to the input image.4. `SAVE_FILE`: where you want to store the output image.5. `imshow`: whether to show the image. (optional)6. `GPU_ID`: which GPU you want to use. (optional)Once you have all these details, you can directly use the following command:```python demo/restoration_demo.py ${CONFIG_FILE} ${CHECKPOINT_FILE} ${IMAGE_FILE} ${SAVE_FILE} [--imshow] [--device ${GPU_ID}]```**Note:** 1. The config files are located in `./configs`.2. We support loading weight files from a URL. You can go to the corresponding page (e.g. [here](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/esrgan)) to get the URL of a pre-trained model.---We will now use `SRCNN` and `ESRGAN` as examples.
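If you prefer calling the Python API directly instead of the CLI script, a sketch along these lines should work, assuming `init_model` and `restoration_inference` are exported by `mmedit.apis` and `tensor2img` by `mmedit.core` in this version; the paths mirror the SRCNN CLI example below and the output filename is only an illustration.

```python
# Sketch: single-image restoration through the Python API instead of the CLI.
# Assumes mmedit.apis exposes init_model / restoration_inference in this version.
import mmcv
from mmedit.apis import init_model, restoration_inference
from mmedit.core import tensor2img

config = './configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py'
checkpoint = ('https://download.openmmlab.com/mmediting/restorers/srcnn/'
              'srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth')

model = init_model(config, checkpoint, device='cuda:0')
output = restoration_inference(model, './demo_files/lq_images/bird.png')
# output is a tensor in [0, 1]; convert it to an image array and save it.
mmcv.imwrite(tensor2img(output), './outputs/bird_SRCNN_api.png')
```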
###Code
# SRCNN
!python demo/restoration_demo.py ./configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth ./demo_files/lq_images/bird.png ./outputs/bird_SRCNN.png
# ESRGAN
!python demo/restoration_demo.py ./configs/restorers/esrgan/esrgan_x4c64b23g32_g1_400k_div2k.py https://download.openmmlab.com/mmediting/restorers/esrgan/esrgan_x4c64b23g32_1x16_400k_div2k_20200508-f8ccaf3b.pth ./demo_files/lq_images/bird.png ./outputs/bird_ESRGAN.png
# Check that the images have been saved
!ls ./outputs
###Output
Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth" to /root/.cache/torch/hub/checkpoints/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth
100% 83.9k/83.9k [00:00<00:00, 1.59MB/s]
2021-07-01 12:00:10,779 - mmedit - INFO - Use load_from_torchvision loader
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /root/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
100% 548M/548M [00:07<00:00, 76.0MB/s]
Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/esrgan/esrgan_x4c64b23g32_1x16_400k_div2k_20200508-f8ccaf3b.pth" to /root/.cache/torch/hub/checkpoints/esrgan_x4c64b23g32_1x16_400k_div2k_20200508-f8ccaf3b.pth
100% 196M/196M [00:26<00:00, 7.61MB/s]
bird_ESRGAN.png bird_SRCNN.png
###Markdown
Inference with a pre-trained video restorer MMEditing also supports video super-resolution methods, and the procedure is similar. You can use `restoration_video_demo.py` with the following arguments:1. `CONFIG_FILE`: the config file corresponding to the restorer you want to use. It specifies the model you want to use.2. `CHECKPOINT_FILE`: the path to the pre-trained model weights file.3. `INPUT_DIR`: the directory containing the video frames.4. `OUTPUT_DIR`: where to store the output frames.5. `WINDOW_SIZE`: the window size when using a sliding-window method (optional).6. `GPU_ID`: which GPU you want to use (optional).```python demo/restoration_video_demo.py ${CONFIG_FILE} ${CHECKPOINT_FILE} ${INPUT_DIR} ${OUTPUT_DIR} [--window_size=$WINDOW_SIZE] [--device ${GPU_ID}]```**Note:** There are two different frameworks for video super-resolution: the ***sliding-window*** and the ***recurrent*** framework. When using a sliding-window method such as EDVR, you need to specify `window_size`. This value depends on the model you use.---We will now use `EDVR` and `BasicVSR` as examples.
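To process every low-quality sequence under `demo_files/lq_sequences` in one go, a small wrapper around the same CLI can be used. This is only a convenience sketch that shells out to `restoration_video_demo.py` once per sequence folder; the BasicVSR config and checkpoint are the same ones used in the cell below.

```python
# Sketch: run the video restoration demo once per sequence folder.
import os
import subprocess

config = './configs/restorers/basicvsr/basicvsr_reds4.py'
checkpoint = ('https://download.openmmlab.com/mmediting/restorers/basicvsr/'
              'basicvsr_reds4_20120409-0e599677.pth')
lq_root = 'demo_files/lq_sequences'

for seq in sorted(os.listdir(lq_root)):
    seq_dir = os.path.join(lq_root, seq)
    if seq.startswith('.') or not os.path.isdir(seq_dir):
        continue  # skip .ipynb_checkpoints and stray files
    subprocess.run(['python', 'demo/restoration_video_demo.py', config, checkpoint,
                    seq_dir, f'./outputs/{seq}_BasicVSR'], check=True)
```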
###Code
# EDVR (sliding-window framework)
!python demo/restoration_video_demo.py ./configs/restorers/edvr/edvrm_wotsa_x4_g8_600k_reds.py https://download.openmmlab.com/mmediting/restorers/edvr/edvrm_wotsa_x4_8x4_600k_reds_20200522-0570e567.pth demo_files/lq_sequences/calendar/ ./outputs/calendar_EDVR --window_size=5
# BasicVSR (recurrent framework)
!python demo/restoration_video_demo.py ./configs/restorers/basicvsr/basicvsr_reds4.py https://download.openmmlab.com/mmediting/restorers/basicvsr/basicvsr_reds4_20120409-0e599677.pth demo_files/lq_sequences/calendar/ ./outputs/calendar_BasicVSR
# Check that the video frames have been saved
!ls ./outputs/calendar_BasicVSR
###Output
Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/edvr/edvrm_wotsa_x4_8x4_600k_reds_20200522-0570e567.pth" to /root/.cache/torch/hub/checkpoints/edvrm_wotsa_x4_8x4_600k_reds_20200522-0570e567.pth
100% 11.5M/11.5M [00:01<00:00, 8.55MB/s]
2021-07-01 12:01:09,689 - mmedit - INFO - Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/basicvsr/spynet_20210409-c6c1bd09.pth" to /root/.cache/torch/hub/checkpoints/spynet_20210409-c6c1bd09.pth
100% 5.50M/5.50M [00:00<00:00, 8.88MB/s]
Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/basicvsr/basicvsr_reds4_20120409-0e599677.pth" to /root/.cache/torch/hub/checkpoints/basicvsr_reds4_20120409-0e599677.pth
100% 24.1M/24.1M [00:02<00:00, 8.97MB/s]
The model and loaded state dict do not match exactly
missing keys in source state_dict: step_counter
00000000.png 00000003.png 00000006.png 00000009.png
00000001.png 00000004.png 00000007.png 00000010.png
00000002.png 00000005.png 00000008.png
###Markdown
Testing on predefined datasets using config files The demos above provide an easy way to run inference on a single image or video sequence. If you want to run inference on a set of images or sequences, you can use the config files located in `./configs`. The existing config files allow you to run inference on common datasets, such as `Set5` for image super-resolution and `REDS4` for video super-resolution. You can use the following commands:1. `CONFIG_FILE`: the config file corresponding to the restorer and dataset you want to use.2. `CHECKPOINT_FILE`: the path to the pre-trained model weights file.3. `GPU_NUM`: the number of GPUs used for testing.4. `RESULT_FILE`: the path of the output result pickle file. (optional)5. `IMAGE_SAVE_PATH`: where to store the output images. (optional)``` Single-GPU testing: python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--save-path ${IMAGE_SAVE_PATH}] Multi-GPU testing: ./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--save-path ${IMAGE_SAVE_PATH}]```What you need to do is modify `lq_folder` and `gt_folder` in the config file:```test=dict( type=val_dataset_type, lq_folder='data/val_set5/Set5_bicLRx4', gt_folder='data/val_set5/Set5', pipeline=test_pipeline, scale=scale, filename_tmpl='{}'))```**Note**: Some dataset types (e.g. `SRREDSDataset`) require an annotation file specifying the details of the dataset. For more details, please refer to the corresponding files in `./mmedit/dataset/`.---The following is the command for SRCNN. For other models, you can simply change the paths to the config file and the pre-trained model.
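Instead of editing the config file by hand, the test paths can also be overridden programmatically and dumped to a new file. This is a sketch using `mmcv.Config`; the demo folders and the output filename are just placeholders for your own data.

```python
# Sketch: point an existing config at your own test folders, then dump it to a new file.
import mmcv

cfg = mmcv.Config.fromfile('./configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py')
cfg.data.test.lq_folder = './demo_files/lq_images'   # your low-quality images
cfg.data.test.gt_folder = './demo_files/gt_images'   # your ground-truth images
cfg.dump('./demo_files/my_test_config.py')           # pass this file to tools/test.py
```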
###Code
# Single GPU
!python tools/test.py ./configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth --save-path ./outputs/
# Multiple GPUs
!./tools/dist_test.sh ./configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth 1 --save-path ./outputs/
###Output
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
return obj_cls(**args)
File "/content/mmediting/mmedit/datasets/sr_folder_dataset.py", line 62, in __init__
self.data_infos = self.load_annotations()
File "/content/mmediting/mmedit/datasets/sr_folder_dataset.py", line 73, in load_annotations
lq_paths = self.scan_folder(self.lq_folder)
File "/content/mmediting/mmedit/datasets/base_sr_dataset.py", line 39, in scan_folder
images = list(scandir(path, suffix=IMG_EXTENSIONS, recursive=True))
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/path.py", line 63, in _scandir
for entry in os.scandir(dir_path):
FileNotFoundError: [Errno 2] No such file or directory: 'data/val_set5/Set5_bicLRx4'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tools/test.py", line 136, in <module>
main()
File "tools/test.py", line 73, in main
dataset = build_dataset(cfg.data.test)
File "/content/mmediting/mmedit/datasets/builder.py", line 80, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 54, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: SRFolderDataset: [Errno 2] No such file or directory: 'data/val_set5/Set5_bicLRx4'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
return obj_cls(**args)
File "/content/mmediting/mmedit/datasets/sr_folder_dataset.py", line 62, in __init__
self.data_infos = self.load_annotations()
File "/content/mmediting/mmedit/datasets/sr_folder_dataset.py", line 73, in load_annotations
lq_paths = self.scan_folder(self.lq_folder)
File "/content/mmediting/mmedit/datasets/base_sr_dataset.py", line 39, in scan_folder
images = list(scandir(path, suffix=IMG_EXTENSIONS, recursive=True))
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/path.py", line 63, in _scandir
for entry in os.scandir(dir_path):
FileNotFoundError: [Errno 2] No such file or directory: 'data/val_set5/Set5_bicLRx4'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./tools/test.py", line 136, in <module>
main()
File "./tools/test.py", line 73, in main
dataset = build_dataset(cfg.data.test)
File "/content/mmediting/mmedit/datasets/builder.py", line 80, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 54, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: SRFolderDataset: [Errno 2] No such file or directory: 'data/val_set5/Set5_bicLRx4'
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-u', './tools/test.py', '--local_rank=0', './configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py', 'https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth', '--launcher', 'pytorch', '--save-path', './outputs/']' returned non-zero exit status 1.
###Markdown
Testing on a custom dataset When you want to test on a custom dataset, besides the dataset paths you also need to modify `test_dataset_type`. - For image super-resolution, you need to use `SRFolderDataset`.- For sliding-window video super-resolution methods (e.g. EDVR, TDAN), you need to use `SRFolderVideoDataset`.- For recurrent video super-resolution methods (e.g. BasicVSR, IconVSR), you need to use `SRFolderMultipleGTDataset`.These dataset types assume that all images/sequences in the specified directory are used for testing. The folder structure should be```| lq_root | sequence_1 | 000.png | 001.png | ... | sequence_2 | 000.png | ... | ...| gt_root | sequence_1 | 000.png | 001.png |... | sequence_2 | 000.png | ... | ...```We will use **SRCNN**, **EDVR** and **BasicVSR** as examples. Please note the settings of `test_dataset_type` and `data['test']`. **SRCNN**
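Before running the three examples below, it can help to confirm that the low-quality and ground-truth trees really mirror each other; the following sketch compares the two demo sequence folders used in this tutorial.

```python
# Sketch: verify that lq and gt folders contain the same sequences and frame names.
import os

def list_tree(root):
    """Return {sequence_name: sorted frame filenames} for a folder of sequences."""
    tree = {}
    for seq in sorted(os.listdir(root)):
        seq_dir = os.path.join(root, seq)
        if seq.startswith('.') or not os.path.isdir(seq_dir):
            continue  # skip .ipynb_checkpoints and stray files
        tree[seq] = sorted(f for f in os.listdir(seq_dir) if f.endswith('.png'))
    return tree

lq = list_tree('./demo_files/lq_sequences')
gt = list_tree('./demo_files/gt_sequences')
assert lq.keys() == gt.keys(), 'sequence folders differ'
for seq in lq:
    assert lq[seq] == gt[seq], f'frame names differ in {seq}'
print('lq and gt folder structures match:', list(lq.keys()))
```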
###Code
# Single GPU (Colab has only one GPU)
!python tools/test.py ./demo_files/demo_config_SRCNN.py https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth --save-path ./outputs/testset_SRCNN
# Check the output folder
!ls ./outputs/testset_SRCNN
###Output
Use load_from_http loader
[>>] 5/5, 8.6 task/s, elapsed: 1s, ETA: 0s
Eval-PSNR: 28.433974369836108
Eval-SSIM: 0.8099053586583066
baby.png bird.png butterfly.png head.png woman.png
###Markdown
**EDVR**
###Code
# Single GPU (Colab has only one GPU)
!python tools/test.py ./demo_files/demo_config_EDVR.py https://download.openmmlab.com/mmediting/restorers/edvr/edvrm_wotsa_x4_8x4_600k_reds_20200522-0570e567.pth --save-path ./outputs/testset_EDVR
# Check the output folder
!ls ./outputs/testset_EDVR
!ls ./outputs/testset_EDVR/city
###Output
Use load_from_http loader
[>>] 22/22, 2.0 task/s, elapsed: 11s, ETA: 0s
Eval-PSNR: 23.89569862011228
Eval-SSIM: 0.7667098470108678
calendar city
00000000.png 00000003.png 00000006.png 00000009.png
00000001.png 00000004.png 00000007.png 00000010.png
00000002.png 00000005.png 00000008.png
###Markdown
**BasicVSR**
###Code
# Single GPU (Colab has only one GPU)
!python tools/test.py ./demo_files/demo_config_BasicVSR.py https://download.openmmlab.com/mmediting/restorers/basicvsr/basicvsr_reds4_20120409-0e599677.pth --save-path ./outputs/testset_BasicVSR
# Check the output folder
!ls ./outputs/testset_BasicVSR
!ls ./outputs/testset_BasicVSR/calendar
###Output
2021-07-01 12:02:07,780 - mmedit - INFO - Use load_from_http loader
Use load_from_http loader
The model and loaded state dict do not match exactly
missing keys in source state_dict: step_counter
[>>] 2/2, 0.2 task/s, elapsed: 11s, ETA: 0s
Eval-PSNR: 24.195768601433734
Eval-SSIM: 0.7828541339512978
calendar city
00000000.png 00000003.png 00000006.png 00000009.png
00000001.png 00000004.png 00000007.png 00000010.png
00000002.png 00000005.png 00000008.png
###Markdown
Training a restorer on predefined datasets MMEditing uses distributed training. The following command can be used for training. If you want to train on the predefined datasets specified in our config files, simply run:```./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]```For more details about the optional arguments, please refer to `tools/train.py`.---Here is an example using EDVR.
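On Colab only a single GPU is exposed, so `GPU_NUM` is 1 in the cells below; a short check (sketch) confirms what is actually visible before launching a run.

```python
# Sketch: check how many GPUs are visible before choosing GPU_NUM.
import torch

print('visible GPUs:', torch.cuda.device_count())
if torch.cuda.is_available():
    print('device 0:', torch.cuda.get_device_name(0))
```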
###Code
!./tools/dist_train.sh ./configs/restorers/edvr/edvrm_wotsa_x4_g8_600k_reds.py 1
###Output
2021-07-01 12:02:31,961 - mmedit - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.7.10 (default, May 3 2021, 02:48:31) [GCC 7.5.0]
CUDA available: True
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.0_bu.TC445_37.28845127_0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.7.0+cu110
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80
- CuDNN 8.0.4
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.8.0
OpenCV: 4.1.2
MMCV: 1.3.5
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.0
MMEditing: 0.8.0+7f4fa79
------------------------------------------------------------
2021-07-01 12:02:31,961 - mmedit - INFO - Distributed training: True
2021-07-01 12:02:31,961 - mmedit - INFO - mmedit Version: 0.8.0
2021-07-01 12:02:31,961 - mmedit - INFO - Config:
/content/mmediting/configs/restorers/edvr/edvrm_wotsa_x4_g8_600k_reds.py
exp_name = 'edvrm_wotsa_x4_g8_600k_reds'
# model settings
model = dict(
type='EDVR',
generator=dict(
type='EDVRNet',
in_channels=3,
out_channels=3,
mid_channels=64,
num_frames=5,
deform_groups=8,
num_blocks_extraction=5,
num_blocks_reconstruction=10,
center_frame_idx=2,
with_tsa=False),
pixel_loss=dict(type='CharbonnierLoss', loss_weight=1.0, reduction='sum'))
# model training and testing settings
train_cfg = None
test_cfg = dict(metrics=['PSNR'], crop_border=0)
# dataset settings
train_dataset_type = 'SRREDSDataset'
val_dataset_type = 'SRREDSDataset'
train_pipeline = [
dict(type='GenerateFrameIndices', interval_list=[1], frames_per_clip=99),
dict(type='TemporalReverse', keys='lq_path', reverse_ratio=0),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='lq',
flag='unchanged'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='gt',
flag='unchanged'),
dict(type='RescaleToZeroOne', keys=['lq', 'gt']),
dict(
type='Normalize',
keys=['lq', 'gt'],
mean=[0, 0, 0],
std=[1, 1, 1],
to_rgb=True),
dict(type='PairedRandomCrop', gt_patch_size=256),
dict(
type='Flip', keys=['lq', 'gt'], flip_ratio=0.5,
direction='horizontal'),
dict(type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, direction='vertical'),
dict(type='RandomTransposeHW', keys=['lq', 'gt'], transpose_ratio=0.5),
dict(type='Collect', keys=['lq', 'gt'], meta_keys=['lq_path', 'gt_path']),
dict(type='FramesToTensor', keys=['lq', 'gt'])
]
test_pipeline = [
dict(type='GenerateFrameIndiceswithPadding', padding='reflection_circle'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='lq',
flag='unchanged'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='gt',
flag='unchanged'),
dict(type='RescaleToZeroOne', keys=['lq', 'gt']),
dict(
type='Normalize',
keys=['lq', 'gt'],
mean=[0, 0, 0],
std=[1, 1, 1],
to_rgb=True),
dict(
type='Collect',
keys=['lq', 'gt'],
meta_keys=['lq_path', 'gt_path', 'key']),
dict(type='FramesToTensor', keys=['lq', 'gt'])
]
data = dict(
workers_per_gpu=4,
train_dataloader=dict(samples_per_gpu=4, drop_last=True),
val_dataloader=dict(samples_per_gpu=1),
test_dataloader=dict(samples_per_gpu=1),
train=dict(
type='RepeatDataset',
times=1000,
dataset=dict(
type=train_dataset_type,
lq_folder='data/REDS/train_sharp_bicubic/X4',
gt_folder='data/REDS/train_sharp',
ann_file='data/REDS/meta_info_REDS_GT.txt',
num_input_frames=5,
pipeline=train_pipeline,
scale=4,
val_partition='REDS4',
test_mode=False)),
val=dict(
type=val_dataset_type,
lq_folder='data/REDS/train_sharp_bicubic/X4',
gt_folder='data/REDS/train_sharp',
ann_file='data/REDS/meta_info_REDS_GT.txt',
num_input_frames=5,
pipeline=test_pipeline,
scale=4,
val_partition='REDS4',
test_mode=True),
test=dict(
type=val_dataset_type,
lq_folder='data/REDS/train_sharp_bicubic/X4',
gt_folder='data/REDS/train_sharp',
ann_file='data/REDS/meta_info_REDS_GT.txt',
num_input_frames=5,
pipeline=test_pipeline,
scale=4,
val_partition='REDS4',
test_mode=True),
)
# optimizer
optimizers = dict(generator=dict(type='Adam', lr=4e-4, betas=(0.9, 0.999)))
# learning policy
total_iters = 600000
lr_config = dict(
policy='CosineRestart',
by_epoch=False,
periods=[150000, 150000, 150000, 150000],
restart_weights=[1, 0.5, 0.5, 0.5],
min_lr=1e-7)
checkpoint_config = dict(interval=5000, save_optimizer=True, by_epoch=False)
# remove gpu_collect=True in non distributed training
evaluation = dict(interval=50000, save_image=False, gpu_collect=True)
log_config = dict(
interval=100,
hooks=[
dict(type='TextLoggerHook', by_epoch=False),
dict(type='TensorboardLoggerHook'),
# dict(type='PaviLoggerHook', init_kwargs=dict(project='mmedit-sr'))
])
visual_config = None
# runtime settings
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = f'./work_dirs/{exp_name}'
load_from = None
resume_from = None
workflow = [('train', 1)]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
return obj_cls(**args)
File "/content/mmediting/mmedit/datasets/sr_reds_dataset.py", line 54, in __init__
self.data_infos = self.load_annotations()
File "/content/mmediting/mmedit/datasets/sr_reds_dataset.py", line 63, in load_annotations
with open(self.ann_file, 'r') as fin:
FileNotFoundError: [Errno 2] No such file or directory: 'data/REDS/meta_info_REDS_GT.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./tools/train.py", line 145, in <module>
main()
File "./tools/train.py", line 111, in main
datasets = [build_dataset(cfg.data.train)]
File "/content/mmediting/mmedit/datasets/builder.py", line 76, in build_dataset
build_dataset(cfg['dataset'], default_args), cfg['times'])
File "/content/mmediting/mmedit/datasets/builder.py", line 80, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 54, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: SRREDSDataset: [Errno 2] No such file or directory: 'data/REDS/meta_info_REDS_GT.txt'
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-u', './tools/train.py', '--local_rank=0', './configs/restorers/edvr/edvrm_wotsa_x4_g8_600k_reds.py', '--launcher', 'pytorch']' returned non-zero exit status 1.
###Markdown
Training a restorer on a custom dataset Similar to the case where you test on your own dataset, you need to modify `train_dataset_type`. The dataset types you need are the same:- For image super-resolution, you need to use `SRFolderDataset`.- For sliding-window video super-resolution methods (e.g. EDVR, TDAN), you need to use `SRFolderVideoDataset`.- For recurrent video super-resolution methods (e.g. BasicVSR, IconVSR), you need to use `SRFolderMultipleGTDataset`.After modifying the dataset type and the data paths, everything is ready.
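One way to confirm that a modified config actually finds your data before launching a long run is to build the training dataset directly. This is a sketch, assuming `build_dataset` is exported from `mmedit.datasets` as used by `tools/train.py`; it uses the BasicVSR demo config downloaded earlier.

```python
# Sketch: build the training dataset from the demo config to confirm the paths resolve.
import mmcv
from mmedit.datasets import build_dataset

cfg = mmcv.Config.fromfile('./demo_files/demo_config_BasicVSR.py')
train_set = build_dataset(cfg.data.train)  # RepeatDataset wrapping SRFolderMultipleGTDataset
print('training samples (including repeats):', len(train_set))
```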
###Code
# SRCNN (image super-resolution)
!./tools/dist_train.sh ./demo_files/demo_config_SRCNN.py 1
# EDVR (video super-resolution, sliding window)
!./tools/dist_train.sh ./demo_files/demo_config_EDVR.py 1
# BasicVSR (video super-resolution, recurrent)
!./tools/dist_train.sh ./demo_files/demo_config_BasicVSR.py 1
###Output
2021-07-01 12:06:47,253 - mmedit - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.7.10 (default, May 3 2021, 02:48:31) [GCC 7.5.0]
CUDA available: True
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.0_bu.TC445_37.28845127_0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.7.0+cu110
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80
- CuDNN 8.0.4
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.8.0
OpenCV: 4.1.2
MMCV: 1.3.5
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.0
MMEditing: 0.8.0+7f4fa79
------------------------------------------------------------
2021-07-01 12:06:47,253 - mmedit - INFO - Distributed training: True
2021-07-01 12:06:47,254 - mmedit - INFO - mmedit Version: 0.8.0
2021-07-01 12:06:47,254 - mmedit - INFO - Config:
/content/mmediting/demo_files/demo_config_BasicVSR.py
exp_name = 'basicvsr_demo'
# model settings
model = dict(
type='BasicVSR',
generator=dict(
type='BasicVSRNet',
mid_channels=64,
num_blocks=30,
spynet_pretrained='https://download.openmmlab.com/mmediting/restorers/'
'basicvsr/spynet_20210409-c6c1bd09.pth'),
pixel_loss=dict(type='CharbonnierLoss', loss_weight=1.0, reduction='mean'))
# model training and testing settings
train_cfg = dict(fix_iter=5000)
test_cfg = dict(metrics=['PSNR', 'SSIM'], crop_border=0)
# dataset settings
train_dataset_type = 'SRFolderMultipleGTDataset'
val_dataset_type = 'SRFolderMultipleGTDataset'
train_pipeline = [
dict(type='GenerateSegmentIndices', interval_list=[1]),
dict(type='TemporalReverse', keys='lq_path', reverse_ratio=0),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='lq',
channel_order='rgb'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='gt',
channel_order='rgb'),
dict(type='RescaleToZeroOne', keys=['lq', 'gt']),
dict(type='PairedRandomCrop', gt_patch_size=256),
dict(
type='Flip', keys=['lq', 'gt'], flip_ratio=0.5,
direction='horizontal'),
dict(type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, direction='vertical'),
dict(type='RandomTransposeHW', keys=['lq', 'gt'], transpose_ratio=0.5),
dict(type='FramesToTensor', keys=['lq', 'gt']),
dict(type='Collect', keys=['lq', 'gt'], meta_keys=['lq_path', 'gt_path'])
]
test_pipeline = [
dict(type='GenerateSegmentIndices', interval_list=[1]),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='lq',
channel_order='rgb'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='gt',
channel_order='rgb'),
dict(type='RescaleToZeroOne', keys=['lq', 'gt']),
dict(type='FramesToTensor', keys=['lq', 'gt']),
dict(
type='Collect',
keys=['lq', 'gt'],
meta_keys=['lq_path', 'gt_path', 'key'])
]
data = dict(
workers_per_gpu=6,
train_dataloader=dict(samples_per_gpu=4, drop_last=True), # 2 gpus
val_dataloader=dict(samples_per_gpu=1),
test_dataloader=dict(samples_per_gpu=1, workers_per_gpu=1),
# train
train=dict(
type='RepeatDataset',
times=1000,
dataset=dict(
type=train_dataset_type,
lq_folder='./demo_files/lq_sequences',
gt_folder='./demo_files/gt_sequences',
num_input_frames=5,
pipeline=train_pipeline,
scale=4,
test_mode=False)),
# val
val=dict(
type=val_dataset_type,
lq_folder='./demo_files/lq_sequences',
gt_folder='./demo_files/gt_sequences',
pipeline=test_pipeline,
scale=4,
test_mode=True),
# test
test=dict(
type=val_dataset_type,
lq_folder='./demo_files/lq_sequences',
gt_folder='./demo_files/gt_sequences',
pipeline=test_pipeline,
scale=4,
test_mode=True),
)
# optimizer
optimizers = dict(
generator=dict(
type='Adam',
lr=2e-4,
betas=(0.9, 0.99),
paramwise_cfg=dict(custom_keys={'spynet': dict(lr_mult=0.125)})))
# learning policy
total_iters = 100
lr_config = dict(
policy='CosineRestart',
by_epoch=False,
periods=[300000],
restart_weights=[1],
min_lr=1e-7)
checkpoint_config = dict(interval=5000, save_optimizer=True, by_epoch=False)
# remove gpu_collect=True in non distributed training
evaluation = dict(interval=50, save_image=False, gpu_collect=True)
log_config = dict(
interval=1,
hooks=[
dict(type='TextLoggerHook', by_epoch=False),
# dict(type='TensorboardLoggerHook'),
])
visual_config = None
# runtime settings
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = f'./work_dirs/{exp_name}'
load_from = None
resume_from = None
workflow = [('train', 1)]
find_unused_parameters = True
2021-07-01 12:06:47,291 - mmedit - INFO - Use load_from_http loader
2021-07-01 12:06:47,569 - mmedit - INFO - Start running, host: root@fce870c778f5, work_dir: /content/mmediting/work_dirs/basicvsr_demo
2021-07-01 12:06:47,569 - mmedit - INFO - workflow: [('train', 1)], max: 100 iters
2021-07-01 12:07:14,210 - mmedit - INFO - Iter [1/100] lr_generator: 2.500e-05, eta: 0:42:52, time: 25.981, data_time: 24.045, memory: 3464, loss_pix: 0.0634, loss: 0.0634
2021-07-01 12:07:15,171 - mmedit - INFO - Iter [2/100] lr_generator: 2.500e-05, eta: 0:22:00, time: 0.961, data_time: 0.011, memory: 3518, loss_pix: 0.0556, loss: 0.0556
2021-07-01 12:07:16,052 - mmedit - INFO - Iter [3/100] lr_generator: 2.500e-05, eta: 0:14:59, time: 0.881, data_time: 0.003, memory: 3518, loss_pix: 0.0476, loss: 0.0476
2021-07-01 12:07:16,940 - mmedit - INFO - Iter [4/100] lr_generator: 2.500e-05, eta: 0:11:29, time: 0.888, data_time: 0.003, memory: 3518, loss_pix: 0.0673, loss: 0.0673
2021-07-01 12:07:17,829 - mmedit - INFO - Iter [5/100] lr_generator: 2.500e-05, eta: 0:09:22, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0796, loss: 0.0796
2021-07-01 12:07:18,712 - mmedit - INFO - Iter [6/100] lr_generator: 2.500e-05, eta: 0:07:57, time: 0.884, data_time: 0.004, memory: 3518, loss_pix: 0.0680, loss: 0.0680
2021-07-01 12:07:19,594 - mmedit - INFO - Iter [7/100] lr_generator: 2.500e-05, eta: 0:06:56, time: 0.882, data_time: 0.003, memory: 3518, loss_pix: 0.0607, loss: 0.0607
2021-07-01 12:07:20,481 - mmedit - INFO - Iter [8/100] lr_generator: 2.500e-05, eta: 0:06:10, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0598, loss: 0.0598
2021-07-01 12:07:21,361 - mmedit - INFO - Iter [9/100] lr_generator: 2.500e-05, eta: 0:05:35, time: 0.880, data_time: 0.003, memory: 3518, loss_pix: 0.0664, loss: 0.0664
2021-07-01 12:07:22,274 - mmedit - INFO - Iter [10/100] lr_generator: 2.500e-05, eta: 0:05:06, time: 0.913, data_time: 0.003, memory: 3518, loss_pix: 0.0687, loss: 0.0687
2021-07-01 12:07:23,161 - mmedit - INFO - Iter [11/100] lr_generator: 2.500e-05, eta: 0:04:42, time: 0.887, data_time: 0.005, memory: 3518, loss_pix: 0.0771, loss: 0.0771
2021-07-01 12:07:24,058 - mmedit - INFO - Iter [12/100] lr_generator: 2.500e-05, eta: 0:04:22, time: 0.897, data_time: 0.003, memory: 3518, loss_pix: 0.0521, loss: 0.0521
2021-07-01 12:07:24,944 - mmedit - INFO - Iter [13/100] lr_generator: 2.500e-05, eta: 0:04:05, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0675, loss: 0.0675
2021-07-01 12:07:25,835 - mmedit - INFO - Iter [14/100] lr_generator: 2.500e-05, eta: 0:03:51, time: 0.891, data_time: 0.003, memory: 3518, loss_pix: 0.0515, loss: 0.0515
2021-07-01 12:07:26,723 - mmedit - INFO - Iter [15/100] lr_generator: 2.500e-05, eta: 0:03:38, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0674, loss: 0.0674
2021-07-01 12:07:27,609 - mmedit - INFO - Iter [16/100] lr_generator: 2.500e-05, eta: 0:03:26, time: 0.886, data_time: 0.003, memory: 3518, loss_pix: 0.0579, loss: 0.0579
2021-07-01 12:07:28,498 - mmedit - INFO - Iter [17/100] lr_generator: 2.500e-05, eta: 0:03:16, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0670, loss: 0.0670
2021-07-01 12:07:29,388 - mmedit - INFO - Iter [18/100] lr_generator: 2.500e-05, eta: 0:03:07, time: 0.890, data_time: 0.003, memory: 3518, loss_pix: 0.0663, loss: 0.0663
2021-07-01 12:07:30,284 - mmedit - INFO - Iter [19/100] lr_generator: 2.500e-05, eta: 0:02:59, time: 0.896, data_time: 0.003, memory: 3518, loss_pix: 0.0633, loss: 0.0633
2021-07-01 12:07:31,176 - mmedit - INFO - Iter [20/100] lr_generator: 2.500e-05, eta: 0:02:51, time: 0.891, data_time: 0.003, memory: 3518, loss_pix: 0.0482, loss: 0.0482
2021-07-01 12:07:32,073 - mmedit - INFO - Iter [21/100] lr_generator: 2.500e-05, eta: 0:02:44, time: 0.898, data_time: 0.003, memory: 3518, loss_pix: 0.0743, loss: 0.0743
2021-07-01 12:07:32,966 - mmedit - INFO - Iter [22/100] lr_generator: 2.500e-05, eta: 0:02:38, time: 0.893, data_time: 0.003, memory: 3518, loss_pix: 0.0690, loss: 0.0690
2021-07-01 12:07:33,858 - mmedit - INFO - Iter [23/100] lr_generator: 2.500e-05, eta: 0:02:32, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0437, loss: 0.0437
2021-07-01 12:07:34,750 - mmedit - INFO - Iter [24/100] lr_generator: 2.500e-05, eta: 0:02:27, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0579, loss: 0.0579
2021-07-01 12:07:35,642 - mmedit - INFO - Iter [25/100] lr_generator: 2.500e-05, eta: 0:02:22, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0781, loss: 0.0781
2021-07-01 12:07:36,533 - mmedit - INFO - Iter [26/100] lr_generator: 2.500e-05, eta: 0:02:17, time: 0.891, data_time: 0.003, memory: 3518, loss_pix: 0.0614, loss: 0.0614
2021-07-01 12:07:37,427 - mmedit - INFO - Iter [27/100] lr_generator: 2.500e-05, eta: 0:02:13, time: 0.894, data_time: 0.003, memory: 3518, loss_pix: 0.0688, loss: 0.0688
2021-07-01 12:07:38,321 - mmedit - INFO - Iter [28/100] lr_generator: 2.500e-05, eta: 0:02:08, time: 0.894, data_time: 0.003, memory: 3518, loss_pix: 0.0567, loss: 0.0567
2021-07-01 12:07:39,212 - mmedit - INFO - Iter [29/100] lr_generator: 2.500e-05, eta: 0:02:04, time: 0.891, data_time: 0.003, memory: 3518, loss_pix: 0.0846, loss: 0.0846
2021-07-01 12:07:40,104 - mmedit - INFO - Iter [30/100] lr_generator: 2.500e-05, eta: 0:02:01, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0658, loss: 0.0658
2021-07-01 12:07:40,993 - mmedit - INFO - Iter [31/100] lr_generator: 2.500e-05, eta: 0:01:57, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0502, loss: 0.0502
2021-07-01 12:07:41,885 - mmedit - INFO - Iter [32/100] lr_generator: 2.500e-05, eta: 0:01:54, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0622, loss: 0.0622
2021-07-01 12:07:42,787 - mmedit - INFO - Iter [33/100] lr_generator: 2.500e-05, eta: 0:01:50, time: 0.902, data_time: 0.003, memory: 3518, loss_pix: 0.0657, loss: 0.0657
2021-07-01 12:07:43,674 - mmedit - INFO - Iter [34/100] lr_generator: 2.500e-05, eta: 0:01:47, time: 0.888, data_time: 0.003, memory: 3518, loss_pix: 0.0643, loss: 0.0643
2021-07-01 12:07:44,567 - mmedit - INFO - Iter [35/100] lr_generator: 2.500e-05, eta: 0:01:44, time: 0.893, data_time: 0.003, memory: 3518, loss_pix: 0.0898, loss: 0.0898
2021-07-01 12:07:45,458 - mmedit - INFO - Iter [36/100] lr_generator: 2.500e-05, eta: 0:01:41, time: 0.890, data_time: 0.003, memory: 3518, loss_pix: 0.0865, loss: 0.0865
2021-07-01 12:07:46,341 - mmedit - INFO - Iter [37/100] lr_generator: 2.500e-05, eta: 0:01:38, time: 0.883, data_time: 0.003, memory: 3518, loss_pix: 0.0511, loss: 0.0511
2021-07-01 12:07:47,225 - mmedit - INFO - Iter [38/100] lr_generator: 2.500e-05, eta: 0:01:36, time: 0.884, data_time: 0.003, memory: 3518, loss_pix: 0.0673, loss: 0.0673
2021-07-01 12:07:48,109 - mmedit - INFO - Iter [39/100] lr_generator: 2.500e-05, eta: 0:01:33, time: 0.884, data_time: 0.003, memory: 3518, loss_pix: 0.0653, loss: 0.0653
2021-07-01 12:07:48,990 - mmedit - INFO - Iter [40/100] lr_generator: 2.500e-05, eta: 0:01:31, time: 0.881, data_time: 0.003, memory: 3518, loss_pix: 0.0828, loss: 0.0828
2021-07-01 12:07:49,874 - mmedit - INFO - Iter [41/100] lr_generator: 2.500e-05, eta: 0:01:28, time: 0.885, data_time: 0.003, memory: 3518, loss_pix: 0.0788, loss: 0.0788
2021-07-01 12:07:50,755 - mmedit - INFO - Iter [42/100] lr_generator: 2.500e-05, eta: 0:01:26, time: 0.881, data_time: 0.005, memory: 3518, loss_pix: 0.0605, loss: 0.0605
2021-07-01 12:07:51,638 - mmedit - INFO - Iter [43/100] lr_generator: 2.500e-05, eta: 0:01:24, time: 0.882, data_time: 0.003, memory: 3518, loss_pix: 0.0539, loss: 0.0539
2021-07-01 12:07:52,518 - mmedit - INFO - Iter [44/100] lr_generator: 2.500e-05, eta: 0:01:21, time: 0.881, data_time: 0.003, memory: 3518, loss_pix: 0.0503, loss: 0.0503
2021-07-01 12:07:53,395 - mmedit - INFO - Iter [45/100] lr_generator: 2.500e-05, eta: 0:01:19, time: 0.877, data_time: 0.003, memory: 3518, loss_pix: 0.0475, loss: 0.0475
2021-07-01 12:07:54,273 - mmedit - INFO - Iter [46/100] lr_generator: 2.500e-05, eta: 0:01:17, time: 0.878, data_time: 0.003, memory: 3518, loss_pix: 0.0555, loss: 0.0555
2021-07-01 12:07:55,159 - mmedit - INFO - Iter [47/100] lr_generator: 2.500e-05, eta: 0:01:15, time: 0.886, data_time: 0.007, memory: 3518, loss_pix: 0.0806, loss: 0.0806
2021-07-01 12:07:56,040 - mmedit - INFO - Iter [48/100] lr_generator: 2.500e-05, eta: 0:01:13, time: 0.881, data_time: 0.003, memory: 3518, loss_pix: 0.0576, loss: 0.0576
2021-07-01 12:07:56,919 - mmedit - INFO - Iter [49/100] lr_generator: 2.500e-05, eta: 0:01:11, time: 0.879, data_time: 0.003, memory: 3518, loss_pix: 0.0871, loss: 0.0871
[>>] 2/2, 0.1 task/s, elapsed: 34s, ETA: 0s
2021-07-01 12:08:32,618 - mmedit - INFO - Iter(val) [50] PSNR: 20.9899, SSIM: 0.5295
2021-07-01 12:08:33,591 - mmedit - INFO - Iter [51/100] lr_generator: 2.500e-05, eta: 0:01:40, time: 35.792, data_time: 34.822, memory: 3518, loss_pix: 0.0773, loss: 0.0773
2021-07-01 12:08:34,462 - mmedit - INFO - Iter [52/100] lr_generator: 2.500e-05, eta: 0:01:37, time: 0.871, data_time: 0.003, memory: 3518, loss_pix: 0.0687, loss: 0.0687
2021-07-01 12:08:35,326 - mmedit - INFO - Iter [53/100] lr_generator: 2.500e-05, eta: 0:01:34, time: 0.864, data_time: 0.003, memory: 3518, loss_pix: 0.0665, loss: 0.0665
2021-07-01 12:08:36,192 - mmedit - INFO - Iter [54/100] lr_generator: 2.500e-05, eta: 0:01:31, time: 0.867, data_time: 0.003, memory: 3518, loss_pix: 0.0575, loss: 0.0575
2021-07-01 12:08:37,069 - mmedit - INFO - Iter [55/100] lr_generator: 2.500e-05, eta: 0:01:28, time: 0.876, data_time: 0.003, memory: 3518, loss_pix: 0.0842, loss: 0.0842
2021-07-01 12:08:37,942 - mmedit - INFO - Iter [56/100] lr_generator: 2.500e-05, eta: 0:01:25, time: 0.873, data_time: 0.003, memory: 3518, loss_pix: 0.0836, loss: 0.0836
2021-07-01 12:08:38,808 - mmedit - INFO - Iter [57/100] lr_generator: 2.500e-05, eta: 0:01:22, time: 0.867, data_time: 0.003, memory: 3518, loss_pix: 0.0580, loss: 0.0580
2021-07-01 12:08:39,679 - mmedit - INFO - Iter [58/100] lr_generator: 2.500e-05, eta: 0:01:20, time: 0.870, data_time: 0.003, memory: 3518, loss_pix: 0.0449, loss: 0.0449
2021-07-01 12:08:40,557 - mmedit - INFO - Iter [59/100] lr_generator: 2.500e-05, eta: 0:01:17, time: 0.878, data_time: 0.003, memory: 3518, loss_pix: 0.0702, loss: 0.0702
2021-07-01 12:08:41,430 - mmedit - INFO - Iter [60/100] lr_generator: 2.500e-05, eta: 0:01:14, time: 0.874, data_time: 0.003, memory: 3518, loss_pix: 0.0627, loss: 0.0627
2021-07-01 12:08:42,308 - mmedit - INFO - Iter [61/100] lr_generator: 2.500e-05, eta: 0:01:12, time: 0.878, data_time: 0.003, memory: 3518, loss_pix: 0.0716, loss: 0.0716
2021-07-01 12:08:43,182 - mmedit - INFO - Iter [62/100] lr_generator: 2.500e-05, eta: 0:01:09, time: 0.874, data_time: 0.003, memory: 3518, loss_pix: 0.0489, loss: 0.0489
2021-07-01 12:08:44,056 - mmedit - INFO - Iter [63/100] lr_generator: 2.500e-05, eta: 0:01:07, time: 0.874, data_time: 0.003, memory: 3518, loss_pix: 0.0566, loss: 0.0566
2021-07-01 12:08:44,932 - mmedit - INFO - Iter [64/100] lr_generator: 2.500e-05, eta: 0:01:05, time: 0.875, data_time: 0.003, memory: 3518, loss_pix: 0.0597, loss: 0.0597
2021-07-01 12:08:45,819 - mmedit - INFO - Iter [65/100] lr_generator: 2.500e-05, eta: 0:01:02, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0640, loss: 0.0640
2021-07-01 12:08:46,698 - mmedit - INFO - Iter [66/100] lr_generator: 2.500e-05, eta: 0:01:00, time: 0.879, data_time: 0.003, memory: 3518, loss_pix: 0.0665, loss: 0.0665
2021-07-01 12:08:47,580 - mmedit - INFO - Iter [67/100] lr_generator: 2.500e-05, eta: 0:00:58, time: 0.882, data_time: 0.003, memory: 3518, loss_pix: 0.0675, loss: 0.0675
2021-07-01 12:08:48,464 - mmedit - INFO - Iter [68/100] lr_generator: 2.500e-05, eta: 0:00:56, time: 0.883, data_time: 0.003, memory: 3518, loss_pix: 0.0641, loss: 0.0641
2021-07-01 12:08:49,347 - mmedit - INFO - Iter [69/100] lr_generator: 2.500e-05, eta: 0:00:54, time: 0.883, data_time: 0.003, memory: 3518, loss_pix: 0.0603, loss: 0.0603
2021-07-01 12:08:50,229 - mmedit - INFO - Iter [70/100] lr_generator: 2.500e-05, eta: 0:00:51, time: 0.882, data_time: 0.003, memory: 3518, loss_pix: 0.0478, loss: 0.0478
2021-07-01 12:08:51,113 - mmedit - INFO - Iter [71/100] lr_generator: 2.500e-05, eta: 0:00:49, time: 0.884, data_time: 0.003, memory: 3518, loss_pix: 0.0691, loss: 0.0691
2021-07-01 12:08:52,000 - mmedit - INFO - Iter [72/100] lr_generator: 2.500e-05, eta: 0:00:47, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0861, loss: 0.0861
2021-07-01 12:08:52,890 - mmedit - INFO - Iter [73/100] lr_generator: 2.500e-05, eta: 0:00:45, time: 0.890, data_time: 0.003, memory: 3518, loss_pix: 0.0688, loss: 0.0688
2021-07-01 12:08:53,792 - mmedit - INFO - Iter [74/100] lr_generator: 2.500e-05, eta: 0:00:43, time: 0.903, data_time: 0.003, memory: 3518, loss_pix: 0.0787, loss: 0.0787
2021-07-01 12:08:54,688 - mmedit - INFO - Iter [75/100] lr_generator: 2.500e-05, eta: 0:00:41, time: 0.896, data_time: 0.003, memory: 3518, loss_pix: 0.0744, loss: 0.0744
2021-07-01 12:08:55,582 - mmedit - INFO - Iter [76/100] lr_generator: 2.500e-05, eta: 0:00:39, time: 0.895, data_time: 0.003, memory: 3518, loss_pix: 0.0792, loss: 0.0792
2021-07-01 12:08:56,476 - mmedit - INFO - Iter [77/100] lr_generator: 2.500e-05, eta: 0:00:38, time: 0.894, data_time: 0.003, memory: 3518, loss_pix: 0.0645, loss: 0.0645
2021-07-01 12:08:57,368 - mmedit - INFO - Iter [78/100] lr_generator: 2.500e-05, eta: 0:00:36, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0475, loss: 0.0475
2021-07-01 12:08:58,261 - mmedit - INFO - Iter [79/100] lr_generator: 2.500e-05, eta: 0:00:34, time: 0.893, data_time: 0.003, memory: 3518, loss_pix: 0.0627, loss: 0.0627
2021-07-01 12:08:59,159 - mmedit - INFO - Iter [80/100] lr_generator: 2.500e-05, eta: 0:00:32, time: 0.897, data_time: 0.003, memory: 3518, loss_pix: 0.0626, loss: 0.0626
2021-07-01 12:09:00,055 - mmedit - INFO - Iter [81/100] lr_generator: 2.500e-05, eta: 0:00:30, time: 0.896, data_time: 0.004, memory: 3518, loss_pix: 0.0681, loss: 0.0681
2021-07-01 12:09:00,954 - mmedit - INFO - Iter [82/100] lr_generator: 2.500e-05, eta: 0:00:28, time: 0.900, data_time: 0.003, memory: 3518, loss_pix: 0.0671, loss: 0.0671
2021-07-01 12:09:01,860 - mmedit - INFO - Iter [83/100] lr_generator: 2.500e-05, eta: 0:00:27, time: 0.906, data_time: 0.003, memory: 3518, loss_pix: 0.0825, loss: 0.0825
2021-07-01 12:09:02,760 - mmedit - INFO - Iter [84/100] lr_generator: 2.500e-05, eta: 0:00:25, time: 0.900, data_time: 0.003, memory: 3518, loss_pix: 0.0594, loss: 0.0594
2021-07-01 12:09:03,658 - mmedit - INFO - Iter [85/100] lr_generator: 2.500e-05, eta: 0:00:23, time: 0.898, data_time: 0.003, memory: 3518, loss_pix: 0.0446, loss: 0.0446
2021-07-01 12:09:04,555 - mmedit - INFO - Iter [86/100] lr_generator: 2.500e-05, eta: 0:00:22, time: 0.897, data_time: 0.003, memory: 3518, loss_pix: 0.0491, loss: 0.0491
2021-07-01 12:09:05,452 - mmedit - INFO - Iter [87/100] lr_generator: 2.500e-05, eta: 0:00:20, time: 0.896, data_time: 0.003, memory: 3518, loss_pix: 0.0450, loss: 0.0450
2021-07-01 12:09:06,351 - mmedit - INFO - Iter [88/100] lr_generator: 2.500e-05, eta: 0:00:18, time: 0.900, data_time: 0.003, memory: 3518, loss_pix: 0.0795, loss: 0.0795
2021-07-01 12:09:07,257 - mmedit - INFO - Iter [89/100] lr_generator: 2.500e-05, eta: 0:00:17, time: 0.905, data_time: 0.003, memory: 3518, loss_pix: 0.0522, loss: 0.0522
2021-07-01 12:09:08,161 - mmedit - INFO - Iter [90/100] lr_generator: 2.500e-05, eta: 0:00:15, time: 0.904, data_time: 0.003, memory: 3518, loss_pix: 0.0588, loss: 0.0588
2021-07-01 12:09:09,063 - mmedit - INFO - Iter [91/100] lr_generator: 2.500e-05, eta: 0:00:13, time: 0.902, data_time: 0.003, memory: 3518, loss_pix: 0.0614, loss: 0.0614
2021-07-01 12:09:09,955 - mmedit - INFO - Iter [92/100] lr_generator: 2.500e-05, eta: 0:00:12, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0599, loss: 0.0599
2021-07-01 12:09:10,849 - mmedit - INFO - Iter [93/100] lr_generator: 2.500e-05, eta: 0:00:10, time: 0.894, data_time: 0.003, memory: 3518, loss_pix: 0.0522, loss: 0.0522
2021-07-01 12:09:11,749 - mmedit - INFO - Iter [94/100] lr_generator: 2.500e-05, eta: 0:00:09, time: 0.900, data_time: 0.003, memory: 3518, loss_pix: 0.0667, loss: 0.0667
2021-07-01 12:09:12,638 - mmedit - INFO - Iter [95/100] lr_generator: 2.500e-05, eta: 0:00:07, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0558, loss: 0.0558
2021-07-01 12:09:13,576 - mmedit - INFO - Iter [96/100] lr_generator: 2.500e-05, eta: 0:00:06, time: 0.938, data_time: 0.052, memory: 3518, loss_pix: 0.0577, loss: 0.0577
2021-07-01 12:09:14,463 - mmedit - INFO - Iter [97/100] lr_generator: 2.500e-05, eta: 0:00:04, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0574, loss: 0.0574
2021-07-01 12:09:15,351 - mmedit - INFO - Iter [98/100] lr_generator: 2.500e-05, eta: 0:00:02, time: 0.888, data_time: 0.003, memory: 3518, loss_pix: 0.0578, loss: 0.0578
2021-07-01 12:09:16,240 - mmedit - INFO - Iter [99/100] lr_generator: 2.500e-05, eta: 0:00:01, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0556, loss: 0.0556
[>>] 2/2, 0.1 task/s, elapsed: 34s, ETA: 0s
2021-07-01 12:09:52,294 - mmedit - INFO - Saving checkpoint at 100 iterations
2021-07-01 12:09:52,433 - mmedit - INFO - Iter(val) [100] PSNR: 21.4372, SSIM: 0.5687
###Markdown
MMEditing Basic Tutorial

Welcome to MMEditing! This is the official Colab tutorial of MMEditing. In this tutorial you will learn how to train and test restorers using the APIs provided in MMEditing. It is a quick guide to training and testing existing models. If you want to develop your own model based on MMEditing and learn more about the code structure, please refer to our [comprehensive tutorial]().

[Open in Colab](https://colab.research.google.com/github/open-mmlab/mmedit/blob/main/demo/restorer_basic_tutorial.ipynb)

Install MMEditing

MMEditing can be installed in three steps:

1. Install a compatible PyTorch version (check your CUDA version with `nvcc -V`).
2. Install the pre-compiled MMCV.
3. Clone and install MMEditing.

The steps are shown below:
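Once the installation cells below have finished, a quick sanity check such as the following sketch (not part of the original tutorial) confirms that the packages import correctly and that a GPU is visible:

```
# Sanity check after installation (added sketch; the printed versions are only examples)
import torch
import mmcv
import mmedit

print(torch.__version__, torch.cuda.is_available())  # e.g. 1.7.0+cu110 True
print(mmcv.__version__)                               # e.g. 1.3.5
print(mmedit.__version__)                             # e.g. 0.8.0
```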
###Code
# Check nvcc version
!nvcc -V
# Check GCC version (MMEditing needs gcc >= 5.0)
!gcc --version
# Install openmim, which is used to install mmcv-full
!pip install openmim
# Install mmcv-full so that CUDA operators can be used
!mim install mmcv-full
# Clone MMEditing
!rm -rf mmediting
!git clone https://github.com/open-mmlab/mmediting.git
%cd mmediting
# Install MMEditing
!pip install -v -e .
###Output
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.7.0+cu110
[?25l Downloading https://download.pytorch.org/whl/cu110/torch-1.7.0%2Bcu110-cp37-cp37m-linux_x86_64.whl (1137.1MB)
[K |███████████████████████▌ | 834.1MB 1.3MB/s eta 0:03:50tcmalloc: large alloc 1147494400 bytes == 0x56458d07a000 @ 0x7fce190c6615 0x5645535bfcdc 0x56455369f52a 0x5645535c2afd 0x5645536b3fed 0x564553636988 0x5645536314ae 0x5645535c43ea 0x5645536367f0 0x5645536314ae 0x5645535c43ea 0x56455363332a 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x5645537373e1 0x5645536976a9 0x564553602cc4 0x5645535c3559 0x5645536374f8 0x5645535c430a 0x5645536323b5 0x5645536317ad 0x5645535c43ea 0x5645536323b5 0x5645535c430a 0x5645536323b5
[K |█████████████████████████████▊ | 1055.7MB 1.2MB/s eta 0:01:07tcmalloc: large alloc 1434370048 bytes == 0x5645d16d0000 @ 0x7fce190c6615 0x5645535bfcdc 0x56455369f52a 0x5645535c2afd 0x5645536b3fed 0x564553636988 0x5645536314ae 0x5645535c43ea 0x5645536367f0 0x5645536314ae 0x5645535c43ea 0x56455363332a 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x564553632853 0x5645536b4e36 0x5645537373e1 0x5645536976a9 0x564553602cc4 0x5645535c3559 0x5645536374f8 0x5645535c430a 0x5645536323b5 0x5645536317ad 0x5645535c43ea 0x5645536323b5 0x5645535c430a 0x5645536323b5
[K |████████████████████████████████| 1137.1MB 1.1MB/s eta 0:00:01tcmalloc: large alloc 1421369344 bytes == 0x564626ebc000 @ 0x7fce190c6615 0x5645535bfcdc 0x56455369f52a 0x5645535c2afd 0x5645536b3fed 0x564553636988 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363260e 0x5645535c430a 0x56455363260e 0x5645536314ae 0x5645535c43ea 0x56455363332a 0x5645536314ae 0x5645535c43ea 0x56455363332a 0x5645536314ae 0x5645535c4a81
[K |████████████████████████████████| 1137.1MB 16kB/s
[?25hCollecting torchvision==0.8.0
[?25l Downloading https://files.pythonhosted.org/packages/1d/3f/4f45249458a0dee85bff7acf4a2ac6177708253f1f318fcf6ee230fb864f/torchvision-0.8.0-cp37-cp37m-manylinux1_x86_64.whl (11.8MB)
[K |████████████████████████████████| 11.8MB 254kB/s
[?25hRequirement already satisfied, skipping upgrade: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0+cu110) (3.7.4.3)
Collecting dataclasses
Downloading https://files.pythonhosted.org/packages/26/2f/1095cdc2868052dd1e64520f7c0d5c8c550ad297e944e641dbf1ffbb9a5d/dataclasses-0.6-py3-none-any.whl
Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0+cu110) (1.19.5)
Requirement already satisfied, skipping upgrade: future in /usr/local/lib/python3.7/dist-packages (from torch==1.7.0+cu110) (0.16.0)
Requirement already satisfied, skipping upgrade: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.8.0) (7.1.2)
[31mERROR: torchtext 0.10.0 has requirement torch==1.9.0, but you'll have torch 1.7.0+cu110 which is incompatible.[0m
Installing collected packages: dataclasses, torch, torchvision
Found existing installation: torch 1.9.0+cu102
Uninstalling torch-1.9.0+cu102:
Successfully uninstalled torch-1.9.0+cu102
Found existing installation: torchvision 0.10.0+cu102
Uninstalling torchvision-0.10.0+cu102:
Successfully uninstalled torchvision-0.10.0+cu102
Successfully installed dataclasses-0.6 torch-1.7.0+cu110 torchvision-0.8.0
Looking in links: https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html
Collecting mmcv-full==1.3.5
[?25l Downloading https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/mmcv_full-1.3.5-cp37-cp37m-manylinux1_x86_64.whl (31.1MB)
[K |████████████████████████████████| 31.1MB 107kB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmcv-full==1.3.5) (1.19.5)
Collecting addict
Downloading https://files.pythonhosted.org/packages/6a/00/b08f23b7d7e1e14ce01419a467b583edbb93c6cdb8654e54a9cc579cd61f/addict-2.4.0-py3-none-any.whl
Collecting yapf
[?25l Downloading https://files.pythonhosted.org/packages/5f/0d/8814e79eb865eab42d95023b58b650d01dec6f8ea87fc9260978b1bf2167/yapf-0.31.0-py2.py3-none-any.whl (185kB)
[K |████████████████████████████████| 194kB 33.0MB/s
[?25hRequirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from mmcv-full==1.3.5) (7.1.2)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from mmcv-full==1.3.5) (3.13)
Requirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.7/dist-packages (from mmcv-full==1.3.5) (4.1.2.30)
Installing collected packages: addict, yapf, mmcv-full
Successfully installed addict-2.4.0 mmcv-full-1.3.5 yapf-0.31.0
Cloning into 'mmediting'...
remote: Enumerating objects: 7162, done.[K
remote: Counting objects: 100% (1367/1367), done.[K
remote: Compressing objects: 100% (793/793), done.[K
remote: Total 7162 (delta 827), reused 928 (delta 554), pack-reused 5795[K
Receiving objects: 100% (7162/7162), 5.02 MiB | 32.14 MiB/s, done.
Resolving deltas: 100% (4826/4826), done.
/content/mmediting
Requirement already satisfied: lmdb in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 1)) (0.99)
Requirement already satisfied: mmcv-full>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 2)) (1.3.5)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 3)) (0.16.2)
Requirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 4)) (2.5.0)
Requirement already satisfied: yapf in /usr/local/lib/python3.7/dist-packages (from -r requirements/runtime.txt (line 5)) (0.31.0)
Collecting codecov
Downloading https://files.pythonhosted.org/packages/93/9f/bbea5b6231308458963cb5c067bc5643da9949689702fa5a382714b59699/codecov-2.1.11-py2.py3-none-any.whl
Collecting flake8
[?25l Downloading https://files.pythonhosted.org/packages/fc/80/35a0716e5d5101e643404dabd20f07f5528a21f3ef4032d31a49c913237b/flake8-3.9.2-py2.py3-none-any.whl (73kB)
[K |████████████████████████████████| 81kB 9.7MB/s
[?25hCollecting interrogate
Downloading https://files.pythonhosted.org/packages/cd/6d/ce3ac440b13c1b36b323a0eab191499a902adade3cc11b18078c07af3e6e/interrogate-1.4.0-py3-none-any.whl
Collecting isort==4.3.21
[?25l Downloading https://files.pythonhosted.org/packages/e5/b0/c121fd1fa3419ea9bfd55c7f9c4fedfec5143208d8c7ad3ce3db6c623c21/isort-4.3.21-py2.py3-none-any.whl (42kB)
[K |████████████████████████████████| 51kB 7.5MB/s
[?25hCollecting onnxruntime
[?25l Downloading https://files.pythonhosted.org/packages/f9/76/3d0f8bb2776961c7335693df06eccf8d099e48fa6fb552c7546867192603/onnxruntime-1.8.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.5MB)
[K |████████████████████████████████| 4.5MB 37.4MB/s
[?25hRequirement already satisfied: pytest in /usr/local/lib/python3.7/dist-packages (from -r requirements/tests.txt (line 6)) (3.6.4)
Collecting pytest-runner
Downloading https://files.pythonhosted.org/packages/f4/f5/6605d73bf3f4c198915872111b10c4b3c2dccd8485f47b7290ceef037190/pytest_runner-5.3.1-py3-none-any.whl
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (1.19.5)
Requirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (7.1.2)
Requirement already satisfied: addict in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (2.4.0)
Requirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (4.1.2.30)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->-r requirements/runtime.txt (line 2)) (3.13)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (2.5.1)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (2.4.1)
Requirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (1.4.1)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (1.1.1)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->-r requirements/runtime.txt (line 3)) (3.2.2)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (0.36.2)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (1.34.1)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (0.6.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (1.8.0)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (2.23.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (0.4.4)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (57.0.0)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (0.12.0)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (3.12.4)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (3.3.4)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (1.31.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->-r requirements/runtime.txt (line 4)) (1.0.1)
Requirement already satisfied: coverage in /usr/local/lib/python3.7/dist-packages (from codecov->-r requirements/tests.txt (line 1)) (3.7.1)
Collecting pyflakes<2.4.0,>=2.3.0
[?25l Downloading https://files.pythonhosted.org/packages/6c/11/2a745612f1d3cbbd9c69ba14b1b43a35a2f5c3c81cd0124508c52c64307f/pyflakes-2.3.1-py2.py3-none-any.whl (68kB)
[K |████████████████████████████████| 71kB 9.8MB/s
[?25hCollecting pycodestyle<2.8.0,>=2.7.0
[?25l Downloading https://files.pythonhosted.org/packages/de/cc/227251b1471f129bc35e966bb0fceb005969023926d744139642d847b7ae/pycodestyle-2.7.0-py2.py3-none-any.whl (41kB)
[K |████████████████████████████████| 51kB 8.7MB/s
[?25hCollecting mccabe<0.7.0,>=0.6.0
Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from flake8->-r requirements/tests.txt (line 2)) (4.5.0)
Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (0.8.9)
Collecting colorama
Downloading https://files.pythonhosted.org/packages/44/98/5b86278fbbf250d239ae0ecb724f8572af1c91f4a11edf4d36a206189440/colorama-0.4.4-py2.py3-none-any.whl
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (7.1.2)
Requirement already satisfied: toml in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (0.10.2)
Requirement already satisfied: py in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (1.10.0)
Requirement already satisfied: attrs in /usr/local/lib/python3.7/dist-packages (from interrogate->-r requirements/tests.txt (line 3)) (21.2.0)
Requirement already satisfied: flatbuffers in /usr/local/lib/python3.7/dist-packages (from onnxruntime->-r requirements/tests.txt (line 5)) (1.12)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 6)) (8.8.0)
Requirement already satisfied: pluggy<0.8,>=0.5 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 6)) (0.7.1)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 6)) (1.15.0)
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.7/dist-packages (from pytest->-r requirements/tests.txt (line 6)) (1.4.0)
Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.0->scikit-image->-r requirements/runtime.txt (line 3)) (4.4.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements/runtime.txt (line 3)) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements/runtime.txt (line 3)) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements/runtime.txt (line 3)) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->-r requirements/runtime.txt (line 3)) (1.3.1)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements/runtime.txt (line 4)) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements/runtime.txt (line 4)) (2021.5.30)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements/runtime.txt (line 4)) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->-r requirements/runtime.txt (line 4)) (2.10)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->-r requirements/runtime.txt (line 4)) (1.3.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements/runtime.txt (line 4)) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements/runtime.txt (line 4)) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->-r requirements/runtime.txt (line 4)) (4.2.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->flake8->-r requirements/tests.txt (line 2)) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->flake8->-r requirements/tests.txt (line 2)) (3.7.4.3)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->-r requirements/runtime.txt (line 4)) (3.1.1)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.7/dist-packages (from rsa<5,>=3.1.4; python_version >= "3.6"->google-auth<2,>=1.6.3->tensorboard->-r requirements/runtime.txt (line 4)) (0.4.8)
Installing collected packages: codecov, pyflakes, pycodestyle, mccabe, flake8, colorama, interrogate, isort, onnxruntime, pytest-runner
Successfully installed codecov-2.1.11 colorama-0.4.4 flake8-3.9.2 interrogate-1.4.0 isort-4.3.21 mccabe-0.6.1 onnxruntime-1.8.0 pycodestyle-2.7.0 pyflakes-2.3.1 pytest-runner-5.3.1
Created temporary directory: /tmp/pip-ephem-wheel-cache-hu6xvjxh
Created temporary directory: /tmp/pip-req-tracker-zk5q0q3z
Created requirements tracker '/tmp/pip-req-tracker-zk5q0q3z'
Created temporary directory: /tmp/pip-install-vr_vpseo
Obtaining file:///content/mmediting
Added file:///content/mmediting to build tracker '/tmp/pip-req-tracker-zk5q0q3z'
Running setup.py (path:/content/mmediting/setup.py) egg_info for package from file:///content/mmediting
Running command python setup.py egg_info
running egg_info
creating mmedit.egg-info
writing mmedit.egg-info/PKG-INFO
writing dependency_links to mmedit.egg-info/dependency_links.txt
writing requirements to mmedit.egg-info/requires.txt
writing top-level names to mmedit.egg-info/top_level.txt
writing manifest file 'mmedit.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'mmedit/VERSION'
warning: no files found matching 'mmedit/model_zoo.yml'
warning: no files found matching '*.py' under directory 'mmedit/configs'
warning: no files found matching '*.yml' under directory 'mmedit/configs'
warning: no files found matching '*.sh' under directory 'mmedit/tools'
warning: no files found matching '*.py' under directory 'mmedit/tools'
adding license file 'LICENSE'
writing manifest file 'mmedit.egg-info/SOURCES.txt'
Source in /content/mmediting has version 0.8.0, which satisfies requirement mmedit==0.8.0 from file:///content/mmediting
Removed mmedit==0.8.0 from file:///content/mmediting from build tracker '/tmp/pip-req-tracker-zk5q0q3z'
Requirement already satisfied: lmdb in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (0.99)
Requirement already satisfied: mmcv-full>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (1.3.5)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (0.16.2)
Requirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (2.5.0)
Requirement already satisfied: yapf in /usr/local/lib/python3.7/dist-packages (from mmedit==0.8.0) (0.31.0)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (3.13)
Requirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (4.1.2.30)
Requirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (7.1.2)
Requirement already satisfied: addict in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (2.4.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mmcv-full>=1.2.0->mmedit==0.8.0) (1.19.5)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (2.5.1)
Requirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (1.4.1)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (2.4.1)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (3.2.2)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image->mmedit==0.8.0) (1.1.1)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (1.34.1)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (3.12.4)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (1.0.1)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (0.36.2)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (1.31.0)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (0.12.0)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (2.23.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (1.8.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (3.3.4)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (0.4.4)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (0.6.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->mmedit==0.8.0) (57.0.0)
Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.0->scikit-image->mmedit==0.8.0) (4.4.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->mmedit==0.8.0) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->mmedit==0.8.0) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->mmedit==0.8.0) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->mmedit==0.8.0) (1.3.1)
Requirement already satisfied: six>=1.5.2 in /usr/local/lib/python3.7/dist-packages (from grpcio>=1.24.3->tensorboard->mmedit==0.8.0) (1.15.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->mmedit==0.8.0) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->mmedit==0.8.0) (4.2.2)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->mmedit==0.8.0) (4.7.2)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->mmedit==0.8.0) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->mmedit==0.8.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->mmedit==0.8.0) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->mmedit==0.8.0) (2021.5.30)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard->mmedit==0.8.0) (4.5.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->mmedit==0.8.0) (1.3.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard->mmedit==0.8.0) (0.4.8)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard->mmedit==0.8.0) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard->mmedit==0.8.0) (3.7.4.3)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->mmedit==0.8.0) (3.1.1)
Installing collected packages: mmedit
Running setup.py develop for mmedit
Running command /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/content/mmediting/setup.py'"'"'; __file__='"'"'/content/mmediting/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
running develop
running egg_info
writing mmedit.egg-info/PKG-INFO
writing dependency_links to mmedit.egg-info/dependency_links.txt
writing requirements to mmedit.egg-info/requires.txt
writing top-level names to mmedit.egg-info/top_level.txt
reading manifest template 'MANIFEST.in'
warning: no files found matching 'mmedit/VERSION'
warning: no files found matching 'mmedit/model_zoo.yml'
warning: no files found matching '*.py' under directory 'mmedit/configs'
warning: no files found matching '*.yml' under directory 'mmedit/configs'
warning: no files found matching '*.sh' under directory 'mmedit/tools'
warning: no files found matching '*.py' under directory 'mmedit/tools'
adding license file 'LICENSE'
writing manifest file 'mmedit.egg-info/SOURCES.txt'
running build_ext
Creating /usr/local/lib/python3.7/dist-packages/mmedit.egg-link (link to .)
Adding mmedit 0.8.0 to easy-install.pth file
Installed /content/mmediting
Successfully installed mmedit
Cleaning up...
Removed build tracker '/tmp/pip-req-tracker-zk5q0q3z'
###Markdown
Download the materials needed for this demo

In this demo we will need some data and config files. We will download them and put them in `./demo_files/`.
###Code
!wget https://download.openmmlab.com/mmediting/demo_files.zip  # download the files
!unzip demo_files  # unzip
# Copy the data to data/val_set5 for later use
!mkdir data
!mkdir data/val_set5
!cp -r demo_files/lq_images data/val_set5/Set5_bicLRx4
!cp -r demo_files/gt_images data/val_set5/Set5
###Output
--2021-07-01 11:59:48-- https://download.openmmlab.com/mmediting/demo_files.zip
Resolving download.openmmlab.com (download.openmmlab.com)... 47.252.96.35
Connecting to download.openmmlab.com (download.openmmlab.com)|47.252.96.35|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19215781 (18M) [application/zip]
Saving to: ‘demo_files.zip’
demo_files.zip 100%[===================>] 18.33M 6.00MB/s in 3.1s
2021-07-01 11:59:52 (6.00 MB/s) - ‘demo_files.zip’ saved [19215781/19215781]
Archive: demo_files.zip
creating: demo_files/
inflating: demo_files/demo_config_EDVR.py
inflating: demo_files/demo_config_BasicVSR.py
creating: demo_files/lq_sequences/
creating: demo_files/lq_sequences/calendar/
inflating: demo_files/lq_sequences/calendar/00000006.png
inflating: demo_files/lq_sequences/calendar/00000007.png
inflating: demo_files/lq_sequences/calendar/00000010.png
inflating: demo_files/lq_sequences/calendar/00000004.png
inflating: demo_files/lq_sequences/calendar/00000003.png
inflating: demo_files/lq_sequences/calendar/00000001.png
inflating: demo_files/lq_sequences/calendar/00000000.png
inflating: demo_files/lq_sequences/calendar/00000009.png
inflating: demo_files/lq_sequences/calendar/00000008.png
inflating: demo_files/lq_sequences/calendar/00000002.png
inflating: demo_files/lq_sequences/calendar/00000005.png
creating: demo_files/lq_sequences/city/
inflating: demo_files/lq_sequences/city/00000006.png
inflating: demo_files/lq_sequences/city/00000007.png
inflating: demo_files/lq_sequences/city/00000010.png
inflating: demo_files/lq_sequences/city/00000004.png
inflating: demo_files/lq_sequences/city/00000003.png
inflating: demo_files/lq_sequences/city/00000001.png
inflating: demo_files/lq_sequences/city/00000000.png
inflating: demo_files/lq_sequences/city/00000009.png
inflating: demo_files/lq_sequences/city/00000008.png
inflating: demo_files/lq_sequences/city/00000002.png
inflating: demo_files/lq_sequences/city/00000005.png
creating: demo_files/lq_sequences/.ipynb_checkpoints/
creating: demo_files/gt_images/
inflating: demo_files/gt_images/bird.png
inflating: demo_files/gt_images/woman.png
inflating: demo_files/gt_images/head.png
inflating: demo_files/gt_images/baby.png
inflating: demo_files/gt_images/butterfly.png
inflating: demo_files/demo_config_SRCNN.py
creating: demo_files/lq_images/
extracting: demo_files/lq_images/bird.png
extracting: demo_files/lq_images/woman.png
extracting: demo_files/lq_images/head.png
extracting: demo_files/lq_images/baby.png
extracting: demo_files/lq_images/butterfly.png
creating: demo_files/gt_sequences/
creating: demo_files/gt_sequences/calendar/
inflating: demo_files/gt_sequences/calendar/00000006.png
inflating: demo_files/gt_sequences/calendar/00000007.png
inflating: demo_files/gt_sequences/calendar/00000010.png
inflating: demo_files/gt_sequences/calendar/00000004.png
inflating: demo_files/gt_sequences/calendar/00000003.png
inflating: demo_files/gt_sequences/calendar/00000001.png
inflating: demo_files/gt_sequences/calendar/00000000.png
inflating: demo_files/gt_sequences/calendar/00000009.png
inflating: demo_files/gt_sequences/calendar/00000008.png
inflating: demo_files/gt_sequences/calendar/00000002.png
inflating: demo_files/gt_sequences/calendar/00000005.png
creating: demo_files/gt_sequences/city/
inflating: demo_files/gt_sequences/city/00000006.png
inflating: demo_files/gt_sequences/city/00000007.png
inflating: demo_files/gt_sequences/city/00000010.png
inflating: demo_files/gt_sequences/city/00000004.png
inflating: demo_files/gt_sequences/city/00000003.png
inflating: demo_files/gt_sequences/city/00000001.png
inflating: demo_files/gt_sequences/city/00000000.png
inflating: demo_files/gt_sequences/city/00000009.png
inflating: demo_files/gt_sequences/city/00000008.png
inflating: demo_files/gt_sequences/city/00000002.png
inflating: demo_files/gt_sequences/city/00000005.png
creating: demo_files/gt_sequences/.ipynb_checkpoints/
creating: demo_files/.ipynb_checkpoints/
###Markdown
Inference with a pre-trained image restorer

You can easily run inference on a single image with a pre-trained restorer using `restoration_demo.py`. What you need is:

1. `CONFIG_FILE`: the config file corresponding to the restorer you want to use. It specifies the model.
2. `CHECKPOINT_FILE`: the path to the pre-trained model weights.
3. `IMAGE_FILE`: the path to the input image.
4. `SAVE_FILE`: where you want to store the output image.
5. `imshow`: whether to show the image. (optional)
6. `GPU_ID`: which GPU you want to use. (optional)

Once you have all these details, you can directly use the following command:

```
python demo/restoration_demo.py ${CONFIG_FILE} ${CHECKPOINT_FILE} ${IMAGE_FILE} ${SAVE_FILE} [--imshow] [--device ${GPU_ID}]
```

**Notes:**

1. The config files are located in `./configs`.
2. We support loading weight files from URLs. You can go to the corresponding page (e.g. [here](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/esrgan)) to get the URL of a pre-trained model.

---

We will now use `SRCNN` and `ESRGAN` as examples.
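Besides the command-line demo, the same single-image inference can be sketched through the Python API. The snippet below is a sketch based on how `demo/restoration_demo.py` is structured in mmedit 0.x; the helper names (`init_model`, `restoration_inference`, `tensor2img`) and the output file name are assumptions, so check the demo script for the authoritative version:

```
# Sketch of single-image restoration through the Python API (names assumed from mmedit 0.x).
import mmcv
from mmedit.apis import init_model, restoration_inference
from mmedit.core import tensor2img

config = './configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py'
checkpoint = 'https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth'

model = init_model(config, checkpoint, device='cuda:0')           # build the model and load the weights
output = restoration_inference(model, './demo_files/lq_images/bird.png')
mmcv.imwrite(tensor2img(output), './outputs/bird_SRCNN_api.png')  # hypothetical output path
```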
###Code
# SRCNN
!python demo/restoration_demo.py ./configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth ./demo_files/lq_images/bird.png ./outputs/bird_SRCNN.png
# ESRGAN
!python demo/restoration_demo.py ./configs/restorers/esrgan/esrgan_x4c64b23g32_g1_400k_div2k.py https://download.openmmlab.com/mmediting/restorers/esrgan/esrgan_x4c64b23g32_1x16_400k_div2k_20200508-f8ccaf3b.pth ./demo_files/lq_images/bird.png ./outputs/bird_ESRGAN.png
# Check whether the images have been saved
!ls ./outputs
###Output
Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth" to /root/.cache/torch/hub/checkpoints/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth
100% 83.9k/83.9k [00:00<00:00, 1.59MB/s]
2021-07-01 12:00:10,779 - mmedit - INFO - Use load_from_torchvision loader
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /root/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
100% 548M/548M [00:07<00:00, 76.0MB/s]
Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/esrgan/esrgan_x4c64b23g32_1x16_400k_div2k_20200508-f8ccaf3b.pth" to /root/.cache/torch/hub/checkpoints/esrgan_x4c64b23g32_1x16_400k_div2k_20200508-f8ccaf3b.pth
100% 196M/196M [00:26<00:00, 7.61MB/s]
bird_ESRGAN.png bird_SRCNN.png
###Markdown
Inference with a pre-trained video restorer

MMEditing also supports video super-resolution methods, and the procedure is similar. You can use `restoration_video_demo.py` with the following arguments:

1. `CONFIG_FILE`: the config file corresponding to the restorer you want to use. It specifies the model.
2. `CHECKPOINT_FILE`: the path to the pre-trained model weights.
3. `INPUT_DIR`: the directory containing the video frames.
4. `OUTPUT_DIR`: where you want to store the output frames.
5. `WINDOW_SIZE`: the window size when using a sliding-window method (optional).
6. `GPU_ID`: which GPU you want to use (optional).

```
python demo/restoration_video_demo.py ${CONFIG_FILE} ${CHECKPOINT_FILE} ${INPUT_DIR} ${OUTPUT_DIR} [--window_size=$WINDOW_SIZE] [--device ${GPU_ID}]
```

**Note:** There are two different frameworks for video super-resolution: the ***sliding-window*** framework and the ***recurrent*** framework. When using a sliding-window method such as EDVR, you need to specify `window_size`. This value depends on the model you use.

---

We will now use `EDVR` and `BasicVSR` as examples.
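After the cells below have written the restored frames to `./outputs/`, a quick numeric check like the following sketch can compare one restored frame against its ground truth. The `psnr`/`ssim` import path is an assumption based on `mmedit.core.evaluation`:

```
# Compare one restored frame with its ground-truth counterpart (added sketch).
import mmcv
from mmedit.core.evaluation import psnr, ssim  # assumed import path

restored = mmcv.imread('./outputs/calendar_BasicVSR/00000000.png')
gt = mmcv.imread('./demo_files/gt_sequences/calendar/00000000.png')

print('PSNR:', psnr(restored, gt, crop_border=0))
print('SSIM:', ssim(restored, gt, crop_border=0))
```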
###Code
# EDVR (sliding-window framework)
!python demo/restoration_video_demo.py ./configs/restorers/edvr/edvrm_wotsa_x4_g8_600k_reds.py https://download.openmmlab.com/mmediting/restorers/edvr/edvrm_wotsa_x4_8x4_600k_reds_20200522-0570e567.pth demo_files/lq_sequences/calendar/ ./outputs/calendar_EDVR --window_size=5
# BasicVSR (recurrent framework)
!python demo/restoration_video_demo.py ./configs/restorers/basicvsr/basicvsr_reds4.py https://download.openmmlab.com/mmediting/restorers/basicvsr/basicvsr_reds4_20120409-0e599677.pth demo_files/lq_sequences/calendar/ ./outputs/calendar_BasicVSR
# Check whether the video frames have been saved
!ls ./outputs/calendar_BasicVSR
###Output
Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/edvr/edvrm_wotsa_x4_8x4_600k_reds_20200522-0570e567.pth" to /root/.cache/torch/hub/checkpoints/edvrm_wotsa_x4_8x4_600k_reds_20200522-0570e567.pth
100% 11.5M/11.5M [00:01<00:00, 8.55MB/s]
2021-07-01 12:01:09,689 - mmedit - INFO - Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/basicvsr/spynet_20210409-c6c1bd09.pth" to /root/.cache/torch/hub/checkpoints/spynet_20210409-c6c1bd09.pth
100% 5.50M/5.50M [00:00<00:00, 8.88MB/s]
Use load_from_http loader
Downloading: "https://download.openmmlab.com/mmediting/restorers/basicvsr/basicvsr_reds4_20120409-0e599677.pth" to /root/.cache/torch/hub/checkpoints/basicvsr_reds4_20120409-0e599677.pth
100% 24.1M/24.1M [00:02<00:00, 8.97MB/s]
The model and loaded state dict do not match exactly
missing keys in source state_dict: step_counter
00000000.png 00000003.png 00000006.png 00000009.png
00000001.png 00000004.png 00000007.png 00000010.png
00000002.png 00000005.png 00000008.png
###Markdown
Testing on a predefined dataset with config files

The demos above provide an easy way to run inference on a single image or video sequence. If you want to run inference on a set of images or sequences, you can use the config files located in `./configs`. The existing config files allow you to run inference on common datasets, such as `Set5` for image super-resolution and `REDS4` for video super-resolution. You can use the following commands:

1. `CONFIG_FILE`: the config file corresponding to the restorer and dataset you want to use.
2. `CHECKPOINT_FILE`: the path to the pre-trained model weights.
3. `GPU_NUM`: the number of GPUs used for testing.
4. `RESULT_FILE`: the path of the output pickle file with the results. (optional)
5. `IMAGE_SAVE_PATH`: where you want to store the output images. (optional)

```
# single-GPU testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--save-path ${IMAGE_SAVE_PATH}]

# multi-GPU testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--save-path ${IMAGE_SAVE_PATH}]
```

What you need to do is modify `lq_folder` and `gt_folder` in the config file:

```
test=dict(
    type=val_dataset_type,
    lq_folder='data/val_set5/Set5_bicLRx4',
    gt_folder='data/val_set5/Set5',
    pipeline=test_pipeline,
    scale=scale,
    filename_tmpl='{}'))
```

**Note**: Some dataset types (e.g. `SRREDSDataset`) require an annotation file specifying the details of the dataset. Please refer to the corresponding files in `./mmedit/datasets/` for more details.

---

Below is the command for SRCNN. For other models, you can simply change the config file and the path of the pre-trained model.
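If you would rather not edit the config file by hand, the two folder fields can also be overridden programmatically with `mmcv.Config` and dumped to a new file, as in this sketch (the dumped file name is hypothetical):

```
# Point an existing test config at the local Set5 folders (added sketch).
from mmcv import Config

cfg = Config.fromfile('./configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py')
cfg.data.test.lq_folder = 'data/val_set5/Set5_bicLRx4'
cfg.data.test.gt_folder = 'data/val_set5/Set5'
cfg.dump('./demo_files/srcnn_set5_local.py')  # hypothetical file name
```

The dumped file can then be passed to `tools/test.py` or `tools/dist_test.sh` exactly like the shipped configs.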
###Code
# Single GPU
!python tools/test.py ./configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth --save-path ./outputs/
# Multiple GPUs
!./tools/dist_test.sh ./configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth 1 --save-path ./outputs/
###Output
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
return obj_cls(**args)
File "/content/mmediting/mmedit/datasets/sr_folder_dataset.py", line 62, in __init__
self.data_infos = self.load_annotations()
File "/content/mmediting/mmedit/datasets/sr_folder_dataset.py", line 73, in load_annotations
lq_paths = self.scan_folder(self.lq_folder)
File "/content/mmediting/mmedit/datasets/base_sr_dataset.py", line 39, in scan_folder
images = list(scandir(path, suffix=IMG_EXTENSIONS, recursive=True))
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/path.py", line 63, in _scandir
for entry in os.scandir(dir_path):
FileNotFoundError: [Errno 2] No such file or directory: 'data/val_set5/Set5_bicLRx4'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tools/test.py", line 136, in <module>
main()
File "tools/test.py", line 73, in main
dataset = build_dataset(cfg.data.test)
File "/content/mmediting/mmedit/datasets/builder.py", line 80, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 54, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: SRFolderDataset: [Errno 2] No such file or directory: 'data/val_set5/Set5_bicLRx4'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
return obj_cls(**args)
File "/content/mmediting/mmedit/datasets/sr_folder_dataset.py", line 62, in __init__
self.data_infos = self.load_annotations()
File "/content/mmediting/mmedit/datasets/sr_folder_dataset.py", line 73, in load_annotations
lq_paths = self.scan_folder(self.lq_folder)
File "/content/mmediting/mmedit/datasets/base_sr_dataset.py", line 39, in scan_folder
images = list(scandir(path, suffix=IMG_EXTENSIONS, recursive=True))
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/path.py", line 63, in _scandir
for entry in os.scandir(dir_path):
FileNotFoundError: [Errno 2] No such file or directory: 'data/val_set5/Set5_bicLRx4'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./tools/test.py", line 136, in <module>
main()
File "./tools/test.py", line 73, in main
dataset = build_dataset(cfg.data.test)
File "/content/mmediting/mmedit/datasets/builder.py", line 80, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 54, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: SRFolderDataset: [Errno 2] No such file or directory: 'data/val_set5/Set5_bicLRx4'
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-u', './tools/test.py', '--local_rank=0', './configs/restorers/srcnn/srcnn_x4k915_g1_1000k_div2k.py', 'https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth', '--launcher', 'pytorch', '--save-path', './outputs/']' returned non-zero exit status 1.
###Markdown
Testing on custom datasets

When you want to test on a custom dataset, besides the dataset paths you also need to modify `test_dataset_type`:

- For image super-resolution, use `SRFolderDataset`.
- For sliding-window frameworks in video super-resolution (e.g. EDVR, TDAN), use `SRFolderVideoDataset`.
- For recurrent frameworks in video super-resolution (e.g. BasicVSR, IconVSR), use `SRFolderMultipleGTDataset`.

These dataset types assume that all images/sequences in the specified directory are used for testing. The folder structure should be

```
| lq_root
    | sequence_1
        | 000.png
        | 001.png
        | ...
    | sequence_2
        | 000.png
        | ...
    | ...
| gt_root
    | sequence_1
        | 000.png
        | 001.png
        | ...
    | sequence_2
        | 000.png
        | ...
    | ...
```

We will use **SRCNN**, **EDVR**, and **BasicVSR** as examples. Note the settings of `test_dataset_type` and `data['test']`; a sketch of such a config fragment is given below, right after the **SRCNN** heading.

**SRCNN**
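As referenced above, here is a rough sketch of what the test-dataset section of such a config could look like for the SRCNN (image) case. It is illustrative only: the actual `./demo_files/demo_config_SRCNN.py` may differ, and the folder names are placeholders.

```python
# Sketch (not the shipped demo file): test-set settings for a custom image SR dataset.
test_dataset_type = 'SRFolderDataset'
data = dict(
    test=dict(
        type=test_dataset_type,
        lq_folder='./demo_files/lq_images',  # placeholder: low-quality inputs
        gt_folder='./demo_files/gt_images',  # placeholder: ground-truth images
        pipeline=test_pipeline,              # test_pipeline is defined elsewhere in the config
        scale=4,
        filename_tmpl='{}'))
```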
###Code
# Single GPU (Colab has only one GPU)
!python tools/test.py ./demo_files/demo_config_SRCNN.py https://download.openmmlab.com/mmediting/restorers/srcnn/srcnn_x4k915_1x16_1000k_div2k_20200608-4186f232.pth --save-path ./outputs/testset_SRCNN
# Check the output folder
!ls ./outputs/testset_SRCNN
###Output
Use load_from_http loader
[>>] 5/5, 8.6 task/s, elapsed: 1s, ETA: 0s
Eval-PSNR: 28.433974369836108
Eval-SSIM: 0.8099053586583066
baby.png bird.png butterfly.png head.png woman.png
###Markdown
**EDVR**
###Code
# Single GPU (Colab has only one GPU)
!python tools/test.py ./demo_files/demo_config_EDVR.py https://download.openmmlab.com/mmediting/restorers/edvr/edvrm_wotsa_x4_8x4_600k_reds_20200522-0570e567.pth --save-path ./outputs/testset_EDVR
# Check the output folder
!ls ./outputs/testset_EDVR
!ls ./outputs/testset_EDVR/city
###Output
Use load_from_http loader
[>>] 22/22, 2.0 task/s, elapsed: 11s, ETA: 0s
Eval-PSNR: 23.89569862011228
Eval-SSIM: 0.7667098470108678
calendar city
00000000.png 00000003.png 00000006.png 00000009.png
00000001.png 00000004.png 00000007.png 00000010.png
00000002.png 00000005.png 00000008.png
###Markdown
**BasicVSR**
###Code
# Single GPU (Colab has only one GPU)
!python tools/test.py ./demo_files/demo_config_BasicVSR.py https://download.openmmlab.com/mmediting/restorers/basicvsr/basicvsr_reds4_20120409-0e599677.pth --save-path ./outputs/testset_BasicVSR
# Check the output folder
!ls ./outputs/testset_BasicVSR
!ls ./outputs/testset_BasicVSR/calendar
###Output
2021-07-01 12:02:07,780 - mmedit - INFO - Use load_from_http loader
Use load_from_http loader
The model and loaded state dict do not match exactly
missing keys in source state_dict: step_counter
[>>] 2/2, 0.2 task/s, elapsed: 11s, ETA: 0s
Eval-PSNR: 24.195768601433734
Eval-SSIM: 0.7828541339512978
calendar city
00000000.png 00000003.png 00000006.png 00000009.png
00000001.png 00000004.png 00000007.png 00000010.png
00000002.png 00000005.png 00000008.png
###Markdown
Training a restorer on predefined datasets

MMEditing uses distributed training. The following command can be used for training. If you want to train on the predefined datasets specified in our config files, simply run:

```
./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```

For more details on the optional arguments, please refer to `tools/train.py`.

---

Here is an example using EDVR.
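One practical note before the EDVR example: the runtime section of the config (visible in the dump printed below) contains `load_from` and `resume_from` fields. A minimal sketch of resuming an interrupted run, with a purely hypothetical checkpoint path, could look like this:

```python
# Sketch: resume training by editing the runtime settings of the config
# (the checkpoint path below is a placeholder).
load_from = None
resume_from = './work_dirs/edvrm_wotsa_x4_g8_600k_reds/iter_5000.pth'
```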
###Code
!./tools/dist_train.sh ./configs/restorers/edvr/edvrm_wotsa_x4_g8_600k_reds.py 1
###Output
2021-07-01 12:02:31,961 - mmedit - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.7.10 (default, May 3 2021, 02:48:31) [GCC 7.5.0]
CUDA available: True
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.0_bu.TC445_37.28845127_0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.7.0+cu110
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80
- CuDNN 8.0.4
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.8.0
OpenCV: 4.1.2
MMCV: 1.3.5
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.0
MMEditing: 0.8.0+7f4fa79
------------------------------------------------------------
2021-07-01 12:02:31,961 - mmedit - INFO - Distributed training: True
2021-07-01 12:02:31,961 - mmedit - INFO - mmedit Version: 0.8.0
2021-07-01 12:02:31,961 - mmedit - INFO - Config:
/content/mmediting/configs/restorers/edvr/edvrm_wotsa_x4_g8_600k_reds.py
exp_name = 'edvrm_wotsa_x4_g8_600k_reds'
# model settings
model = dict(
type='EDVR',
generator=dict(
type='EDVRNet',
in_channels=3,
out_channels=3,
mid_channels=64,
num_frames=5,
deform_groups=8,
num_blocks_extraction=5,
num_blocks_reconstruction=10,
center_frame_idx=2,
with_tsa=False),
pixel_loss=dict(type='CharbonnierLoss', loss_weight=1.0, reduction='sum'))
# model training and testing settings
train_cfg = None
test_cfg = dict(metrics=['PSNR'], crop_border=0)
# dataset settings
train_dataset_type = 'SRREDSDataset'
val_dataset_type = 'SRREDSDataset'
train_pipeline = [
dict(type='GenerateFrameIndices', interval_list=[1], frames_per_clip=99),
dict(type='TemporalReverse', keys='lq_path', reverse_ratio=0),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='lq',
flag='unchanged'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='gt',
flag='unchanged'),
dict(type='RescaleToZeroOne', keys=['lq', 'gt']),
dict(
type='Normalize',
keys=['lq', 'gt'],
mean=[0, 0, 0],
std=[1, 1, 1],
to_rgb=True),
dict(type='PairedRandomCrop', gt_patch_size=256),
dict(
type='Flip', keys=['lq', 'gt'], flip_ratio=0.5,
direction='horizontal'),
dict(type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, direction='vertical'),
dict(type='RandomTransposeHW', keys=['lq', 'gt'], transpose_ratio=0.5),
dict(type='Collect', keys=['lq', 'gt'], meta_keys=['lq_path', 'gt_path']),
dict(type='FramesToTensor', keys=['lq', 'gt'])
]
test_pipeline = [
dict(type='GenerateFrameIndiceswithPadding', padding='reflection_circle'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='lq',
flag='unchanged'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='gt',
flag='unchanged'),
dict(type='RescaleToZeroOne', keys=['lq', 'gt']),
dict(
type='Normalize',
keys=['lq', 'gt'],
mean=[0, 0, 0],
std=[1, 1, 1],
to_rgb=True),
dict(
type='Collect',
keys=['lq', 'gt'],
meta_keys=['lq_path', 'gt_path', 'key']),
dict(type='FramesToTensor', keys=['lq', 'gt'])
]
data = dict(
workers_per_gpu=4,
train_dataloader=dict(samples_per_gpu=4, drop_last=True),
val_dataloader=dict(samples_per_gpu=1),
test_dataloader=dict(samples_per_gpu=1),
train=dict(
type='RepeatDataset',
times=1000,
dataset=dict(
type=train_dataset_type,
lq_folder='data/REDS/train_sharp_bicubic/X4',
gt_folder='data/REDS/train_sharp',
ann_file='data/REDS/meta_info_REDS_GT.txt',
num_input_frames=5,
pipeline=train_pipeline,
scale=4,
val_partition='REDS4',
test_mode=False)),
val=dict(
type=val_dataset_type,
lq_folder='data/REDS/train_sharp_bicubic/X4',
gt_folder='data/REDS/train_sharp',
ann_file='data/REDS/meta_info_REDS_GT.txt',
num_input_frames=5,
pipeline=test_pipeline,
scale=4,
val_partition='REDS4',
test_mode=True),
test=dict(
type=val_dataset_type,
lq_folder='data/REDS/train_sharp_bicubic/X4',
gt_folder='data/REDS/train_sharp',
ann_file='data/REDS/meta_info_REDS_GT.txt',
num_input_frames=5,
pipeline=test_pipeline,
scale=4,
val_partition='REDS4',
test_mode=True),
)
# optimizer
optimizers = dict(generator=dict(type='Adam', lr=4e-4, betas=(0.9, 0.999)))
# learning policy
total_iters = 600000
lr_config = dict(
policy='CosineRestart',
by_epoch=False,
periods=[150000, 150000, 150000, 150000],
restart_weights=[1, 0.5, 0.5, 0.5],
min_lr=1e-7)
checkpoint_config = dict(interval=5000, save_optimizer=True, by_epoch=False)
# remove gpu_collect=True in non distributed training
evaluation = dict(interval=50000, save_image=False, gpu_collect=True)
log_config = dict(
interval=100,
hooks=[
dict(type='TextLoggerHook', by_epoch=False),
dict(type='TensorboardLoggerHook'),
# dict(type='PaviLoggerHook', init_kwargs=dict(project='mmedit-sr'))
])
visual_config = None
# runtime settings
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = f'./work_dirs/{exp_name}'
load_from = None
resume_from = None
workflow = [('train', 1)]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 51, in build_from_cfg
return obj_cls(**args)
File "/content/mmediting/mmedit/datasets/sr_reds_dataset.py", line 54, in __init__
self.data_infos = self.load_annotations()
File "/content/mmediting/mmedit/datasets/sr_reds_dataset.py", line 63, in load_annotations
with open(self.ann_file, 'r') as fin:
FileNotFoundError: [Errno 2] No such file or directory: 'data/REDS/meta_info_REDS_GT.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./tools/train.py", line 145, in <module>
main()
File "./tools/train.py", line 111, in main
datasets = [build_dataset(cfg.data.train)]
File "/content/mmediting/mmedit/datasets/builder.py", line 76, in build_dataset
build_dataset(cfg['dataset'], default_args), cfg['times'])
File "/content/mmediting/mmedit/datasets/builder.py", line 80, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 54, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: SRREDSDataset: [Errno 2] No such file or directory: 'data/REDS/meta_info_REDS_GT.txt'
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-u', './tools/train.py', '--local_rank=0', './configs/restorers/edvr/edvrm_wotsa_x4_g8_600k_reds.py', '--launcher', 'pytorch']' returned non-zero exit status 1.
###Markdown
Training a restorer on custom datasets

Similar to the case where you test on your own dataset, you need to modify `train_dataset_type`. The dataset types you need are the same:

- For image super-resolution, use `SRFolderDataset`.
- For sliding-window frameworks in video super-resolution (e.g. EDVR, TDAN), use `SRFolderVideoDataset`.
- For recurrent frameworks in video super-resolution (e.g. BasicVSR, IconVSR), use `SRFolderMultipleGTDataset`.

After modifying the dataset type and data paths, everything is ready. A sketch of such a dataset fragment is shown right after this list.
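As mentioned above, here is a rough sketch of the dataset section for training an image super-resolution model on your own folders. It is illustrative only: the folder names are placeholders, and `train_pipeline`/`test_pipeline` are assumed to be defined elsewhere in the same config, as in the dumps shown earlier.

```python
# Sketch: dataset settings for training on a custom image SR dataset (paths are placeholders).
train_dataset_type = 'SRFolderDataset'
val_dataset_type = 'SRFolderDataset'
data = dict(
    workers_per_gpu=4,
    train_dataloader=dict(samples_per_gpu=4, drop_last=True),
    val_dataloader=dict(samples_per_gpu=1),
    train=dict(
        type='RepeatDataset',
        times=1000,
        dataset=dict(
            type=train_dataset_type,
            lq_folder='data/my_train/LRx4',
            gt_folder='data/my_train/GT',
            pipeline=train_pipeline,
            scale=4,
            filename_tmpl='{}')),
    val=dict(
        type=val_dataset_type,
        lq_folder='data/my_val/LRx4',
        gt_folder='data/my_val/GT',
        pipeline=test_pipeline,
        scale=4,
        filename_tmpl='{}'))
```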
###Code
# SRCNN (image super-resolution)
!./tools/dist_train.sh ./demo_files/demo_config_SRCNN.py 1
# EDVR (video super-resolution, sliding-window)
!./tools/dist_train.sh ./demo_files/demo_config_EDVR.py 1
# BasicVSR (video super-resolution, recurrent)
!./tools/dist_train.sh ./demo_files/demo_config_BasicVSR.py 1
###Output
2021-07-01 12:06:47,253 - mmedit - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.7.10 (default, May 3 2021, 02:48:31) [GCC 7.5.0]
CUDA available: True
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.0_bu.TC445_37.28845127_0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.7.0+cu110
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80
- CuDNN 8.0.4
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.8.0
OpenCV: 4.1.2
MMCV: 1.3.5
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.0
MMEditing: 0.8.0+7f4fa79
------------------------------------------------------------
2021-07-01 12:06:47,253 - mmedit - INFO - Distributed training: True
2021-07-01 12:06:47,254 - mmedit - INFO - mmedit Version: 0.8.0
2021-07-01 12:06:47,254 - mmedit - INFO - Config:
/content/mmediting/demo_files/demo_config_BasicVSR.py
exp_name = 'basicvsr_demo'
# model settings
model = dict(
type='BasicVSR',
generator=dict(
type='BasicVSRNet',
mid_channels=64,
num_blocks=30,
spynet_pretrained='https://download.openmmlab.com/mmediting/restorers/'
'basicvsr/spynet_20210409-c6c1bd09.pth'),
pixel_loss=dict(type='CharbonnierLoss', loss_weight=1.0, reduction='mean'))
# model training and testing settings
train_cfg = dict(fix_iter=5000)
test_cfg = dict(metrics=['PSNR', 'SSIM'], crop_border=0)
# dataset settings
train_dataset_type = 'SRFolderMultipleGTDataset'
val_dataset_type = 'SRFolderMultipleGTDataset'
train_pipeline = [
dict(type='GenerateSegmentIndices', interval_list=[1]),
dict(type='TemporalReverse', keys='lq_path', reverse_ratio=0),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='lq',
channel_order='rgb'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='gt',
channel_order='rgb'),
dict(type='RescaleToZeroOne', keys=['lq', 'gt']),
dict(type='PairedRandomCrop', gt_patch_size=256),
dict(
type='Flip', keys=['lq', 'gt'], flip_ratio=0.5,
direction='horizontal'),
dict(type='Flip', keys=['lq', 'gt'], flip_ratio=0.5, direction='vertical'),
dict(type='RandomTransposeHW', keys=['lq', 'gt'], transpose_ratio=0.5),
dict(type='FramesToTensor', keys=['lq', 'gt']),
dict(type='Collect', keys=['lq', 'gt'], meta_keys=['lq_path', 'gt_path'])
]
test_pipeline = [
dict(type='GenerateSegmentIndices', interval_list=[1]),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='lq',
channel_order='rgb'),
dict(
type='LoadImageFromFileList',
io_backend='disk',
key='gt',
channel_order='rgb'),
dict(type='RescaleToZeroOne', keys=['lq', 'gt']),
dict(type='FramesToTensor', keys=['lq', 'gt']),
dict(
type='Collect',
keys=['lq', 'gt'],
meta_keys=['lq_path', 'gt_path', 'key'])
]
data = dict(
workers_per_gpu=6,
train_dataloader=dict(samples_per_gpu=4, drop_last=True), # 2 gpus
val_dataloader=dict(samples_per_gpu=1),
test_dataloader=dict(samples_per_gpu=1, workers_per_gpu=1),
# train
train=dict(
type='RepeatDataset',
times=1000,
dataset=dict(
type=train_dataset_type,
lq_folder='./demo_files/lq_sequences',
gt_folder='./demo_files/gt_sequences',
num_input_frames=5,
pipeline=train_pipeline,
scale=4,
test_mode=False)),
# val
val=dict(
type=val_dataset_type,
lq_folder='./demo_files/lq_sequences',
gt_folder='./demo_files/gt_sequences',
pipeline=test_pipeline,
scale=4,
test_mode=True),
# test
test=dict(
type=val_dataset_type,
lq_folder='./demo_files/lq_sequences',
gt_folder='./demo_files/gt_sequences',
pipeline=test_pipeline,
scale=4,
test_mode=True),
)
# optimizer
optimizers = dict(
generator=dict(
type='Adam',
lr=2e-4,
betas=(0.9, 0.99),
paramwise_cfg=dict(custom_keys={'spynet': dict(lr_mult=0.125)})))
# learning policy
total_iters = 100
lr_config = dict(
policy='CosineRestart',
by_epoch=False,
periods=[300000],
restart_weights=[1],
min_lr=1e-7)
checkpoint_config = dict(interval=5000, save_optimizer=True, by_epoch=False)
# remove gpu_collect=True in non distributed training
evaluation = dict(interval=50, save_image=False, gpu_collect=True)
log_config = dict(
interval=1,
hooks=[
dict(type='TextLoggerHook', by_epoch=False),
# dict(type='TensorboardLoggerHook'),
])
visual_config = None
# runtime settings
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = f'./work_dirs/{exp_name}'
load_from = None
resume_from = None
workflow = [('train', 1)]
find_unused_parameters = True
2021-07-01 12:06:47,291 - mmedit - INFO - Use load_from_http loader
2021-07-01 12:06:47,569 - mmedit - INFO - Start running, host: root@fce870c778f5, work_dir: /content/mmediting/work_dirs/basicvsr_demo
2021-07-01 12:06:47,569 - mmedit - INFO - workflow: [('train', 1)], max: 100 iters
2021-07-01 12:07:14,210 - mmedit - INFO - Iter [1/100] lr_generator: 2.500e-05, eta: 0:42:52, time: 25.981, data_time: 24.045, memory: 3464, loss_pix: 0.0634, loss: 0.0634
2021-07-01 12:07:15,171 - mmedit - INFO - Iter [2/100] lr_generator: 2.500e-05, eta: 0:22:00, time: 0.961, data_time: 0.011, memory: 3518, loss_pix: 0.0556, loss: 0.0556
2021-07-01 12:07:16,052 - mmedit - INFO - Iter [3/100] lr_generator: 2.500e-05, eta: 0:14:59, time: 0.881, data_time: 0.003, memory: 3518, loss_pix: 0.0476, loss: 0.0476
2021-07-01 12:07:16,940 - mmedit - INFO - Iter [4/100] lr_generator: 2.500e-05, eta: 0:11:29, time: 0.888, data_time: 0.003, memory: 3518, loss_pix: 0.0673, loss: 0.0673
2021-07-01 12:07:17,829 - mmedit - INFO - Iter [5/100] lr_generator: 2.500e-05, eta: 0:09:22, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0796, loss: 0.0796
2021-07-01 12:07:18,712 - mmedit - INFO - Iter [6/100] lr_generator: 2.500e-05, eta: 0:07:57, time: 0.884, data_time: 0.004, memory: 3518, loss_pix: 0.0680, loss: 0.0680
2021-07-01 12:07:19,594 - mmedit - INFO - Iter [7/100] lr_generator: 2.500e-05, eta: 0:06:56, time: 0.882, data_time: 0.003, memory: 3518, loss_pix: 0.0607, loss: 0.0607
2021-07-01 12:07:20,481 - mmedit - INFO - Iter [8/100] lr_generator: 2.500e-05, eta: 0:06:10, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0598, loss: 0.0598
2021-07-01 12:07:21,361 - mmedit - INFO - Iter [9/100] lr_generator: 2.500e-05, eta: 0:05:35, time: 0.880, data_time: 0.003, memory: 3518, loss_pix: 0.0664, loss: 0.0664
2021-07-01 12:07:22,274 - mmedit - INFO - Iter [10/100] lr_generator: 2.500e-05, eta: 0:05:06, time: 0.913, data_time: 0.003, memory: 3518, loss_pix: 0.0687, loss: 0.0687
2021-07-01 12:07:23,161 - mmedit - INFO - Iter [11/100] lr_generator: 2.500e-05, eta: 0:04:42, time: 0.887, data_time: 0.005, memory: 3518, loss_pix: 0.0771, loss: 0.0771
2021-07-01 12:07:24,058 - mmedit - INFO - Iter [12/100] lr_generator: 2.500e-05, eta: 0:04:22, time: 0.897, data_time: 0.003, memory: 3518, loss_pix: 0.0521, loss: 0.0521
2021-07-01 12:07:24,944 - mmedit - INFO - Iter [13/100] lr_generator: 2.500e-05, eta: 0:04:05, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0675, loss: 0.0675
2021-07-01 12:07:25,835 - mmedit - INFO - Iter [14/100] lr_generator: 2.500e-05, eta: 0:03:51, time: 0.891, data_time: 0.003, memory: 3518, loss_pix: 0.0515, loss: 0.0515
2021-07-01 12:07:26,723 - mmedit - INFO - Iter [15/100] lr_generator: 2.500e-05, eta: 0:03:38, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0674, loss: 0.0674
2021-07-01 12:07:27,609 - mmedit - INFO - Iter [16/100] lr_generator: 2.500e-05, eta: 0:03:26, time: 0.886, data_time: 0.003, memory: 3518, loss_pix: 0.0579, loss: 0.0579
2021-07-01 12:07:28,498 - mmedit - INFO - Iter [17/100] lr_generator: 2.500e-05, eta: 0:03:16, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0670, loss: 0.0670
2021-07-01 12:07:29,388 - mmedit - INFO - Iter [18/100] lr_generator: 2.500e-05, eta: 0:03:07, time: 0.890, data_time: 0.003, memory: 3518, loss_pix: 0.0663, loss: 0.0663
2021-07-01 12:07:30,284 - mmedit - INFO - Iter [19/100] lr_generator: 2.500e-05, eta: 0:02:59, time: 0.896, data_time: 0.003, memory: 3518, loss_pix: 0.0633, loss: 0.0633
2021-07-01 12:07:31,176 - mmedit - INFO - Iter [20/100] lr_generator: 2.500e-05, eta: 0:02:51, time: 0.891, data_time: 0.003, memory: 3518, loss_pix: 0.0482, loss: 0.0482
2021-07-01 12:07:32,073 - mmedit - INFO - Iter [21/100] lr_generator: 2.500e-05, eta: 0:02:44, time: 0.898, data_time: 0.003, memory: 3518, loss_pix: 0.0743, loss: 0.0743
2021-07-01 12:07:32,966 - mmedit - INFO - Iter [22/100] lr_generator: 2.500e-05, eta: 0:02:38, time: 0.893, data_time: 0.003, memory: 3518, loss_pix: 0.0690, loss: 0.0690
2021-07-01 12:07:33,858 - mmedit - INFO - Iter [23/100] lr_generator: 2.500e-05, eta: 0:02:32, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0437, loss: 0.0437
2021-07-01 12:07:34,750 - mmedit - INFO - Iter [24/100] lr_generator: 2.500e-05, eta: 0:02:27, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0579, loss: 0.0579
2021-07-01 12:07:35,642 - mmedit - INFO - Iter [25/100] lr_generator: 2.500e-05, eta: 0:02:22, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0781, loss: 0.0781
2021-07-01 12:07:36,533 - mmedit - INFO - Iter [26/100] lr_generator: 2.500e-05, eta: 0:02:17, time: 0.891, data_time: 0.003, memory: 3518, loss_pix: 0.0614, loss: 0.0614
2021-07-01 12:07:37,427 - mmedit - INFO - Iter [27/100] lr_generator: 2.500e-05, eta: 0:02:13, time: 0.894, data_time: 0.003, memory: 3518, loss_pix: 0.0688, loss: 0.0688
2021-07-01 12:07:38,321 - mmedit - INFO - Iter [28/100] lr_generator: 2.500e-05, eta: 0:02:08, time: 0.894, data_time: 0.003, memory: 3518, loss_pix: 0.0567, loss: 0.0567
2021-07-01 12:07:39,212 - mmedit - INFO - Iter [29/100] lr_generator: 2.500e-05, eta: 0:02:04, time: 0.891, data_time: 0.003, memory: 3518, loss_pix: 0.0846, loss: 0.0846
2021-07-01 12:07:40,104 - mmedit - INFO - Iter [30/100] lr_generator: 2.500e-05, eta: 0:02:01, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0658, loss: 0.0658
2021-07-01 12:07:40,993 - mmedit - INFO - Iter [31/100] lr_generator: 2.500e-05, eta: 0:01:57, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0502, loss: 0.0502
2021-07-01 12:07:41,885 - mmedit - INFO - Iter [32/100] lr_generator: 2.500e-05, eta: 0:01:54, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0622, loss: 0.0622
2021-07-01 12:07:42,787 - mmedit - INFO - Iter [33/100] lr_generator: 2.500e-05, eta: 0:01:50, time: 0.902, data_time: 0.003, memory: 3518, loss_pix: 0.0657, loss: 0.0657
2021-07-01 12:07:43,674 - mmedit - INFO - Iter [34/100] lr_generator: 2.500e-05, eta: 0:01:47, time: 0.888, data_time: 0.003, memory: 3518, loss_pix: 0.0643, loss: 0.0643
2021-07-01 12:07:44,567 - mmedit - INFO - Iter [35/100] lr_generator: 2.500e-05, eta: 0:01:44, time: 0.893, data_time: 0.003, memory: 3518, loss_pix: 0.0898, loss: 0.0898
2021-07-01 12:07:45,458 - mmedit - INFO - Iter [36/100] lr_generator: 2.500e-05, eta: 0:01:41, time: 0.890, data_time: 0.003, memory: 3518, loss_pix: 0.0865, loss: 0.0865
2021-07-01 12:07:46,341 - mmedit - INFO - Iter [37/100] lr_generator: 2.500e-05, eta: 0:01:38, time: 0.883, data_time: 0.003, memory: 3518, loss_pix: 0.0511, loss: 0.0511
2021-07-01 12:07:47,225 - mmedit - INFO - Iter [38/100] lr_generator: 2.500e-05, eta: 0:01:36, time: 0.884, data_time: 0.003, memory: 3518, loss_pix: 0.0673, loss: 0.0673
2021-07-01 12:07:48,109 - mmedit - INFO - Iter [39/100] lr_generator: 2.500e-05, eta: 0:01:33, time: 0.884, data_time: 0.003, memory: 3518, loss_pix: 0.0653, loss: 0.0653
2021-07-01 12:07:48,990 - mmedit - INFO - Iter [40/100] lr_generator: 2.500e-05, eta: 0:01:31, time: 0.881, data_time: 0.003, memory: 3518, loss_pix: 0.0828, loss: 0.0828
2021-07-01 12:07:49,874 - mmedit - INFO - Iter [41/100] lr_generator: 2.500e-05, eta: 0:01:28, time: 0.885, data_time: 0.003, memory: 3518, loss_pix: 0.0788, loss: 0.0788
2021-07-01 12:07:50,755 - mmedit - INFO - Iter [42/100] lr_generator: 2.500e-05, eta: 0:01:26, time: 0.881, data_time: 0.005, memory: 3518, loss_pix: 0.0605, loss: 0.0605
2021-07-01 12:07:51,638 - mmedit - INFO - Iter [43/100] lr_generator: 2.500e-05, eta: 0:01:24, time: 0.882, data_time: 0.003, memory: 3518, loss_pix: 0.0539, loss: 0.0539
2021-07-01 12:07:52,518 - mmedit - INFO - Iter [44/100] lr_generator: 2.500e-05, eta: 0:01:21, time: 0.881, data_time: 0.003, memory: 3518, loss_pix: 0.0503, loss: 0.0503
2021-07-01 12:07:53,395 - mmedit - INFO - Iter [45/100] lr_generator: 2.500e-05, eta: 0:01:19, time: 0.877, data_time: 0.003, memory: 3518, loss_pix: 0.0475, loss: 0.0475
2021-07-01 12:07:54,273 - mmedit - INFO - Iter [46/100] lr_generator: 2.500e-05, eta: 0:01:17, time: 0.878, data_time: 0.003, memory: 3518, loss_pix: 0.0555, loss: 0.0555
2021-07-01 12:07:55,159 - mmedit - INFO - Iter [47/100] lr_generator: 2.500e-05, eta: 0:01:15, time: 0.886, data_time: 0.007, memory: 3518, loss_pix: 0.0806, loss: 0.0806
2021-07-01 12:07:56,040 - mmedit - INFO - Iter [48/100] lr_generator: 2.500e-05, eta: 0:01:13, time: 0.881, data_time: 0.003, memory: 3518, loss_pix: 0.0576, loss: 0.0576
2021-07-01 12:07:56,919 - mmedit - INFO - Iter [49/100] lr_generator: 2.500e-05, eta: 0:01:11, time: 0.879, data_time: 0.003, memory: 3518, loss_pix: 0.0871, loss: 0.0871
[>>] 2/2, 0.1 task/s, elapsed: 34s, ETA: 0s
2021-07-01 12:08:32,618 - mmedit - INFO - Iter(val) [50] PSNR: 20.9899, SSIM: 0.5295
2021-07-01 12:08:33,591 - mmedit - INFO - Iter [51/100] lr_generator: 2.500e-05, eta: 0:01:40, time: 35.792, data_time: 34.822, memory: 3518, loss_pix: 0.0773, loss: 0.0773
2021-07-01 12:08:34,462 - mmedit - INFO - Iter [52/100] lr_generator: 2.500e-05, eta: 0:01:37, time: 0.871, data_time: 0.003, memory: 3518, loss_pix: 0.0687, loss: 0.0687
2021-07-01 12:08:35,326 - mmedit - INFO - Iter [53/100] lr_generator: 2.500e-05, eta: 0:01:34, time: 0.864, data_time: 0.003, memory: 3518, loss_pix: 0.0665, loss: 0.0665
2021-07-01 12:08:36,192 - mmedit - INFO - Iter [54/100] lr_generator: 2.500e-05, eta: 0:01:31, time: 0.867, data_time: 0.003, memory: 3518, loss_pix: 0.0575, loss: 0.0575
2021-07-01 12:08:37,069 - mmedit - INFO - Iter [55/100] lr_generator: 2.500e-05, eta: 0:01:28, time: 0.876, data_time: 0.003, memory: 3518, loss_pix: 0.0842, loss: 0.0842
2021-07-01 12:08:37,942 - mmedit - INFO - Iter [56/100] lr_generator: 2.500e-05, eta: 0:01:25, time: 0.873, data_time: 0.003, memory: 3518, loss_pix: 0.0836, loss: 0.0836
2021-07-01 12:08:38,808 - mmedit - INFO - Iter [57/100] lr_generator: 2.500e-05, eta: 0:01:22, time: 0.867, data_time: 0.003, memory: 3518, loss_pix: 0.0580, loss: 0.0580
2021-07-01 12:08:39,679 - mmedit - INFO - Iter [58/100] lr_generator: 2.500e-05, eta: 0:01:20, time: 0.870, data_time: 0.003, memory: 3518, loss_pix: 0.0449, loss: 0.0449
2021-07-01 12:08:40,557 - mmedit - INFO - Iter [59/100] lr_generator: 2.500e-05, eta: 0:01:17, time: 0.878, data_time: 0.003, memory: 3518, loss_pix: 0.0702, loss: 0.0702
2021-07-01 12:08:41,430 - mmedit - INFO - Iter [60/100] lr_generator: 2.500e-05, eta: 0:01:14, time: 0.874, data_time: 0.003, memory: 3518, loss_pix: 0.0627, loss: 0.0627
2021-07-01 12:08:42,308 - mmedit - INFO - Iter [61/100] lr_generator: 2.500e-05, eta: 0:01:12, time: 0.878, data_time: 0.003, memory: 3518, loss_pix: 0.0716, loss: 0.0716
2021-07-01 12:08:43,182 - mmedit - INFO - Iter [62/100] lr_generator: 2.500e-05, eta: 0:01:09, time: 0.874, data_time: 0.003, memory: 3518, loss_pix: 0.0489, loss: 0.0489
2021-07-01 12:08:44,056 - mmedit - INFO - Iter [63/100] lr_generator: 2.500e-05, eta: 0:01:07, time: 0.874, data_time: 0.003, memory: 3518, loss_pix: 0.0566, loss: 0.0566
2021-07-01 12:08:44,932 - mmedit - INFO - Iter [64/100] lr_generator: 2.500e-05, eta: 0:01:05, time: 0.875, data_time: 0.003, memory: 3518, loss_pix: 0.0597, loss: 0.0597
2021-07-01 12:08:45,819 - mmedit - INFO - Iter [65/100] lr_generator: 2.500e-05, eta: 0:01:02, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0640, loss: 0.0640
2021-07-01 12:08:46,698 - mmedit - INFO - Iter [66/100] lr_generator: 2.500e-05, eta: 0:01:00, time: 0.879, data_time: 0.003, memory: 3518, loss_pix: 0.0665, loss: 0.0665
2021-07-01 12:08:47,580 - mmedit - INFO - Iter [67/100] lr_generator: 2.500e-05, eta: 0:00:58, time: 0.882, data_time: 0.003, memory: 3518, loss_pix: 0.0675, loss: 0.0675
2021-07-01 12:08:48,464 - mmedit - INFO - Iter [68/100] lr_generator: 2.500e-05, eta: 0:00:56, time: 0.883, data_time: 0.003, memory: 3518, loss_pix: 0.0641, loss: 0.0641
2021-07-01 12:08:49,347 - mmedit - INFO - Iter [69/100] lr_generator: 2.500e-05, eta: 0:00:54, time: 0.883, data_time: 0.003, memory: 3518, loss_pix: 0.0603, loss: 0.0603
2021-07-01 12:08:50,229 - mmedit - INFO - Iter [70/100] lr_generator: 2.500e-05, eta: 0:00:51, time: 0.882, data_time: 0.003, memory: 3518, loss_pix: 0.0478, loss: 0.0478
2021-07-01 12:08:51,113 - mmedit - INFO - Iter [71/100] lr_generator: 2.500e-05, eta: 0:00:49, time: 0.884, data_time: 0.003, memory: 3518, loss_pix: 0.0691, loss: 0.0691
2021-07-01 12:08:52,000 - mmedit - INFO - Iter [72/100] lr_generator: 2.500e-05, eta: 0:00:47, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0861, loss: 0.0861
2021-07-01 12:08:52,890 - mmedit - INFO - Iter [73/100] lr_generator: 2.500e-05, eta: 0:00:45, time: 0.890, data_time: 0.003, memory: 3518, loss_pix: 0.0688, loss: 0.0688
2021-07-01 12:08:53,792 - mmedit - INFO - Iter [74/100] lr_generator: 2.500e-05, eta: 0:00:43, time: 0.903, data_time: 0.003, memory: 3518, loss_pix: 0.0787, loss: 0.0787
2021-07-01 12:08:54,688 - mmedit - INFO - Iter [75/100] lr_generator: 2.500e-05, eta: 0:00:41, time: 0.896, data_time: 0.003, memory: 3518, loss_pix: 0.0744, loss: 0.0744
2021-07-01 12:08:55,582 - mmedit - INFO - Iter [76/100] lr_generator: 2.500e-05, eta: 0:00:39, time: 0.895, data_time: 0.003, memory: 3518, loss_pix: 0.0792, loss: 0.0792
2021-07-01 12:08:56,476 - mmedit - INFO - Iter [77/100] lr_generator: 2.500e-05, eta: 0:00:38, time: 0.894, data_time: 0.003, memory: 3518, loss_pix: 0.0645, loss: 0.0645
2021-07-01 12:08:57,368 - mmedit - INFO - Iter [78/100] lr_generator: 2.500e-05, eta: 0:00:36, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0475, loss: 0.0475
2021-07-01 12:08:58,261 - mmedit - INFO - Iter [79/100] lr_generator: 2.500e-05, eta: 0:00:34, time: 0.893, data_time: 0.003, memory: 3518, loss_pix: 0.0627, loss: 0.0627
2021-07-01 12:08:59,159 - mmedit - INFO - Iter [80/100] lr_generator: 2.500e-05, eta: 0:00:32, time: 0.897, data_time: 0.003, memory: 3518, loss_pix: 0.0626, loss: 0.0626
2021-07-01 12:09:00,055 - mmedit - INFO - Iter [81/100] lr_generator: 2.500e-05, eta: 0:00:30, time: 0.896, data_time: 0.004, memory: 3518, loss_pix: 0.0681, loss: 0.0681
2021-07-01 12:09:00,954 - mmedit - INFO - Iter [82/100] lr_generator: 2.500e-05, eta: 0:00:28, time: 0.900, data_time: 0.003, memory: 3518, loss_pix: 0.0671, loss: 0.0671
2021-07-01 12:09:01,860 - mmedit - INFO - Iter [83/100] lr_generator: 2.500e-05, eta: 0:00:27, time: 0.906, data_time: 0.003, memory: 3518, loss_pix: 0.0825, loss: 0.0825
2021-07-01 12:09:02,760 - mmedit - INFO - Iter [84/100] lr_generator: 2.500e-05, eta: 0:00:25, time: 0.900, data_time: 0.003, memory: 3518, loss_pix: 0.0594, loss: 0.0594
2021-07-01 12:09:03,658 - mmedit - INFO - Iter [85/100] lr_generator: 2.500e-05, eta: 0:00:23, time: 0.898, data_time: 0.003, memory: 3518, loss_pix: 0.0446, loss: 0.0446
2021-07-01 12:09:04,555 - mmedit - INFO - Iter [86/100] lr_generator: 2.500e-05, eta: 0:00:22, time: 0.897, data_time: 0.003, memory: 3518, loss_pix: 0.0491, loss: 0.0491
2021-07-01 12:09:05,452 - mmedit - INFO - Iter [87/100] lr_generator: 2.500e-05, eta: 0:00:20, time: 0.896, data_time: 0.003, memory: 3518, loss_pix: 0.0450, loss: 0.0450
2021-07-01 12:09:06,351 - mmedit - INFO - Iter [88/100] lr_generator: 2.500e-05, eta: 0:00:18, time: 0.900, data_time: 0.003, memory: 3518, loss_pix: 0.0795, loss: 0.0795
2021-07-01 12:09:07,257 - mmedit - INFO - Iter [89/100] lr_generator: 2.500e-05, eta: 0:00:17, time: 0.905, data_time: 0.003, memory: 3518, loss_pix: 0.0522, loss: 0.0522
2021-07-01 12:09:08,161 - mmedit - INFO - Iter [90/100] lr_generator: 2.500e-05, eta: 0:00:15, time: 0.904, data_time: 0.003, memory: 3518, loss_pix: 0.0588, loss: 0.0588
2021-07-01 12:09:09,063 - mmedit - INFO - Iter [91/100] lr_generator: 2.500e-05, eta: 0:00:13, time: 0.902, data_time: 0.003, memory: 3518, loss_pix: 0.0614, loss: 0.0614
2021-07-01 12:09:09,955 - mmedit - INFO - Iter [92/100] lr_generator: 2.500e-05, eta: 0:00:12, time: 0.892, data_time: 0.003, memory: 3518, loss_pix: 0.0599, loss: 0.0599
2021-07-01 12:09:10,849 - mmedit - INFO - Iter [93/100] lr_generator: 2.500e-05, eta: 0:00:10, time: 0.894, data_time: 0.003, memory: 3518, loss_pix: 0.0522, loss: 0.0522
2021-07-01 12:09:11,749 - mmedit - INFO - Iter [94/100] lr_generator: 2.500e-05, eta: 0:00:09, time: 0.900, data_time: 0.003, memory: 3518, loss_pix: 0.0667, loss: 0.0667
2021-07-01 12:09:12,638 - mmedit - INFO - Iter [95/100] lr_generator: 2.500e-05, eta: 0:00:07, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0558, loss: 0.0558
2021-07-01 12:09:13,576 - mmedit - INFO - Iter [96/100] lr_generator: 2.500e-05, eta: 0:00:06, time: 0.938, data_time: 0.052, memory: 3518, loss_pix: 0.0577, loss: 0.0577
2021-07-01 12:09:14,463 - mmedit - INFO - Iter [97/100] lr_generator: 2.500e-05, eta: 0:00:04, time: 0.887, data_time: 0.003, memory: 3518, loss_pix: 0.0574, loss: 0.0574
2021-07-01 12:09:15,351 - mmedit - INFO - Iter [98/100] lr_generator: 2.500e-05, eta: 0:00:02, time: 0.888, data_time: 0.003, memory: 3518, loss_pix: 0.0578, loss: 0.0578
2021-07-01 12:09:16,240 - mmedit - INFO - Iter [99/100] lr_generator: 2.500e-05, eta: 0:00:01, time: 0.889, data_time: 0.003, memory: 3518, loss_pix: 0.0556, loss: 0.0556
[>>] 2/2, 0.1 task/s, elapsed: 34s, ETA: 0s
2021-07-01 12:09:52,294 - mmedit - INFO - Saving checkpoint at 100 iterations
2021-07-01 12:09:52,433 - mmedit - INFO - Iter(val) [100] PSNR: 21.4372, SSIM: 0.5687
tensorflow_ranking/examples/handling_sparse_features.ipynb | ###Markdown
Tutorial: TF-Ranking for sparse features

This tutorial is an end-to-end walkthrough of training a TensorFlow Ranking (TF-Ranking) neural network model which incorporates sparse textual features. A Python script version of this code is available [here](https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/examples/tf_ranking_tfrecord.py). The script version supports flags for hyperparameters, and advanced use-cases like [Document Interaction Network](https://research.google/pubs/pub49364).

TF-Ranking is a library for solving large scale ranking problems using deep learning. TF-Ranking can handle heterogeneous dense and sparse features, and scales up to millions of data points. For more details, please read the technical paper published on [arXiv](https://arxiv.org/abs/1812.00073).

Motivation

Learning to Rank (LTR) deals with learning to optimally order a list of examples, given some context. For instance, in search applications, examples are documents and context is the query. These models are usually trained using user relevance feedback, which can be explicit (human ratings) or implicit (clicks).

This tutorial demonstrates how to build ranking estimators over sparse features, such as textual data. Textual data is prevalent in several settings for ranking, and plays a significant role in relevance judgment by a user. In three different LTR scenarios, the following textual features provide useful signals for ranking:

* Search: queries and document titles
* Question Answering: questions and answers
* Recommendation: titles of items and their descriptions

Hence it is important for LTR models to effectively incorporate textual features.

Task: Ranking over Question-Answering data

ANTIQUE: A Question Answering Dataset

For the purpose of this tutorial, we consider a ranking problem over ANTIQUE, a question-answering dataset. Given a query and a list of answers, the objective is to maximize a rank-related metric (say NDCG).

[ANTIQUE](http://hamedz.ir/resources/) is a publicly available dataset for open-domain non-factoid question answering, collected over Yahoo! answers. Each question has a list of answers, whose relevance is graded on a scale of 1-5. The list size can vary depending on the query, so we use a fixed "list size" of 50, where the list is either truncated or padded with dummy values. This dataset is well suited to the learning-to-rank scenario. The dataset is split into 2206 queries for training and 200 queries for testing. For more details, please read the technical paper on [arXiv](https://arxiv.org/abs/1905.08957).

Download training, test data and vocabulary file.
###Code
!wget -O "/tmp/vocab.txt" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/vocab.txt"
!wget -O "/tmp/train.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/ELWC/train.tfrecords"
!wget -O "/tmp/test.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking//ELWC/test.tfrecords"
###Output
_____no_output_____
###Markdown
Next, we discuss data formats in more detail, and show how to generate and store dummy ranking data.

Data Formats

Data Formats for Ranking

For representing ranking data, [protobuffers](https://developers.google.com/protocol-buffers/) are extensible structures suitable for storing data in a serialized format, either locally or in a distributed manner. Ranking usually consists of features corresponding to each of the examples being sorted. In addition, features related to query, user or session are also useful for ranking. We refer to these as context features, as these are independent of the examples.

We use the popular [tf.Example](https://www.tensorflow.org/tutorials/load_data/tfrecord) proto to represent the features for context, and each of the examples. We use the protobuffer, **ExampleListWithContext** (ELWC), to store context as a tf.Example proto and the list of examples to be ranked as a list of tf.Example protos. The ExampleListWithContext protobuffer is defined [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/input.proto#L72).

Let us create some dummy data in ELWC format. We will use this dummy data to show what the proto looks like.

Download and install the TensorFlow Ranking and TensorFlow Serving packages.
###Code
!pip install -q tensorflow_ranking tensorflow-serving-api
###Output
_____no_output_____
###Markdown
Let us start by importing libraries that will be used throughout this Notebook. We also enable the "eager execution" mode for convenience and demonstration purposes.
###Code
import tensorflow as tf
import tensorflow_ranking as tfr
from tensorflow_serving.apis import input_pb2
from google.protobuf import text_format
CONTEXT = text_format.Parse(
"""
features {
feature {
key: "query_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "question"] } }
}
}""", tf.train.Example())
EXAMPLES = [
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "answer"] } }
}
feature {
key: "relevance"
value { int64_list { value: 5 } }
}
}""", tf.train.Example()),
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["irrelevant", "data"] } }
}
feature {
key: "relevance"
value { int64_list { value: 1 } }
}
}""", tf.train.Example()),
]
ELWC = input_pb2.ExampleListWithContext()
ELWC.context.CopyFrom(CONTEXT)
for example in EXAMPLES:
example_features = ELWC.examples.add()
example_features.CopyFrom(example)
print(ELWC)
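# A minimal sketch of storing the dummy ELWC proto on disk as a TFRecord file
# (the output path below is a placeholder and is not reused later in this notebook):
with tf.io.TFRecordWriter("/tmp/dummy_elwc.tfrecords") as writer:
  writer.write(ELWC.SerializeToString())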
###Output
_____no_output_____
###Markdown
Dependencies and Global Variables

Here we define the train and test paths, along with model hyperparameters.
###Code
# Store the paths to files containing training and test instances.
_TRAIN_DATA_PATH = "/tmp/train.tfrecords"
_TEST_DATA_PATH = "/tmp/test.tfrecords"
# Store the vocabulary path for query and document tokens.
_VOCAB_PATH = "/tmp/vocab.txt"
# The maximum number of documents per query in the dataset.
# Document lists are padded or truncated to this size.
_LIST_SIZE = 50
# The document relevance label.
_LABEL_FEATURE = "relevance"
# Padding labels are set negative so that the corresponding examples can be
# ignored in loss and metrics.
_PADDING_LABEL = -1
# Learning rate for optimizer.
_LEARNING_RATE = 0.05
# Parameters to the scoring function.
_BATCH_SIZE = 32
_HIDDEN_LAYER_DIMS = ["64", "32", "16"]
_DROPOUT_RATE = 0.8
_GROUP_SIZE = 1 # Pointwise scoring.
# Location of model directory and number of training steps.
_MODEL_DIR = "/tmp/ranking_model_dir"
_NUM_TRAIN_STEPS = 15 * 1000
###Output
_____no_output_____
###Markdown
Components of a Ranking Estimator

The overall components of a Ranking Estimator are shown below. The key components of the library are:

1. Input Reader
2. Transform Function
3. Scoring Function
4. Ranking Losses
5. Ranking Metrics
6. Ranking Head
7. Model Builder

These are described in more detail in the following sections.

TensorFlow Ranking Architecture

Specifying Features via Feature Columns

[Feature Columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns) are TensorFlow abstractions that are used to capture rich information about each feature. They allow for easy transformations for a diverse range of raw features and for interfacing with Estimators. Consistent with our input formats for ranking, such as the ELWC format, we create feature columns for context features and example features.
###Code
_EMBEDDING_DIMENSION = 20
def context_feature_columns():
"""Returns context feature names to column definitions."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="query_tokens",
vocabulary_file=_VOCAB_PATH)
query_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"query_tokens": query_embedding_column}
def example_feature_columns():
"""Returns the example feature columns."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="document_tokens",
vocabulary_file=_VOCAB_PATH)
document_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"document_tokens": document_embedding_column}
###Output
_____no_output_____
###Markdown
Reading Input Data using *input_fn*

The input reader reads in data from persistent storage to produce raw dense and sparse tensors of the appropriate type for each feature. Example features are represented by 3-D tensors (where dimensions correspond to queries, examples and feature values). Context features are represented by 2-D tensors (where dimensions correspond to queries and feature values).
###Code
def input_fn(path, num_epochs=None):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
label_column = tf.feature_column.numeric_column(
_LABEL_FEATURE, dtype=tf.int64, default_value=_PADDING_LABEL)
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()) + [label_column])
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=num_epochs)
features = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
label = tf.squeeze(features.pop(_LABEL_FEATURE), axis=2)
label = tf.cast(label, tf.float32)
return features, label
###Output
_____no_output_____
###Markdown
Feature Transformations with *transform_fn*

The transform function takes in the raw dense or sparse features from the input reader and applies suitable transformations to return dense representations for each feature. This is important before passing these features to a neural network, as neural network layers usually take dense features as inputs.

The transform function handles any custom feature transformations defined by the user. For handling sparse features, like text data, we provide an easy utility to create shared embeddings, based on the feature columns.
###Code
def make_transform_fn():
def _transform_fn(features, mode):
"""Defines transform_fn."""
context_features, example_features = tfr.feature.encode_listwise_features(
features=features,
context_feature_columns=context_feature_columns(),
example_feature_columns=example_feature_columns(),
mode=mode,
scope="transform_layer")
return context_features, example_features
return _transform_fn
###Output
_____no_output_____
###Markdown
Feature Interactions using *scoring_fn*

Next, we turn to the scoring function, which is arguably at the heart of a TF-Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function.

Here we formulate a scoring function using a feed-forward network. The function takes the features of a single example (i.e., query-document pair) and produces a relevance score.
###Code
def make_score_fn():
"""Returns a scoring function to build `EstimatorSpec`."""
def _score_fn(context_features, group_features, mode, params, config):
"""Defines the network to score a group of documents."""
with tf.compat.v1.name_scope("input_layer"):
context_input = [
tf.compat.v1.layers.flatten(context_features[name])
for name in sorted(context_feature_columns())
]
group_input = [
tf.compat.v1.layers.flatten(group_features[name])
for name in sorted(example_feature_columns())
]
input_layer = tf.concat(context_input + group_input, 1)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
cur_layer = input_layer
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
for i, layer_width in enumerate(int(d) for d in _HIDDEN_LAYER_DIMS):
cur_layer = tf.compat.v1.layers.dense(cur_layer, units=layer_width)
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
cur_layer = tf.nn.relu(cur_layer)
cur_layer = tf.compat.v1.layers.dropout(
inputs=cur_layer, rate=_DROPOUT_RATE, training=is_training)
logits = tf.compat.v1.layers.dense(cur_layer, units=_GROUP_SIZE)
return logits
return _score_fn
###Output
_____no_output_____
###Markdown
Losses, Metrics and Ranking Head

Evaluation Metrics

We have provided an implementation of several popular Information Retrieval evaluation metrics in the TF-Ranking library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/metrics.py#L32). The user can also define a custom evaluation metric, as shown in the description below.
###Code
def eval_metric_fns():
"""Returns a dict from name to metric functions.
This can be customized as follows. Care must be taken when handling padded
lists.
def _auc(labels, predictions, features):
is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
clean_labels = tf.boolean_mask(tf.reshape(labels, [-1, 1]), is_label_valid)
clean_pred = tf.boolean_mask(tf.reshape(predictions, [-1, 1]), is_label_valid)
return tf.metrics.auc(clean_labels, tf.sigmoid(clean_pred), ...)
metric_fns["auc"] = _auc
Returns:
A dict mapping from metric name to a metric function with above signature.
"""
metric_fns = {}
metric_fns.update({
f"metric/ndcg@{topn}": tfr.metrics.make_ranking_metric_fn(
tfr.metrics.RankingMetricKey.NDCG, topn=topn)
for topn in [1, 3, 5, 10]
})
return metric_fns
###Output
_____no_output_____
###Markdown
Ranking Losses

We provide several popular ranking loss functions as part of the library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/losses.py#L35). The user can also define a custom loss function, similar to the ones in tfr.losses; a commented sketch of swapping in a different built-in loss appears at the end of the next code cell.
###Code
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS = tfr.losses.RankingLossKey.APPROX_NDCG_LOSS
loss_fn = tfr.losses.make_loss_fn(_LOSS)
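# (Sketch) Other built-in losses can be swapped in by changing the key above,
# for example a pairwise loss instead of the listwise ApproxNDCG loss:
#   _LOSS = tfr.losses.RankingLossKey.PAIRWISE_LOGISTIC_LOSS
#   loss_fn = tfr.losses.make_loss_fn(_LOSS)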
###Output
_____no_output_____
###Markdown
Ranking Head

In the Estimator workflow, Head is an abstraction that encapsulates losses and corresponding metrics. Head easily interfaces with the Estimator, requiring the user only to define a scoring function and specify losses and metric computation.
###Code
optimizer = tf.compat.v1.train.AdagradOptimizer(
learning_rate=_LEARNING_RATE)
def _train_op_fn(loss):
"""Defines train op used in ranking head."""
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
minimize_op = optimizer.minimize(
loss=loss, global_step=tf.compat.v1.train.get_global_step())
train_op = tf.group([update_ops, minimize_op])
return train_op
ranking_head = tfr.head.create_ranking_head(
loss_fn=loss_fn,
eval_metric_fns=eval_metric_fns(),
train_op_fn=_train_op_fn)
###Output
_____no_output_____
###Markdown
Putting It All Together in a Model Builder

We are now ready to put all of the components above together and create an `Estimator` that can be used to train and evaluate a model.
###Code
model_fn = tfr.model.make_groupwise_ranking_fn(
group_score_fn=make_score_fn(),
transform_fn=make_transform_fn(),
group_size=_GROUP_SIZE,
ranking_head=ranking_head)
###Output
_____no_output_____
###Markdown
Train and evaluate the ranker
###Code
def train_and_eval_fn():
"""Train and eval function used by `tf.estimator.train_and_evaluate`."""
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=1000)
ranker = tf.estimator.Estimator(
model_fn=model_fn,
model_dir=_MODEL_DIR,
config=run_config)
train_input_fn = lambda: input_fn(_TRAIN_DATA_PATH)
eval_input_fn = lambda: input_fn(_TEST_DATA_PATH, num_epochs=1)
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn, max_steps=_NUM_TRAIN_STEPS)
eval_spec = tf.estimator.EvalSpec(
name="eval",
input_fn=eval_input_fn,
throttle_secs=15)
return (ranker, train_spec, eval_spec)
! rm -rf "/tmp/ranking_model_dir" # Clean up the model directory.
ranker, train_spec, eval_spec = train_and_eval_fn()
tf.estimator.train_and_evaluate(ranker, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Launch TensorBoard
###Code
%load_ext tensorboard
%tensorboard --logdir="/tmp/ranking_model_dir" --port 12345
###Output
_____no_output_____
###Markdown
A sample TensorBoard output is shown here, with the ranking metrics.

Generating Predictions

We show how to generate predictions over the features of a dataset. We assume that the label is not present and needs to be inferred using the ranking model.

Similar to the `input_fn` used for training and evaluation, `predict_input_fn` reads in data in ELWC format, stored as TFRecords, to generate features. We set the number of epochs to 1, so that the generator stops iterating when it reaches the end of the dataset. The datapoints are also not shuffled while reading, so that the behavior of the `predict()` function is deterministic.
###Code
def predict_input_fn(path):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()))
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=1)
features = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
return features
###Output
_____no_output_____
###Markdown
We generate predictions on the test dataset, where we only consider context and example features and predict the labels. The `predict_input_fn` generates features for a batch of datapoints at a time. Batching allows us to iterate over large datasets that cannot be loaded in memory.
###Code
predictions = ranker.predict(input_fn=lambda: predict_input_fn("/tmp/test.tfrecords"))
###Output
_____no_output_____
###Markdown
`ranker.predict` returns a generator, which we can iterate over to create predictions until the generator is exhausted (a commented sketch of scoring the full test set is included at the end of the next cell).
###Code
x = next(predictions)
assert len(x) == _LIST_SIZE # Note that this includes padding.
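# (Sketch) To score every list in the test set, iterate until the generator is
# exhausted, for example:
#   import numpy as np
#   all_scores = np.array(list(
#       ranker.predict(input_fn=lambda: predict_input_fn("/tmp/test.tfrecords"))))
#   print(all_scores.shape)  # (num_queries, _LIST_SIZE); padded entries included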
###Output
_____no_output_____
###Markdown
Tutorial: TF-Ranking for sparse features

This tutorial is an end-to-end walkthrough of training a TensorFlow Ranking (TF-Ranking) neural network model which incorporates sparse textual features. A Python script version of this code is available [here](https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/examples/tf_ranking_tfrecord.py). The script version supports flags for hyperparameters, and advanced use-cases like [Document Interaction Networks](https://arxiv.org/pdf/1910.09676.pdf).

TF-Ranking is a library for solving large scale ranking problems using deep learning. TF-Ranking can handle heterogeneous dense and sparse features, and scales up to millions of data points. For more details, please read the technical paper published on [arXiv](https://arxiv.org/abs/1812.00073).

Motivation

Learning to Rank (LTR) deals with learning to optimally order a list of examples, given some context. For instance, in search applications, examples are documents and context is the query. These models are usually trained using user relevance feedback, which can be explicit (human ratings) or implicit (clicks).

This tutorial demonstrates how to build ranking estimators over sparse features, such as textual data. Textual data is prevalent in several settings for ranking, and plays a significant role in relevance judgment by a user. In three different LTR scenarios, the following textual features provide useful signals for ranking:

* Search: queries and document titles
* Question Answering: questions and answers
* Recommendation: titles of items and their descriptions

Hence it is important for LTR models to effectively incorporate textual features.

Task: Ranking over Question-Answering data

ANTIQUE: A Question Answering Dataset

For the purpose of this tutorial, we consider a ranking problem over ANTIQUE, a question-answering dataset. Given a query and a list of answers, the objective is to maximize a rank-related metric (say NDCG).

[ANTIQUE](http://hamedz.ir/resources/) is a publicly available dataset for open-domain non-factoid question answering, collected over Yahoo! answers. Each question has a list of answers, whose relevance is graded on a scale of 1-5. The list size can vary depending on the query, so we use a fixed "list size" of 50, where the list is either truncated or padded with dummy values. This dataset is well suited to the learning-to-rank scenario. The dataset is split into 2206 queries for training and 200 queries for testing. For more details, please read the technical paper on [arXiv](https://arxiv.org/pdf/1905.08957.pdf).

Download training, test data and vocabulary file.
###Code
!wget -O "/tmp/vocab.txt" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/vocab.txt"
!wget -O "/tmp/train.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/ELWC/train.tfrecords"
!wget -O "/tmp/test.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking//ELWC/test.tfrecords"
###Output
_____no_output_____
###Markdown
Next, we discuss data formats in more detail, and show how to generate and store dummy ranking data.

Data Formats

Data Formats for Ranking

For representing ranking data, [protobuffers](https://developers.google.com/protocol-buffers/) are extensible structures suitable for storing data in a serialized format, either locally or in a distributed manner. Ranking usually consists of features corresponding to each of the examples being sorted. In addition, features related to query, user or session are also useful for ranking. We refer to these as context features, as these are independent of the examples.

We use the popular [tf.Example](https://www.tensorflow.org/tutorials/load_data/tf_records) proto to represent the features for context, and each of the examples. We use the protobuffer, **ExampleListWithContext** (ELWC), to store context as a tf.Example proto and the list of examples to be ranked as a list of tf.Example protos. The ExampleListWithContext protobuffer is defined [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/input.proto#L72).

Let us create some dummy data in ELWC format. We will use this dummy data to show what the proto looks like.

Download and install the TensorFlow 2 package.
###Code
print('Installing TensorFlow 2.1.0. This will take a minute, ignore the warnings.')
!pip install -q tensorflow==2.1.0
import tensorflow as tf
# This is needed for tensorboard compatibility.
!pip uninstall -y grpcio
!pip install -q "grpcio>=1.24.3"
from google.protobuf import text_format
CONTEXT = text_format.Parse(
"""
features {
feature {
key: "query_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "question"] } }
}
}""", tf.train.Example())
EXAMPLES = [
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "answer"] } }
}
feature {
key: "relevance"
value { int64_list { value: 5 } }
}
}""", tf.train.Example()),
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["irrelevant", "data"] } }
}
feature {
key: "relevance"
value { int64_list { value: 1 } }
}
}""", tf.train.Example()),
]
try:
from tensorflow_serving.apis import input_pb2
except ImportError:
!pip install -q tensorflow-serving-api
from tensorflow_serving.apis import input_pb2
ELWC = input_pb2.ExampleListWithContext()
ELWC.context.CopyFrom(CONTEXT)
for example in EXAMPLES:
example_features = ELWC.examples.add()
example_features.CopyFrom(example)
print(ELWC)
###Output
_____no_output_____
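###Markdown
To persist ranking data in this format, each ELWC proto is serialized and written as one record of a TFRecord file. The snippet below is a minimal sketch of that round trip using the dummy proto above; the path `/tmp/dummy.tfrecords` is a placeholder introduced here for illustration and is not used by the rest of the tutorial.
###Code
# Write the dummy ELWC proto to a TFRecord file and read it back (illustrative only).
_DUMMY_DATA_PATH = "/tmp/dummy.tfrecords"  # Placeholder path, not part of the original tutorial.

with tf.io.TFRecordWriter(_DUMMY_DATA_PATH) as writer:
  # One record per query: the serialized ExampleListWithContext proto.
  writer.write(ELWC.SerializeToString())

for serialized in tf.data.TFRecordDataset(_DUMMY_DATA_PATH).take(1):
  parsed = input_pb2.ExampleListWithContext.FromString(serialized.numpy())
  print("Examples in the parsed list:", len(parsed.examples))
###Output
_____no_output_____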
###Markdown
Dependencies and Global Variables

Let us start by importing libraries that will be used throughout this Notebook. Note that with the TensorFlow 2 package installed above, eager execution is enabled by default, which is convenient for demonstration purposes.
###Code
import six
import os
import numpy as np
try:
import tensorflow_ranking as tfr
except ImportError:
!pip install -q tensorflow_ranking
import tensorflow_ranking as tfr
###Output
_____no_output_____
###Markdown
Here we define the train and test paths, along with model hyperparameters.
###Code
# Store the paths to files containing training and test instances.
_TRAIN_DATA_PATH = "/tmp/train.tfrecords"
_TEST_DATA_PATH = "/tmp/test.tfrecords"
# Store the vocabulary path for query and document tokens.
_VOCAB_PATH = "/tmp/vocab.txt"
# The maximum number of documents per query in the dataset.
# Document lists are padded or truncated to this size.
_LIST_SIZE = 50
# The document relevance label.
_LABEL_FEATURE = "relevance"
# Padding labels are set negative so that the corresponding examples can be
# ignored in loss and metrics.
_PADDING_LABEL = -1
# Learning rate for optimizer.
_LEARNING_RATE = 0.05
# Parameters to the scoring function.
_BATCH_SIZE = 32
_HIDDEN_LAYER_DIMS = ["64", "32", "16"]
_DROPOUT_RATE = 0.8
_GROUP_SIZE = 1 # Pointwise scoring.
# Location of model directory and number of training steps.
_MODEL_DIR = "/tmp/ranking_model_dir"
_NUM_TRAIN_STEPS = 15 * 1000
###Output
_____no_output_____
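###Markdown
With the paths defined, we can sanity-check the downloaded data by parsing one raw record of the training file as an ExampleListWithContext proto. This is an optional sketch, assuming the downloads above completed successfully; it is not required by the rest of the walkthrough.
###Code
# Parse the first training record to confirm it matches the ELWC structure shown earlier.
for serialized in tf.data.TFRecordDataset(_TRAIN_DATA_PATH).take(1):
  elwc = input_pb2.ExampleListWithContext.FromString(serialized.numpy())
  print("Context feature keys:", list(elwc.context.features.feature.keys()))
  print("Number of answers in this list:", len(elwc.examples))
###Output
_____no_output_____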
###Markdown
Components of a Ranking Estimator

The overall components of a Ranking Estimator are shown below. The key components of the library are:

1. Input Reader
2. Transform Function
3. Scoring Function
4. Ranking Losses
5. Ranking Metrics
6. Ranking Head
7. Model Builder

These are described in more detail in the following sections.

TensorFlow Ranking Architecture

Specifying Features via Feature Columns

[Feature Columns](https://www.tensorflow.org/guide/feature_columns) are TensorFlow abstractions that are used to capture rich information about each feature. They allow for easy transformations for a diverse range of raw features and for interfacing with Estimators.

Consistent with our input formats for ranking, such as the ELWC format, we create feature columns for context features and example features.
###Code
_EMBEDDING_DIMENSION = 20
def context_feature_columns():
"""Returns context feature names to column definitions."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="query_tokens",
vocabulary_file=_VOCAB_PATH)
query_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"query_tokens": query_embedding_column}
def example_feature_columns():
"""Returns the example feature columns."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="document_tokens",
vocabulary_file=_VOCAB_PATH)
document_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"document_tokens": document_embedding_column}
###Output
_____no_output_____
###Markdown
Reading Input Data using *input_fn*

The input reader reads in data from persistent storage to produce raw dense and sparse tensors of appropriate type for each feature. Example features are represented by 3-D tensors (where dimensions correspond to queries, examples and feature values). Context features are represented by 2-D tensors (where dimensions correspond to queries and feature values).
###Code
def input_fn(path, num_epochs=None):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
label_column = tf.feature_column.numeric_column(
_LABEL_FEATURE, dtype=tf.int64, default_value=_PADDING_LABEL)
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()) + [label_column])
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=num_epochs)
features = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
label = tf.squeeze(features.pop(_LABEL_FEATURE), axis=2)
label = tf.cast(label, tf.float32)
return features, label
###Output
_____no_output_____
###Markdown
Feature Transformations with *transform_fn*

The transform function takes in the raw dense or sparse features from the input reader and applies suitable transformations to return dense representations for each feature. This is important before passing these features to a neural network, as neural network layers usually take dense features as inputs.

The transform function handles any custom feature transformations defined by the user. For handling sparse features, like text data, we provide an easy utility to create shared embeddings, based on the feature columns.
###Code
def make_transform_fn():
def _transform_fn(features, mode):
"""Defines transform_fn."""
context_features, example_features = tfr.feature.encode_listwise_features(
features=features,
context_feature_columns=context_feature_columns(),
example_feature_columns=example_feature_columns(),
mode=mode,
scope="transform_layer")
return context_features, example_features
return _transform_fn
###Output
_____no_output_____
###Markdown
Feature Interactions using *scoring_fn*

Next, we turn to the scoring function, which is arguably at the heart of a TF-Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function.

Here we formulate a scoring function using a feed forward network. The function takes the features of a single example (i.e., query-document pair) and produces a relevance score.
###Code
def make_score_fn():
"""Returns a scoring function to build `EstimatorSpec`."""
def _score_fn(context_features, group_features, mode, params, config):
"""Defines the network to score a group of documents."""
with tf.compat.v1.name_scope("input_layer"):
context_input = [
tf.compat.v1.layers.flatten(context_features[name])
for name in sorted(context_feature_columns())
]
group_input = [
tf.compat.v1.layers.flatten(group_features[name])
for name in sorted(example_feature_columns())
]
input_layer = tf.concat(context_input + group_input, 1)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
cur_layer = input_layer
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
for i, layer_width in enumerate(int(d) for d in _HIDDEN_LAYER_DIMS):
cur_layer = tf.compat.v1.layers.dense(cur_layer, units=layer_width)
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
cur_layer = tf.nn.relu(cur_layer)
cur_layer = tf.compat.v1.layers.dropout(
inputs=cur_layer, rate=_DROPOUT_RATE, training=is_training)
logits = tf.compat.v1.layers.dense(cur_layer, units=_GROUP_SIZE)
return logits
return _score_fn
###Output
_____no_output_____
###Markdown
Losses, Metrics and Ranking Head

Evaluation Metrics

We have provided an implementation of several popular Information Retrieval evaluation metrics in the TF Ranking library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/metrics.py#L32). The user can also define a custom evaluation metric, as shown in the docstring below.
###Code
def eval_metric_fns():
"""Returns a dict from name to metric functions.
This can be customized as follows. Care must be taken when handling padded
lists.
def _auc(labels, predictions, features):
    is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
    clean_labels = tf.boolean_mask(tf.reshape(labels, [-1, 1]), is_label_valid)
    clean_pred = tf.boolean_mask(tf.reshape(predictions, [-1, 1]), is_label_valid)
return tf.metrics.auc(clean_labels, tf.sigmoid(clean_pred), ...)
metric_fns["auc"] = _auc
Returns:
A dict mapping from metric name to a metric function with above signature.
"""
metric_fns = {}
metric_fns.update({
"metric/ndcg@%d" % topn: tfr.metrics.make_ranking_metric_fn(
tfr.metrics.RankingMetricKey.NDCG, topn=topn)
for topn in [1, 3, 5, 10]
})
return metric_fns
###Output
_____no_output_____
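###Markdown
Other built-in metrics can be added in the same way. As an illustration, the sketch below extends the dictionary above with Mean Reciprocal Rank (MRR), one of the keys available in `tfr.metrics.RankingMetricKey`; `extended_eval_metric_fns` is a name introduced here for illustration, and to actually use it you would pass it to the ranking head defined below instead of `eval_metric_fns()`.
###Code
# Optional: add MRR alongside the NDCG metrics (illustrative only).
def extended_eval_metric_fns():
  """Returns the metrics above plus Mean Reciprocal Rank."""
  metric_fns = eval_metric_fns()
  metric_fns["metric/mrr"] = tfr.metrics.make_ranking_metric_fn(
      tfr.metrics.RankingMetricKey.MRR)
  return metric_fns
###Output
_____no_output_____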
###Markdown
Ranking Losses

We provide several popular ranking loss functions as part of the library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/losses.py#L35). The user can also define a custom loss function, similar to the ones in tfr.losses.
###Code
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS = tfr.losses.RankingLossKey.APPROX_NDCG_LOSS
loss_fn = tfr.losses.make_loss_fn(_LOSS)
###Output
_____no_output_____
###Markdown
Ranking Head

In the Estimator workflow, Head is an abstraction that encapsulates losses and corresponding metrics. Head interfaces easily with the Estimator; the user only needs to define a scoring function and to specify the losses and metric computation.
###Code
optimizer = tf.compat.v1.train.AdagradOptimizer(
learning_rate=_LEARNING_RATE)
def _train_op_fn(loss):
"""Defines train op used in ranking head."""
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
minimize_op = optimizer.minimize(
loss=loss, global_step=tf.compat.v1.train.get_global_step())
train_op = tf.group([update_ops, minimize_op])
return train_op
ranking_head = tfr.head.create_ranking_head(
loss_fn=loss_fn,
eval_metric_fns=eval_metric_fns(),
train_op_fn=_train_op_fn)
###Output
_____no_output_____
###Markdown
Putting It All Together in a Model Builder

We are now ready to put all of the components above together and create an `Estimator` that can be used to train and evaluate a model.
###Code
model_fn = tfr.model.make_groupwise_ranking_fn(
group_score_fn=make_score_fn(),
transform_fn=make_transform_fn(),
group_size=_GROUP_SIZE,
ranking_head=ranking_head)
###Output
_____no_output_____
###Markdown
Train and evaluate the ranker
###Code
def train_and_eval_fn():
"""Train and eval function used by `tf.estimator.train_and_evaluate`."""
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=1000)
ranker = tf.estimator.Estimator(
model_fn=model_fn,
model_dir=_MODEL_DIR,
config=run_config)
train_input_fn = lambda: input_fn(_TRAIN_DATA_PATH)
eval_input_fn = lambda: input_fn(_TEST_DATA_PATH, num_epochs=1)
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn, max_steps=_NUM_TRAIN_STEPS)
eval_spec = tf.estimator.EvalSpec(
name="eval",
input_fn=eval_input_fn,
throttle_secs=15)
return (ranker, train_spec, eval_spec)
! rm -rf "/tmp/ranking_model_dir" # Clean up the model directory.
ranker, train_spec, eval_spec = train_and_eval_fn()
tf.estimator.train_and_evaluate(ranker, train_spec, eval_spec)
###Output
_____no_output_____
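###Markdown
After training, the ranker can also be evaluated on its own. The optional sketch below runs a single pass over the test set and prints the resulting metric dictionary, which contains the NDCG metrics defined earlier (keyed by their names) along with the loss; it simply restores the latest checkpoint from the model directory.
###Code
# Standalone evaluation pass over the test data (illustrative only).
eval_metrics = ranker.evaluate(
    input_fn=lambda: input_fn(_TEST_DATA_PATH, num_epochs=1))
for name in sorted(eval_metrics):
  print(name, eval_metrics[name])
###Output
_____no_output_____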
###Markdown
Launch TensorBoard
###Code
%load_ext tensorboard
%tensorboard --logdir="/tmp/ranking_model_dir" --port 12345
###Output
_____no_output_____
###Markdown
A sample TensorBoard output is shown here, with the ranking metrics.

Generating Predictions

We show how to generate predictions over the features of a dataset. We assume that the label is not present and needs to be inferred using the ranking model.

Similar to the `input_fn` used for training and evaluation, `predict_input_fn` reads in data stored as TFRecords in ELWC format and generates features. We set the number of epochs to 1, so that the generator stops iterating when it reaches the end of the dataset. Also, the data points are not shuffled while reading, so that the behavior of the `predict()` function is deterministic.
###Code
def predict_input_fn(path):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()))
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=1)
features = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
return features
###Output
_____no_output_____
###Markdown
We generate predictions on the test dataset, where we only consider context and example features and predict the labels. The `predict_input_fn` generates predictions on a batch of datapoints. Batching allows us to iterate over large datasets which cannot be loaded in memory.
###Code
predictions = ranker.predict(input_fn=lambda: predict_input_fn("/tmp/test.tfrecords"))
###Output
_____no_output_____
###Markdown
`ranker.predict` returns a generator, which we can iterate over to create predictions, till the generator is exhausted.
###Code
x = next(predictions)
assert(len(x) == _LIST_SIZE) ## Note that this includes padding.
###Output
_____no_output_____
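###Markdown
The per-document scores can be turned into an actual ordering of the answers for a query, for example by sorting list positions by descending score. The sketch below does this for the single score vector `x` obtained above; note that padded positions are still included and would be filtered out in practice using the known list length.
###Code
# Order the documents of one query by predicted score (illustrative only).
scores = np.array(x)
ranked_positions = np.argsort(scores)[::-1]
print("Top 10 positions by score:", ranked_positions[:10])
###Output
_____no_output_____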
###Markdown
Tutorial: TF-Ranking for sparse features

This tutorial is an end-to-end walkthrough of training a TensorFlow Ranking (TF-Ranking) neural network model which incorporates sparse textual features.

TF-Ranking is a library for solving large-scale ranking problems using deep learning. TF-Ranking can handle heterogeneous dense and sparse features, and scales up to millions of data points. For more details, please read the technical paper published on [arXiv](https://arxiv.org/abs/1812.00073).

Motivation

Learning to Rank (LTR) deals with learning to optimally order a list of examples, given some context. For instance, in search applications, examples are documents and the context is the query. These models are usually trained using user relevance feedback, which can be explicit (human ratings) or implicit (clicks).

This tutorial demonstrates how to build ranking estimators over sparse features, such as textual data. Textual data is prevalent in several ranking settings and plays a significant role in relevance judgment by a user. In three different LTR scenarios, the following textual features provide useful signals for ranking:

* Search: queries and document titles
* Question Answering: questions and answers
* Recommendation: titles of items and their descriptions

Hence it is important for LTR models to effectively incorporate textual features.

Task: Ranking over Question-Answering data

ANTIQUE: A Question Answering Dataset

For the purpose of this tutorial, we consider a ranking problem over ANTIQUE, a question-answering dataset. Given a query and a list of answers, the objective is to maximize a rank-related metric (say NDCG).

[ANTIQUE](http://hamedz.ir/resources/) is a publicly available dataset for open-domain non-factoid question answering, collected over Yahoo! Answers. Each question has a list of answers, whose relevance is graded on a scale of 1-5. The list size can vary depending on the query, so we use a fixed "list size" of 50, where the list is either truncated or padded with dummy values.

This dataset is well suited to a learning-to-rank scenario. It is split into 2206 queries for training and 200 queries for testing. For more details, please read the technical paper on [arXiv](https://arxiv.org/pdf/1905.08957.pdf).

Download training, test data and vocabulary file.
###Code
!wget -O "/tmp/vocab.txt" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/vocab.txt"
!wget -O "/tmp/train.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/train.tfrecords"
!wget -O "/tmp/test.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/test.tfrecords"
###Output
_____no_output_____
###Markdown
Next, we discuss data formats in more detail, and show how to generate and store dummy ranking data.

Data Formats

Data Formats for Ranking

For representing ranking data, [protobuffers](https://developers.google.com/protocol-buffers/) are extensible structures suitable for storing data in a serialized format, either locally or in a distributed manner.

Ranking usually consists of features corresponding to each of the examples being sorted. In addition, features related to the query, user or session are also useful for ranking. We refer to these as context features, as they are independent of the examples.

We use the popular [tf.Example](https://www.tensorflow.org/tutorials/load_data/tf_records) proto to represent the features for the context and for each of the examples. We create a new format for ranking data, **Example in Example** (EIE), to store the context as a serialized tf.Example proto and the list of examples to be ranked as a list of serialized tf.Example protos.

Let us create some dummy data in EIE format. We will use this dummy data to show what the proto looks like.
###Code
from google.protobuf import text_format
import tensorflow as tf
CONTEXT = text_format.Parse(
"""
features {
feature {
key: "query_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "question"] } }
}
}""", tf.train.Example())
print(CONTEXT)
EXAMPLES = [
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "answer"] } }
}
feature {
key: "relevance"
value { int64_list { value: 5 } }
}
}""", tf.train.Example()),
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["irrelevant", "data"] } }
}
feature {
key: "relevance"
value { int64_list { value: 1 } }
}
}""", tf.train.Example()),
]
print(EXAMPLES)
###Output
[features {
feature {
key: "document_tokens"
value {
bytes_list {
value: "this"
value: "is"
value: "a"
value: "relevant"
value: "answer"
}
}
}
feature {
key: "relevance"
value {
int64_list {
value: 5
}
}
}
}
, features {
feature {
key: "document_tokens"
value {
bytes_list {
value: "irrelevant"
value: "data"
}
}
}
feature {
key: "relevance"
value {
int64_list {
value: 1
}
}
}
}
]
###Markdown
Dependencies and Global Variables

Let us start by importing libraries that will be used throughout this Notebook. We also enable the "eager execution" mode for convenience and demonstration purposes.
###Code
import six
import os
import numpy as np
try:
import tensorflow as tf
except ImportError:
print('Installing TensorFlow. This will take a minute, ignore the warnings.')
!pip install -q tensorflow
import tensorflow as tf
try:
import tensorflow_ranking as tfr
except ImportError:
!pip install -q tensorflow_ranking
import tensorflow_ranking as tfr
tf.enable_eager_execution()
tf.executing_eagerly()
tf.set_random_seed(1234)
tf.logging.set_verbosity(tf.logging.INFO)
###Output
_____no_output_____
###Markdown
Here we define the train and test paths, along with model hyperparameters.
###Code
# Store the paths to files containing training and test instances.
_TRAIN_DATA_PATH = "/tmp/train.tfrecords"
_TEST_DATA_PATH = "/tmp/test.tfrecords"
# Store the vocabulary path for query and document tokens.
_VOCAB_PATH = "/tmp/vocab.txt"
# The maximum number of documents per query in the dataset.
# Document lists are padded or truncated to this size.
_LIST_SIZE = 50
# The document relevance label.
_LABEL_FEATURE = "relevance"
# Padding labels are set negative so that the corresponding examples can be
# ignored in loss and metrics.
_PADDING_LABEL = -1
# Learning rate for optimizer.
_LEARNING_RATE = 0.05
# Parameters to the scoring function.
_BATCH_SIZE = 32
_HIDDEN_LAYER_DIMS = ["64", "32", "16"]
_DROPOUT_RATE = 0.8
_GROUP_SIZE = 1 # Pointwise scoring.
# Location of model directory and number of training steps.
_MODEL_DIR = "/tmp/ranking_model_dir"
_NUM_TRAIN_STEPS = 15 * 1000
###Output
_____no_output_____
###Markdown
Components of a Ranking Estimator

The overall components of a Ranking Estimator are shown below. The key components of the library are:

1. Input Reader
2. Transform Function
3. Scoring Function
4. Ranking Losses
5. Ranking Metrics
6. Ranking Head
7. Model Builder

These are described in more detail in the following sections.

TensorFlow Ranking Architecture

Specifying Features via Feature Columns

[Feature Columns](https://www.tensorflow.org/guide/feature_columns) are TensorFlow abstractions that are used to capture rich information about each feature. They allow for easy transformations for a diverse range of raw features and for interfacing with Estimators.

Consistent with our input formats for ranking, such as the EIE format, we create feature columns for context features and example features.
###Code
_EMBEDDING_DIMENSION = 20
def context_feature_columns():
"""Returns context feature names to column definitions."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="query_tokens",
vocabulary_file=_VOCAB_PATH)
query_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"query_tokens": query_embedding_column}
def example_feature_columns():
"""Returns the example feature columns."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="document_tokens",
vocabulary_file=_VOCAB_PATH)
document_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"document_tokens": document_embedding_column}
###Output
_____no_output_____
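###Markdown
To see what these columns expect from the raw input, we can derive the corresponding parsing specification with the standard feature-column API. This is an optional sketch; the same call is used inside `input_fn` below.
###Code
# Inspect the parse spec derived from the example feature columns (illustrative only).
example_spec = tf.feature_column.make_parse_example_spec(
    example_feature_columns().values())
print(example_spec)  # A dict mapping "document_tokens" to a variable-length string feature.
###Output
_____no_output_____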
###Markdown
Reading Input Data using *input_fn*

The input reader reads in data from persistent storage to produce raw dense and sparse tensors of appropriate type for each feature. Example features are represented by 3-D tensors (where dimensions correspond to queries, examples and feature values). Context features are represented by 2-D tensors (where dimensions correspond to queries and feature values).
###Code
def input_fn(path, num_epochs=None):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
label_column = tf.feature_column.numeric_column(
_LABEL_FEATURE, dtype=tf.int64, default_value=_PADDING_LABEL)
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()) + [label_column])
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.EIE,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=num_epochs)
features = tf.data.make_one_shot_iterator(dataset).get_next()
label = tf.squeeze(features.pop(_LABEL_FEATURE), axis=2)
label = tf.cast(label, tf.float32)
return features, label
###Output
_____no_output_____
###Markdown
Feature Transformations with *transform_fn*

The transform function takes in the raw dense or sparse features from the input reader and applies suitable transformations to return dense representations for each feature. This is important before passing these features to a neural network, as neural network layers usually take dense features as inputs.

The transform function handles any custom feature transformations defined by the user. For handling sparse features, like text data, we provide an easy utility to create shared embeddings, based on the feature columns.
###Code
def make_transform_fn():
def _transform_fn(features, mode):
"""Defines transform_fn."""
example_name = next(six.iterkeys(example_feature_columns()))
input_size = tf.shape(input=features[example_name])[1]
context_features, example_features = tfr.feature.encode_listwise_features(
features=features,
input_size=input_size,
context_feature_columns=context_feature_columns(),
example_feature_columns=example_feature_columns(),
mode=mode,
scope="transform_layer")
return context_features, example_features
return _transform_fn
###Output
_____no_output_____
###Markdown
Feature Interactions using *scoring_fn*

Next, we turn to the scoring function, which is arguably at the heart of a TF-Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function.

Here we formulate a scoring function using a feed forward network. The function takes the features of a single example (i.e., query-document pair) and produces a relevance score.
###Code
def make_score_fn():
"""Returns a scoring function to build `EstimatorSpec`."""
def _score_fn(context_features, group_features, mode, params, config):
"""Defines the network to score a group of documents."""
with tf.compat.v1.name_scope("input_layer"):
context_input = [
tf.compat.v1.layers.flatten(context_features[name])
for name in sorted(context_feature_columns())
]
group_input = [
tf.compat.v1.layers.flatten(group_features[name])
for name in sorted(example_feature_columns())
]
input_layer = tf.concat(context_input + group_input, 1)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
cur_layer = input_layer
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
for i, layer_width in enumerate(int(d) for d in _HIDDEN_LAYER_DIMS):
cur_layer = tf.compat.v1.layers.dense(cur_layer, units=layer_width)
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
cur_layer = tf.nn.relu(cur_layer)
cur_layer = tf.compat.v1.layers.dropout(
inputs=cur_layer, rate=_DROPOUT_RATE, training=is_training)
logits = tf.compat.v1.layers.dense(cur_layer, units=_GROUP_SIZE)
return logits
return _score_fn
###Output
_____no_output_____
###Markdown
Losses, Metrics and Ranking Head

Evaluation Metrics

We have provided an implementation of several popular Information Retrieval evaluation metrics in the TF Ranking library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/metrics.py#L32). The user can also define a custom evaluation metric, as shown in the docstring below.
###Code
def eval_metric_fns():
"""Returns a dict from name to metric functions.
This can be customized as follows. Care must be taken when handling padded
lists.
def _auc(labels, predictions, features):
    is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
    clean_labels = tf.boolean_mask(tf.reshape(labels, [-1, 1]), is_label_valid)
    clean_pred = tf.boolean_mask(tf.reshape(predictions, [-1, 1]), is_label_valid)
return tf.metrics.auc(clean_labels, tf.sigmoid(clean_pred), ...)
metric_fns["auc"] = _auc
Returns:
A dict mapping from metric name to a metric function with above signature.
"""
metric_fns = {}
metric_fns.update({
"metric/ndcg@%d" % topn: tfr.metrics.make_ranking_metric_fn(
tfr.metrics.RankingMetricKey.NDCG, topn=topn)
for topn in [1, 3, 5, 10]
})
return metric_fns
###Output
_____no_output_____
###Markdown
Ranking Losses

We provide several popular ranking loss functions as part of the library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/losses.py#L35). The user can also define a custom loss function, similar to the ones in tfr.losses.
###Code
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS = tfr.losses.RankingLossKey.APPROX_NDCG_LOSS
loss_fn = tfr.losses.make_loss_fn(_LOSS)
###Output
_____no_output_____
###Markdown
Ranking Head

In the Estimator workflow, Head is an abstraction that encapsulates losses and corresponding metrics. Head interfaces easily with the Estimator; the user only needs to define a scoring function and to specify the losses and metric computation.
###Code
optimizer = tf.compat.v1.train.AdagradOptimizer(
learning_rate=_LEARNING_RATE)
def _train_op_fn(loss):
"""Defines train op used in ranking head."""
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
minimize_op = optimizer.minimize(
loss=loss, global_step=tf.compat.v1.train.get_global_step())
train_op = tf.group([update_ops, minimize_op])
return train_op
ranking_head = tfr.head.create_ranking_head(
loss_fn=loss_fn,
eval_metric_fns=eval_metric_fns(),
train_op_fn=_train_op_fn)
###Output
_____no_output_____
###Markdown
Putting It All Together in a Model Builder

We are now ready to put all of the components above together and create an `Estimator` that can be used to train and evaluate a model.
###Code
model_fn = tfr.model.make_groupwise_ranking_fn(
group_score_fn=make_score_fn(),
transform_fn=make_transform_fn(),
group_size=_GROUP_SIZE,
ranking_head=ranking_head)
###Output
_____no_output_____
###Markdown
Train and evaluate the ranker
###Code
def train_and_eval_fn():
"""Train and eval function used by `tf.estimator.train_and_evaluate`."""
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=1000)
ranker = tf.estimator.Estimator(
model_fn=model_fn,
model_dir=_MODEL_DIR,
config=run_config)
train_input_fn = lambda: input_fn(_TRAIN_DATA_PATH)
eval_input_fn = lambda: input_fn(_TEST_DATA_PATH, num_epochs=1)
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn, max_steps=_NUM_TRAIN_STEPS)
eval_spec = tf.estimator.EvalSpec(
name="eval",
input_fn=eval_input_fn,
throttle_secs=15)
return (ranker, train_spec, eval_spec)
! rm -rf "/tmp/ranking_model_dir" # Clean up the model directory.
ranker, train_spec, eval_spec = train_and_eval_fn()
tf.estimator.train_and_evaluate(ranker, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Launch TensorBoard
###Code
%load_ext tensorboard
%tensorboard --logdir="/tmp/ranking_model_dir" --port 12345
###Output
_____no_output_____
###Markdown
Tutorial: TF-Ranking for sparse features

This tutorial is an end-to-end walkthrough of training a TensorFlow Ranking (TF-Ranking) neural network model which incorporates sparse textual features.

A Python script version of this code is available [here](https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/examples/tf_ranking_tfrecord.py). The script version supports flags for hyperparameters, and advanced use-cases like [Document Interaction Networks](https://arxiv.org/pdf/1910.09676.pdf).

TF-Ranking is a library for solving large-scale ranking problems using deep learning. TF-Ranking can handle heterogeneous dense and sparse features, and scales up to millions of data points. For more details, please read the technical paper published on [arXiv](https://arxiv.org/abs/1812.00073).

Motivation

Learning to Rank (LTR) deals with learning to optimally order a list of examples, given some context. For instance, in search applications, examples are documents and the context is the query. These models are usually trained using user relevance feedback, which can be explicit (human ratings) or implicit (clicks).

This tutorial demonstrates how to build ranking estimators over sparse features, such as textual data. Textual data is prevalent in several ranking settings and plays a significant role in relevance judgment by a user. In three different LTR scenarios, the following textual features provide useful signals for ranking:

* Search: queries and document titles
* Question Answering: questions and answers
* Recommendation: titles of items and their descriptions

Hence it is important for LTR models to effectively incorporate textual features.

Task: Ranking over Question-Answering data

ANTIQUE: A Question Answering Dataset

For the purpose of this tutorial, we consider a ranking problem over ANTIQUE, a question-answering dataset. Given a query and a list of answers, the objective is to maximize a rank-related metric (say NDCG).

[ANTIQUE](http://hamedz.ir/resources/) is a publicly available dataset for open-domain non-factoid question answering, collected over Yahoo! Answers. Each question has a list of answers, whose relevance is graded on a scale of 1-5. The list size can vary depending on the query, so we use a fixed "list size" of 50, where the list is either truncated or padded with dummy values.

This dataset is well suited to a learning-to-rank scenario. It is split into 2206 queries for training and 200 queries for testing. For more details, please read the technical paper on [arXiv](https://arxiv.org/pdf/1905.08957.pdf).

Download training, test data and vocabulary file.
###Code
!wget -O "/tmp/vocab.txt" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/vocab.txt"
!wget -O "/tmp/train.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/ELWC/train.tfrecords"
!wget -O "/tmp/test.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking//ELWC/test.tfrecords"
###Output
_____no_output_____
###Markdown
Next, we discuss data formats in more detail, and show how to generate and store dummy ranking data.

Data Formats

Data Formats for Ranking

For representing ranking data, [protobuffers](https://developers.google.com/protocol-buffers/) are extensible structures suitable for storing data in a serialized format, either locally or in a distributed manner.

Ranking usually consists of features corresponding to each of the examples being sorted. In addition, features related to the query, user or session are also useful for ranking. We refer to these as context features, as they are independent of the examples.

We use the popular [tf.Example](https://www.tensorflow.org/tutorials/load_data/tf_records) proto to represent the features for the context and for each of the examples. We use the **ExampleListWithContext** (ELWC) protobuffer to store the context as a tf.Example proto and the list of examples to be ranked as a list of tf.Example protos. The ExampleListWithContext protobuffer is defined [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/input.proto#L72).

Let us create some dummy data in ELWC format. We will use this dummy data to show what the proto looks like.
###Code
from google.protobuf import text_format
import tensorflow as tf
CONTEXT = text_format.Parse(
"""
features {
feature {
key: "query_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "question"] } }
}
}""", tf.train.Example())
EXAMPLES = [
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "answer"] } }
}
feature {
key: "relevance"
value { int64_list { value: 5 } }
}
}""", tf.train.Example()),
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["irrelevant", "data"] } }
}
feature {
key: "relevance"
value { int64_list { value: 1 } }
}
}""", tf.train.Example()),
]
try:
from tensorflow_serving.apis import input_pb2
except ImportError:
!pip install -q tensorflow-serving-api
from tensorflow_serving.apis import input_pb2
ELWC = input_pb2.ExampleListWithContext()
ELWC.context.CopyFrom(CONTEXT)
for example in EXAMPLES:
example_features = ELWC.examples.add()
example_features.CopyFrom(example)
print(ELWC)
###Output
_____no_output_____
###Markdown
Dependencies and Global Variables

Let us start by importing libraries that will be used throughout this Notebook. We also enable the "eager execution" mode for convenience and demonstration purposes.
###Code
import six
import os
import numpy as np
try:
import tensorflow as tf
except ImportError:
print('Installing TensorFlow. This will take a minute, ignore the warnings.')
!pip install -q tensorflow
import tensorflow as tf
try:
import tensorflow_ranking as tfr
except ImportError:
!pip install -q tensorflow_ranking
import tensorflow_ranking as tfr
tf.enable_eager_execution()
tf.executing_eagerly()
tf.set_random_seed(1234)
tf.logging.set_verbosity(tf.logging.INFO)
###Output
_____no_output_____
###Markdown
Here we define the train and test paths, along with model hyperparameters.
###Code
# Store the paths to files containing training and test instances.
_TRAIN_DATA_PATH = "/tmp/train.tfrecords"
_TEST_DATA_PATH = "/tmp/test.tfrecords"
# Store the vocabulary path for query and document tokens.
_VOCAB_PATH = "/tmp/vocab.txt"
# The maximum number of documents per query in the dataset.
# Document lists are padded or truncated to this size.
_LIST_SIZE = 50
# The document relevance label.
_LABEL_FEATURE = "relevance"
# Padding labels are set negative so that the corresponding examples can be
# ignored in loss and metrics.
_PADDING_LABEL = -1
# Learning rate for optimizer.
_LEARNING_RATE = 0.05
# Parameters to the scoring function.
_BATCH_SIZE = 32
_HIDDEN_LAYER_DIMS = ["64", "32", "16"]
_DROPOUT_RATE = 0.8
_GROUP_SIZE = 1 # Pointwise scoring.
# Location of model directory and number of training steps.
_MODEL_DIR = "/tmp/ranking_model_dir"
_NUM_TRAIN_STEPS = 15 * 1000
###Output
_____no_output_____
###Markdown
Components of a Ranking Estimator

The overall components of a Ranking Estimator are shown below. The key components of the library are:

1. Input Reader
2. Transform Function
3. Scoring Function
4. Ranking Losses
5. Ranking Metrics
6. Ranking Head
7. Model Builder

These are described in more detail in the following sections.

TensorFlow Ranking Architecture

Specifying Features via Feature Columns

[Feature Columns](https://www.tensorflow.org/guide/feature_columns) are TensorFlow abstractions that are used to capture rich information about each feature. They allow for easy transformations for a diverse range of raw features and for interfacing with Estimators.

Consistent with our input formats for ranking, such as the ELWC format, we create feature columns for context features and example features.
###Code
_EMBEDDING_DIMENSION = 20
def context_feature_columns():
"""Returns context feature names to column definitions."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="query_tokens",
vocabulary_file=_VOCAB_PATH)
query_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"query_tokens": query_embedding_column}
def example_feature_columns():
"""Returns the example feature columns."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="document_tokens",
vocabulary_file=_VOCAB_PATH)
document_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"document_tokens": document_embedding_column}
###Output
_____no_output_____
###Markdown
Reading Input Data using *input_fn*

The input reader reads in data from persistent storage to produce raw dense and sparse tensors of appropriate type for each feature. Example features are represented by 3-D tensors (where dimensions correspond to queries, examples and feature values). Context features are represented by 2-D tensors (where dimensions correspond to queries and feature values).
###Code
def input_fn(path, num_epochs=None):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
label_column = tf.feature_column.numeric_column(
_LABEL_FEATURE, dtype=tf.int64, default_value=_PADDING_LABEL)
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()) + [label_column])
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=num_epochs)
features = tf.data.make_one_shot_iterator(dataset).get_next()
label = tf.squeeze(features.pop(_LABEL_FEATURE), axis=2)
label = tf.cast(label, tf.float32)
return features, label
###Output
_____no_output_____
###Markdown
Feature Transformations with *transform_fn*

The transform function takes in the raw dense or sparse features from the input reader and applies suitable transformations to return dense representations for each feature. This is important before passing these features to a neural network, as neural network layers usually take dense features as inputs.

The transform function handles any custom feature transformations defined by the user. For handling sparse features, like text data, we provide an easy utility to create shared embeddings, based on the feature columns.
###Code
def make_transform_fn():
def _transform_fn(features, mode):
"""Defines transform_fn."""
context_features, example_features = tfr.feature.encode_listwise_features(
features=features,
context_feature_columns=context_feature_columns(),
example_feature_columns=example_feature_columns(),
mode=mode,
scope="transform_layer")
return context_features, example_features
return _transform_fn
###Output
_____no_output_____
###Markdown
Feature Interactions using *scoring_fn*

Next, we turn to the scoring function, which is arguably at the heart of a TF-Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function.

Here we formulate a scoring function using a feed forward network. The function takes the features of a single example (i.e., query-document pair) and produces a relevance score.
###Code
def make_score_fn():
"""Returns a scoring function to build `EstimatorSpec`."""
def _score_fn(context_features, group_features, mode, params, config):
"""Defines the network to score a group of documents."""
with tf.compat.v1.name_scope("input_layer"):
context_input = [
tf.compat.v1.layers.flatten(context_features[name])
for name in sorted(context_feature_columns())
]
group_input = [
tf.compat.v1.layers.flatten(group_features[name])
for name in sorted(example_feature_columns())
]
input_layer = tf.concat(context_input + group_input, 1)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
cur_layer = input_layer
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
for i, layer_width in enumerate(int(d) for d in _HIDDEN_LAYER_DIMS):
cur_layer = tf.compat.v1.layers.dense(cur_layer, units=layer_width)
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
cur_layer = tf.nn.relu(cur_layer)
cur_layer = tf.compat.v1.layers.dropout(
inputs=cur_layer, rate=_DROPOUT_RATE, training=is_training)
logits = tf.compat.v1.layers.dense(cur_layer, units=_GROUP_SIZE)
return logits
return _score_fn
###Output
_____no_output_____
###Markdown
Losses, Metrics and Ranking Head

Evaluation Metrics

We have provided an implementation of several popular Information Retrieval evaluation metrics in the TF Ranking library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/metrics.py#L32). The user can also define a custom evaluation metric, as shown in the docstring below.
###Code
def eval_metric_fns():
"""Returns a dict from name to metric functions.
This can be customized as follows. Care must be taken when handling padded
lists.
def _auc(labels, predictions, features):
    is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
    clean_labels = tf.boolean_mask(tf.reshape(labels, [-1, 1]), is_label_valid)
    clean_pred = tf.boolean_mask(tf.reshape(predictions, [-1, 1]), is_label_valid)
return tf.metrics.auc(clean_labels, tf.sigmoid(clean_pred), ...)
metric_fns["auc"] = _auc
Returns:
A dict mapping from metric name to a metric function with above signature.
"""
metric_fns = {}
metric_fns.update({
"metric/ndcg@%d" % topn: tfr.metrics.make_ranking_metric_fn(
tfr.metrics.RankingMetricKey.NDCG, topn=topn)
for topn in [1, 3, 5, 10]
})
return metric_fns
###Output
_____no_output_____
###Markdown
Ranking Losses

We provide several popular ranking loss functions as part of the library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/losses.py#L35). The user can also define a custom loss function, similar to the ones in tfr.losses.
###Code
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS = tfr.losses.RankingLossKey.APPROX_NDCG_LOSS
loss_fn = tfr.losses.make_loss_fn(_LOSS)
###Output
_____no_output_____
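###Markdown
Switching to a different ranking loss only requires changing the key above. As an illustration, the sketch below builds a pairwise alternative using another key from `tfr.losses.RankingLossKey`; it is not wired into the ranking head below, which keeps the ApproxNDCG loss.
###Code
# Illustrative alternative: a pairwise logistic loss instead of ApproxNDCG.
_PAIRWISE_LOSS = tfr.losses.RankingLossKey.PAIRWISE_LOGISTIC_LOSS
pairwise_loss_fn = tfr.losses.make_loss_fn(_PAIRWISE_LOSS)
###Output
_____no_output_____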
###Markdown
Ranking Head

In the Estimator workflow, Head is an abstraction that encapsulates losses and corresponding metrics. Head interfaces easily with the Estimator; the user only needs to define a scoring function and to specify the losses and metric computation.
###Code
optimizer = tf.compat.v1.train.AdagradOptimizer(
learning_rate=_LEARNING_RATE)
def _train_op_fn(loss):
"""Defines train op used in ranking head."""
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
minimize_op = optimizer.minimize(
loss=loss, global_step=tf.compat.v1.train.get_global_step())
train_op = tf.group([update_ops, minimize_op])
return train_op
ranking_head = tfr.head.create_ranking_head(
loss_fn=loss_fn,
eval_metric_fns=eval_metric_fns(),
train_op_fn=_train_op_fn)
###Output
_____no_output_____
###Markdown
Putting It All Together in a Model Builder

We are now ready to put all of the components above together and create an `Estimator` that can be used to train and evaluate a model.
###Code
model_fn = tfr.model.make_groupwise_ranking_fn(
group_score_fn=make_score_fn(),
transform_fn=make_transform_fn(),
group_size=_GROUP_SIZE,
ranking_head=ranking_head)
###Output
_____no_output_____
###Markdown
Train and evaluate the ranker
###Code
def train_and_eval_fn():
"""Train and eval function used by `tf.estimator.train_and_evaluate`."""
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=1000)
ranker = tf.estimator.Estimator(
model_fn=model_fn,
model_dir=_MODEL_DIR,
config=run_config)
train_input_fn = lambda: input_fn(_TRAIN_DATA_PATH)
eval_input_fn = lambda: input_fn(_TEST_DATA_PATH, num_epochs=1)
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn, max_steps=_NUM_TRAIN_STEPS)
eval_spec = tf.estimator.EvalSpec(
name="eval",
input_fn=eval_input_fn,
throttle_secs=15)
return (ranker, train_spec, eval_spec)
! rm -rf "/tmp/ranking_model_dir" # Clean up the model directory.
ranker, train_spec, eval_spec = train_and_eval_fn()
tf.estimator.train_and_evaluate(ranker, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Launch TensorBoard
###Code
%load_ext tensorboard
%tensorboard --logdir="/tmp/ranking_model_dir" --port 12345
###Output
_____no_output_____
###Markdown
A sample TensorBoard output is shown here, with the ranking metrics.

Generating Predictions

We show how to generate predictions over the features of a dataset. We assume that the label is not present and needs to be inferred using the ranking model.

Similar to the `input_fn` used for training and evaluation, `predict_input_fn` reads in data stored as TFRecords in ELWC format and generates features. We set the number of epochs to 1, so that the generator stops iterating when it reaches the end of the dataset. Also, the data points are not shuffled while reading, so that the behavior of the `predict()` function is deterministic.
###Code
def predict_input_fn(path):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()))
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=1)
features = tf.data.make_one_shot_iterator(dataset).get_next()
return features
###Output
_____no_output_____
###Markdown
We generate predictions on the test dataset, where we only consider context and example features and predict the labels. The `predict_input_fn` generates predictions on a batch of datapoints. Batching allows us to iterate over large datasets which cannot be loaded in memory.
###Code
predictions = ranker.predict(input_fn=lambda: predict_input_fn("/tmp/test.tfrecords"))
###Output
_____no_output_____
###Markdown
`ranker.predict` returns a generator, which we can iterate over to create predictions, till the generator is exhausted.
###Code
x = next(predictions)
assert(len(x) == _LIST_SIZE) ## Note that this includes padding.
###Output
_____no_output_____
###Markdown
Tutorial: TF-Ranking for sparse features

This tutorial is an end-to-end walkthrough of training a TensorFlow Ranking (TF-Ranking) neural network model which incorporates sparse textual features.

TF-Ranking is a library for solving large-scale ranking problems using deep learning. TF-Ranking can handle heterogeneous dense and sparse features, and scales up to millions of data points. For more details, please read the technical paper published on [arXiv](https://arxiv.org/abs/1812.00073).

Motivation

Learning to Rank (LTR) deals with learning to optimally order a list of examples, given some context. For instance, in search applications, examples are documents and the context is the query. These models are usually trained using user relevance feedback, which can be explicit (human ratings) or implicit (clicks).

This tutorial demonstrates how to build ranking estimators over sparse features, such as textual data. Textual data is prevalent in several ranking settings and plays a significant role in relevance judgment by a user. In three different LTR scenarios, the following textual features provide useful signals for ranking:

* Search: queries and document titles
* Question Answering: questions and answers
* Recommendation: titles of items and their descriptions

Hence it is important for LTR models to effectively incorporate textual features.

Task: Ranking over Question-Answering data

ANTIQUE: A Question Answering Dataset

For the purpose of this tutorial, we consider a ranking problem over ANTIQUE, a question-answering dataset. Given a query and a list of answers, the objective is to maximize a rank-related metric (say NDCG).

[ANTIQUE](http://hamedz.ir/resources/) is a publicly available dataset for open-domain non-factoid question answering, collected over Yahoo! Answers. Each question has a list of answers, whose relevance is graded on a scale of 1-5. The list size can vary depending on the query, so we use a fixed "list size" of 50, where the list is either truncated or padded with dummy values.

This dataset is well suited to a learning-to-rank scenario. It is split into 2206 queries for training and 200 queries for testing. For more details, please read the technical paper on [arXiv](https://arxiv.org/pdf/1905.08957.pdf).

Download training, test data and vocabulary file.
###Code
!wget -O "/tmp/vocab.txt" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/vocab.txt"
!wget -O "/tmp/train.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/train.tfrecords"
!wget -O "/tmp/test.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/test.tfrecords"
###Output
_____no_output_____
###Markdown
Next, we discuss data formats in more detail, and show how to generate and store dummy ranking data.

Data Formats

Data Formats for Ranking

For representing ranking data, [protobuffers](https://developers.google.com/protocol-buffers/) are extensible structures suitable for storing data in a serialized format, either locally or in a distributed manner.

Ranking usually consists of features corresponding to each of the examples being sorted. In addition, features related to the query, user or session are also useful for ranking. We refer to these as context features, as they are independent of the examples.

We use the popular [tf.Example](https://www.tensorflow.org/tutorials/load_data/tf_records) proto to represent the features for the context and for each of the examples. We create a new format for ranking data, **Example in Example** (EIE), to store the context as a serialized tf.Example proto and the list of examples to be ranked as a list of serialized tf.Example protos.

Let us create some dummy data in EIE format. We will use this dummy data to show what the proto looks like.
###Code
from google.protobuf import text_format
import tensorflow as tf
CONTEXT = text_format.Parse(
"""
features {
feature {
key: "query_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "question"] } }
}
}""", tf.train.Example())
print(CONTEXT)
EXAMPLES = [
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "answer"] } }
}
feature {
key: "relevance"
value { int64_list { value: 5 } }
}
}""", tf.train.Example()),
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["irrelevant", "data"] } }
}
feature {
key: "relevance"
value { int64_list { value: 1 } }
}
}""", tf.train.Example()),
]
print(EXAMPLES)
###Output
[features {
feature {
key: "document_tokens"
value {
bytes_list {
value: "this"
value: "is"
value: "a"
value: "relevant"
value: "answer"
}
}
}
feature {
key: "relevance"
value {
int64_list {
value: 5
}
}
}
}
, features {
feature {
key: "document_tokens"
value {
bytes_list {
value: "irrelevant"
value: "data"
}
}
}
feature {
key: "relevance"
value {
int64_list {
value: 1
}
}
}
}
]
###Markdown
Dependencies and Global Variables

Let us start by importing libraries that will be used throughout this Notebook. We also enable the "eager execution" mode for convenience and demonstration purposes.
###Code
import six
import os
import numpy as np
try:
import tensorflow as tf
except ImportError:
print('Installing TensorFlow. This will take a minute, ignore the warnings.')
!pip install -q tensorflow
import tensorflow as tf
try:
import tensorflow_ranking as tfr
except ImportError:
!pip install -q tensorflow_ranking
import tensorflow_ranking as tfr
tf.enable_eager_execution()
tf.executing_eagerly()
tf.set_random_seed(1234)
tf.logging.set_verbosity(tf.logging.INFO)
###Output
_____no_output_____
###Markdown
Here we define the train and test paths, along with model hyperparameters.
###Code
# Store the paths to files containing training and test instances.
_TRAIN_DATA_PATH = "/tmp/train.tfrecords"
_TEST_DATA_PATH = "/tmp/test.tfrecords"
# Store the vocabulary path for query and document tokens.
_VOCAB_PATH = "/tmp/vocab.txt"
# The maximum number of documents per query in the dataset.
# Document lists are padded or truncated to this size.
_LIST_SIZE = 50
# The document relevance label.
_LABEL_FEATURE = "relevance"
# Padding labels are set negative so that the corresponding examples can be
# ignored in loss and metrics.
_PADDING_LABEL = -1
# Learning rate for optimizer.
_LEARNING_RATE = 0.05
# Parameters to the scoring function.
_BATCH_SIZE = 32
_HIDDEN_LAYER_DIMS = ["64", "32", "16"]
_DROPOUT_RATE = 0.8
_GROUP_SIZE = 1 # Pointwise scoring.
# Location of model directory and number of training steps.
_MODEL_DIR = "/tmp/ranking_model_dir"
_NUM_TRAIN_STEPS = 15 * 1000
###Output
_____no_output_____
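###Markdown
Each record in these TFRecord files holds one query together with its list of answers, so the number of records equals the number of queries. The optional sketch below counts the records to confirm the train/test split sizes mentioned earlier; it assumes the downloads succeeded and relies on eager execution, which was enabled above.
###Code
# Count the serialized records (queries) in the training and test files (illustrative only).
for name, path in [("train", _TRAIN_DATA_PATH), ("test", _TEST_DATA_PATH)]:
  num_queries = sum(1 for _ in tf.data.TFRecordDataset(path))
  print(name, num_queries)
###Output
_____no_output_____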
###Markdown
Components of a Ranking Estimator

The overall components of a Ranking Estimator are shown below. The key components of the library are:

1. Input Reader
2. Transform Function
3. Scoring Function
4. Ranking Losses
5. Ranking Metrics
6. Ranking Head
7. Model Builder

These are described in more detail in the following sections.

TensorFlow Ranking Architecture

Specifying Features via Feature Columns

[Feature Columns](https://www.tensorflow.org/guide/feature_columns) are TensorFlow abstractions that are used to capture rich information about each feature. They allow for easy transformations for a diverse range of raw features and for interfacing with Estimators.

Consistent with our input formats for ranking, such as the EIE format, we create feature columns for context features and example features.
###Code
_EMBEDDING_DIMENSION = 20
def context_feature_columns():
"""Returns context feature names to column definitions."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="query_tokens",
vocabulary_file=_VOCAB_PATH)
query_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"query_tokens": query_embedding_column}
def example_feature_columns():
"""Returns the example feature columns."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="document_tokens",
vocabulary_file=_VOCAB_PATH)
document_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"document_tokens": document_embedding_column}
###Output
_____no_output_____
###Markdown
Reading Input Data using *input_fn*The input reader reads in data from persistent storage to produce raw dense and sparse tensors of appropriate type for each feature. Example features are represented by 3-D tensors (where dimensions correspond to queries, examples and feature values). Context features are represented by 2-D tensors (where dimensions correspond to queries and feature values).
###Code
def input_fn(path, num_epochs=None):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
label_column = tf.feature_column.numeric_column(
_LABEL_FEATURE, dtype=tf.int64, default_value=_PADDING_LABEL)
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()) + [label_column])
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.EIE,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=num_epochs)
features = tf.data.make_one_shot_iterator(dataset).get_next()
label = tf.squeeze(features.pop(_LABEL_FEATURE), axis=2)
label = tf.cast(label, tf.float32)
return features, label
###Output
_____no_output_____
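###Markdown
As a quick sanity check (a small sketch, assuming the ANTIQUE TFRecord files have already been downloaded to the /tmp paths above), we can count how many serialized lists the test file contains; with eager execution enabled the dataset can be iterated directly.
###Code
# Count the serialized query lists in the test file by iterating the raw records.
num_records = sum(1 for _ in tf.data.TFRecordDataset(_TEST_DATA_PATH))
print("Number of query lists in the test set:", num_records)
###Output
_____no_output_____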
###Markdown
Feature Transformations with *transform_fn*The transform function takes in the raw dense or sparse features from the input reader and applies suitable transformations to return dense representations for each feature. This is important before passing these features to a neural network, as neural network layers usually take dense features as inputs.The transform function handles any custom feature transformations defined by the user. For handling sparse features, like text data, we provide an easy utility to create shared embeddings, based on the feature columns.
###Code
def make_transform_fn():
def _transform_fn(features, mode):
"""Defines transform_fn."""
context_features, example_features = tfr.feature.encode_listwise_features(
features=features,
context_feature_columns=context_feature_columns(),
example_feature_columns=example_feature_columns(),
mode=mode,
scope="transform_layer")
return context_features, example_features
return _transform_fn
###Output
_____no_output_____
###Markdown
Feature Interactions using *scoring_fn*Next, we turn to the scoring function which is arguably at the heart of a TF Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function.Here we formulate a scoring function using a feed forward network. The function takes the features of a single example (i.e., query-document pair) and produces a relevance score.
###Code
def make_score_fn():
"""Returns a scoring function to build `EstimatorSpec`."""
def _score_fn(context_features, group_features, mode, params, config):
"""Defines the network to score a group of documents."""
with tf.compat.v1.name_scope("input_layer"):
context_input = [
tf.compat.v1.layers.flatten(context_features[name])
for name in sorted(context_feature_columns())
]
group_input = [
tf.compat.v1.layers.flatten(group_features[name])
for name in sorted(example_feature_columns())
]
input_layer = tf.concat(context_input + group_input, 1)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
cur_layer = input_layer
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
for i, layer_width in enumerate(int(d) for d in _HIDDEN_LAYER_DIMS):
cur_layer = tf.compat.v1.layers.dense(cur_layer, units=layer_width)
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
cur_layer = tf.nn.relu(cur_layer)
cur_layer = tf.compat.v1.layers.dropout(
inputs=cur_layer, rate=_DROPOUT_RATE, training=is_training)
logits = tf.compat.v1.layers.dense(cur_layer, units=_GROUP_SIZE)
return logits
return _score_fn
###Output
_____no_output_____
###Markdown
Losses, Metrics and Ranking Head Evaluation MetricsWe have provided an implementation of several popular Information Retrieval evaluation metrics in the TF Ranking library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/metrics.pyL32). The user can also define a custom evaluation metric, as shown in the description below.
###Code
def eval_metric_fns():
"""Returns a dict from name to metric functions.
This can be customized as follows. Care must be taken when handling padded
lists.
def _auc(labels, predictions, features):
    is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
    clean_labels = tf.boolean_mask(tf.reshape(labels, [-1, 1]), is_label_valid)
    clean_pred = tf.boolean_mask(tf.reshape(predictions, [-1, 1]), is_label_valid)
return tf.metrics.auc(clean_labels, tf.sigmoid(clean_pred), ...)
metric_fns["auc"] = _auc
Returns:
A dict mapping from metric name to a metric function with above signature.
"""
metric_fns = {}
metric_fns.update({
"metric/ndcg@%d" % topn: tfr.metrics.make_ranking_metric_fn(
tfr.metrics.RankingMetricKey.NDCG, topn=topn)
for topn in [1, 3, 5, 10]
})
return metric_fns
###Output
_____no_output_____
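###Markdown
The dictionary above only registers NDCG at several cutoffs. As a hedged illustration, other built-in metrics such as MRR can be added in exactly the same way (assuming the metric key is available in the installed tensorflow_ranking version).
###Code
# Illustrative only: extra metrics can be merged into the dict returned by
# eval_metric_fns(). MRR is one of the built-in RankingMetricKey values.
extra_metric_fns = {
    "metric/mrr": tfr.metrics.make_ranking_metric_fn(
        tfr.metrics.RankingMetricKey.MRR)
}
###Output
_____no_output_____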
###Markdown
Ranking LossesWe provide several popular ranking loss functions as part of the library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/losses.pyL35). The user can also define a custom loss function, similar to ones in tfr.losses.
###Code
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS = tfr.losses.RankingLossKey.APPROX_NDCG_LOSS
loss_fn = tfr.losses.make_loss_fn(_LOSS)
###Output
_____no_output_____
###Markdown
Ranking HeadIn the Estimator workflow, Head is an abstraction that encapsulates losses and corresponding metrics. Head easily interfaces with the Estimator, requiring the user only to define a scoring function and to specify losses and metric computation.
###Code
optimizer = tf.compat.v1.train.AdagradOptimizer(
learning_rate=_LEARNING_RATE)
def _train_op_fn(loss):
"""Defines train op used in ranking head."""
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
minimize_op = optimizer.minimize(
loss=loss, global_step=tf.compat.v1.train.get_global_step())
train_op = tf.group([update_ops, minimize_op])
return train_op
ranking_head = tfr.head.create_ranking_head(
loss_fn=loss_fn,
eval_metric_fns=eval_metric_fns(),
train_op_fn=_train_op_fn)
###Output
_____no_output_____
###Markdown
Putting It All Together in a Model BuilderWe are now ready to put all of the components above together and create an `Estimator` that can be used to train and evaluate a model.
###Code
model_fn = tfr.model.make_groupwise_ranking_fn(
group_score_fn=make_score_fn(),
transform_fn=make_transform_fn(),
group_size=_GROUP_SIZE,
ranking_head=ranking_head)
###Output
_____no_output_____
###Markdown
Train and evaluate the ranker
###Code
def train_and_eval_fn():
"""Train and eval function used by `tf.estimator.train_and_evaluate`."""
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=1000)
ranker = tf.estimator.Estimator(
model_fn=model_fn,
model_dir=_MODEL_DIR,
config=run_config)
train_input_fn = lambda: input_fn(_TRAIN_DATA_PATH)
eval_input_fn = lambda: input_fn(_TEST_DATA_PATH, num_epochs=1)
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn, max_steps=_NUM_TRAIN_STEPS)
eval_spec = tf.estimator.EvalSpec(
name="eval",
input_fn=eval_input_fn,
throttle_secs=15)
return (ranker, train_spec, eval_spec)
! rm -rf "/tmp/ranking_model_dir" # Clean up the model directory.
ranker, train_spec, eval_spec = train_and_eval_fn()
tf.estimator.train_and_evaluate(ranker, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Launch TensorBoard
###Code
%load_ext tensorboard
%tensorboard --logdir="/tmp/ranking_model_dir" --port 12345
###Output
_____no_output_____
###Markdown
A sample tensorboard output is shown here, with the ranking metrics. Generating Predictions We show how to generate predictions over the features of a dataset. We assume that the label is not present and needs to be inferred using the ranking model.Similar to the `input_fn` used for training and evaluation, `predict_input_fn` reads in data in EIE format, stored as TFRecords, to generate features. We set the number of epochs to 1, so that the generator stops iterating when it reaches the end of the dataset. The datapoints are also not shuffled while reading, so that the behavior of the `predict()` function is deterministic.
###Code
def predict_input_fn(path):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()))
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.EIE,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=1)
features = tf.data.make_one_shot_iterator(dataset).get_next()
return features
###Output
_____no_output_____
###Markdown
We generate predictions on the test dataset, where we only consider context and example features and predict the labels. The `predict_input_fn` produces features for a batch of datapoints at a time. Batching allows us to iterate over large datasets that cannot be loaded in memory.
###Code
predictions = ranker.predict(input_fn=lambda: predict_input_fn("/tmp/test.tfrecords"))
###Output
_____no_output_____
###Markdown
`ranker.predict` returns a generator, which we can iterate over to create predictions, until the generator is exhausted.
###Code
x = next(predictions)
assert(len(x) == _LIST_SIZE) ## Note that this includes padding.
###Output
_____no_output_____
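###Markdown
For example, the scores of the first list can be turned into a document ordering (a small sketch; padded slots are still present and would normally be dropped downstream).
###Code
# Sort document indices of the first query by descending score.
order = np.argsort(-np.asarray(x))
print(order[:10])
###Output
_____no_output_____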
###Markdown
Tutorial: TF-Ranking for sparse features This tutorial is an end-to-end walkthrough of training a TensorFlow Ranking (TF-Ranking) neural network model which incorporates sparse textual features.A Python script version of this code is available [here](https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/examples/tf_ranking_tfrecord.py). The script version supports flags for hyperparameters, and advanced use-cases like [Document Interaction Networks](https://arxiv.org/pdf/1910.09676.pdf).TF-Ranking is a library for solving large scale ranking problems using deep learning. TF-Ranking can handle heterogeneous dense and sparse features, and scales up to millions of data points. For more details, please read the technical paper published on [arXiv](https://arxiv.org/abs/1812.00073). Run in Google Colab View source on GitHub MotivationLearning to Rank (LTR) deals with learning to optimally order a list of examples, given some context. For instance, in search applications, examples are documents and context is the query.These models are usually trained using user relevance feedback, which can be explicit (human ratings) or implicit (clicks).This tutorial demonstrates how to build ranking estimators over sparse features, such as textual data. Textual data is prevalent in several settings for ranking, and plays a significant role in relevance judgment by a user.In three different LTR scenarios, the following textual features provide useful signals for ranking:* Search: queries and document titles* Question Answering: questions and answers* Recommendation: titles of items and their descriptionsHence it is important for LTR models to effectively incorporate textual features. Task: Ranking over Question-Answering data ANTIQUE: A Question Answering DatasetFor the purpose of this tutorial, we consider a ranking problem over ANTIQUE, a question-answering dataset. Given a query and a list of answers, the objective is to maximize a rank-related metric (say, NDCG).[ANTIQUE](http://hamedz.ir/resources/) is a publicly available dataset for open-domain non-factoid question answering, collected over Yahoo! Answers.Each question has a list of answers, whose relevance is graded on a scale of 1-5.The list size can vary depending on the query, so we use a fixed "list size" of 50, where the list is either truncated or padded with dummy values.This dataset is well suited to the learning-to-rank scenario. The dataset is split into 2206 queries for training and 200 queries for testing. For more details, please read the technical paper on [arXiv](https://arxiv.org/pdf/1905.08957.pdf). Download the training data, test data and vocabulary file.
###Code
!wget -O "/tmp/vocab.txt" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/vocab.txt"
!wget -O "/tmp/train.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/ELWC/train.tfrecords"
!wget -O "/tmp/test.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking//ELWC/test.tfrecords"
###Output
_____no_output_____
###Markdown
Next, we discuss data formats in more detail, and show how to generate and store dummy ranking data. Data Formats Data Formats for RankingFor representing ranking data, [protobuffers](https://developers.google.com/protocol-buffers/) are extensible structures suitable for storing data in a serialized format, either locally or in a distributed manner.Ranking usually consists of features corresponding to each of the examples being sorted. In addition, features related to query, user or session are also useful for ranking. We refer to these as context features, as these are independent of the examples.We use the popular [tf.Example](https://www.tensorflow.org/tutorials/load_data/tf_records) proto to represent the features for the context and for each of the examples. We use the protobuffer, **ExampleListWithContext** (ELWC), to store the context as a tf.Example proto and the list of examples to be ranked as a list of tf.Example protos.The ExampleListWithContext protobuffer is defined [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/input.protoL72). Let us create some dummy data in ELWC format. We will use this dummy data to show what the proto looks like. Download and install the TensorFlow 2 package.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
try:
import tensorflow as tf
except ImportError:
print('Installing TensorFlow 2.0. This will take a minute, ignore the warnings.')
!pip install -q tensorflow>=2.0.0
import tensorflow as tf
from google.protobuf import text_format
CONTEXT = text_format.Parse(
"""
features {
feature {
key: "query_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "question"] } }
}
}""", tf.train.Example())
EXAMPLES = [
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "answer"] } }
}
feature {
key: "relevance"
value { int64_list { value: 5 } }
}
}""", tf.train.Example()),
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["irrelevant", "data"] } }
}
feature {
key: "relevance"
value { int64_list { value: 1 } }
}
}""", tf.train.Example()),
]
try:
from tensorflow_serving.apis import input_pb2
except ImportError:
!pip install -q tensorflow-serving-api
from tensorflow_serving.apis import input_pb2
ELWC = input_pb2.ExampleListWithContext()
ELWC.context.CopyFrom(CONTEXT)
for example in EXAMPLES:
example_features = ELWC.examples.add()
example_features.CopyFrom(example)
print(ELWC)
###Output
_____no_output_____
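###Markdown
To persist ELWC protos, each message is serialized and written to a TFRecord file, which is the same on-disk format as the ANTIQUE data used in this tutorial. A minimal sketch follows; the output path is only an illustration.
###Code
# Write the dummy ELWC proto to a TFRecord file. Real datasets simply contain
# one serialized ELWC message per query.
with tf.io.TFRecordWriter("/tmp/dummy_elwc.tfrecords") as writer:
    writer.write(ELWC.SerializeToString())
###Output
_____no_output_____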
###Markdown
Dependencies and Global VariablesLet us start by importing libraries that will be used throughout this Notebook. Note that with TensorFlow 2.x the "eager execution" mode is enabled by default, which is convenient for demonstration purposes.
###Code
import six
import os
import numpy as np
try:
import tensorflow_ranking as tfr
except ImportError:
!pip install -q tensorflow_ranking
import tensorflow_ranking as tfr
###Output
_____no_output_____
###Markdown
Here we define the train and test paths, along with model hyperparameters.
###Code
# Store the paths to files containing training and test instances.
_TRAIN_DATA_PATH = "/tmp/train.tfrecords"
_TEST_DATA_PATH = "/tmp/test.tfrecords"
# Store the vocabulary path for query and document tokens.
_VOCAB_PATH = "/tmp/vocab.txt"
# The maximum number of documents per query in the dataset.
# Document lists are padded or truncated to this size.
_LIST_SIZE = 50
# The document relevance label.
_LABEL_FEATURE = "relevance"
# Padding labels are set negative so that the corresponding examples can be
# ignored in loss and metrics.
_PADDING_LABEL = -1
# Learning rate for optimizer.
_LEARNING_RATE = 0.05
# Parameters to the scoring function.
_BATCH_SIZE = 32
_HIDDEN_LAYER_DIMS = ["64", "32", "16"]
_DROPOUT_RATE = 0.8
_GROUP_SIZE = 1 # Pointwise scoring.
# Location of model directory and number of training steps.
_MODEL_DIR = "/tmp/ranking_model_dir"
_NUM_TRAIN_STEPS = 15 * 1000
###Output
_____no_output_____
###Markdown
Components of a Ranking Estimator The overall components of a Ranking Estimator are shown below.The key components of the library are:1. Input Reader2. Transform Function3. Scoring Function4. Ranking Losses5. Ranking Metrics6. Ranking Head7. Model BuilderThese are described in more detail in the following sections. TensorFlow Ranking Architecture Specifying Features via Feature Columns[Feature Columns](https://www.tensorflow.org/guide/feature_columns) are TensorFlow abstractions that are used to capture rich information about each feature. They allow for easy transformation of a diverse range of raw features and for interfacing with Estimators.Consistent with our input formats for ranking, such as the ELWC format, we create feature columns for context features and example features.
###Code
_EMBEDDING_DIMENSION = 20
def context_feature_columns():
"""Returns context feature names to column definitions."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="query_tokens",
vocabulary_file=_VOCAB_PATH)
query_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"query_tokens": query_embedding_column}
def example_feature_columns():
"""Returns the example feature columns."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="document_tokens",
vocabulary_file=_VOCAB_PATH)
document_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"document_tokens": document_embedding_column}
###Output
_____no_output_____
###Markdown
Reading Input Data using *input_fn*The input reader reads in data from persistent storage to produce raw dense and sparse tensors of appropriate type for each feature. Example features are represented by 3-D tensors (where dimensions correspond to queries, examples and feature values). Context features are represented by 2-D tensors (where dimensions correspond to queries and feature values).
###Code
def input_fn(path, num_epochs=None):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
label_column = tf.feature_column.numeric_column(
_LABEL_FEATURE, dtype=tf.int64, default_value=_PADDING_LABEL)
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()) + [label_column])
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=num_epochs)
features = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
label = tf.squeeze(features.pop(_LABEL_FEATURE), axis=2)
label = tf.cast(label, tf.float32)
return features, label
###Output
_____no_output_____
###Markdown
Feature Transformations with *transform_fn*The transform function takes in the raw dense or sparse features from the input reader and applies suitable transformations to return dense representations for each feature. This is important before passing these features to a neural network, as neural network layers usually take dense features as inputs.The transform function handles any custom feature transformations defined by the user. For handling sparse features, like text data, we provide an easy utility to create shared embeddings, based on the feature columns.
###Code
def make_transform_fn():
def _transform_fn(features, mode):
"""Defines transform_fn."""
context_features, example_features = tfr.feature.encode_listwise_features(
features=features,
context_feature_columns=context_feature_columns(),
example_feature_columns=example_feature_columns(),
mode=mode,
scope="transform_layer")
return context_features, example_features
return _transform_fn
###Output
_____no_output_____
###Markdown
Feature Interactions using *scoring_fn*Next, we turn to the scoring function which is arguably at the heart of a TF Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function.Here we formulate a scoring function using a feed forward network. The function takes the features of a single example (i.e., query-document pair) and produces a relevance score.
###Code
def make_score_fn():
"""Returns a scoring function to build `EstimatorSpec`."""
def _score_fn(context_features, group_features, mode, params, config):
"""Defines the network to score a group of documents."""
with tf.compat.v1.name_scope("input_layer"):
context_input = [
tf.compat.v1.layers.flatten(context_features[name])
for name in sorted(context_feature_columns())
]
group_input = [
tf.compat.v1.layers.flatten(group_features[name])
for name in sorted(example_feature_columns())
]
input_layer = tf.concat(context_input + group_input, 1)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
cur_layer = input_layer
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
for i, layer_width in enumerate(int(d) for d in _HIDDEN_LAYER_DIMS):
cur_layer = tf.compat.v1.layers.dense(cur_layer, units=layer_width)
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
cur_layer = tf.nn.relu(cur_layer)
cur_layer = tf.compat.v1.layers.dropout(
inputs=cur_layer, rate=_DROPOUT_RATE, training=is_training)
logits = tf.compat.v1.layers.dense(cur_layer, units=_GROUP_SIZE)
return logits
return _score_fn
###Output
_____no_output_____
###Markdown
Losses, Metrics and Ranking Head Evaluation MetricsWe have provided an implementation of several popular Information Retrieval evaluation metrics in the TF Ranking library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/metrics.pyL32). The user can also define a custom evaluation metric, as shown in the description below.
###Code
def eval_metric_fns():
"""Returns a dict from name to metric functions.
This can be customized as follows. Care must be taken when handling padded
lists.
def _auc(labels, predictions, features):
    is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
    clean_labels = tf.boolean_mask(tf.reshape(labels, [-1, 1]), is_label_valid)
    clean_pred = tf.boolean_mask(tf.reshape(predictions, [-1, 1]), is_label_valid)
return tf.metrics.auc(clean_labels, tf.sigmoid(clean_pred), ...)
metric_fns["auc"] = _auc
Returns:
A dict mapping from metric name to a metric function with above signature.
"""
metric_fns = {}
metric_fns.update({
"metric/ndcg@%d" % topn: tfr.metrics.make_ranking_metric_fn(
tfr.metrics.RankingMetricKey.NDCG, topn=topn)
for topn in [1, 3, 5, 10]
})
return metric_fns
###Output
_____no_output_____
###Markdown
Ranking LossesWe provide several popular ranking loss functions as part of the library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/losses.pyL35). The user can also define a custom loss function, similar to ones in tfr.losses.
###Code
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS = tfr.losses.RankingLossKey.APPROX_NDCG_LOSS
loss_fn = tfr.losses.make_loss_fn(_LOSS)
###Output
_____no_output_____
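###Markdown
As a hedged aside, `make_loss_fn` also accepts a list of loss keys (optionally with `loss_weights`) to optimize a weighted combination of losses; the keys below are standard `RankingLossKey` entries and the weights are arbitrary, shown purely as an illustration.
###Code
# Sketch: optimize a weighted mix of a pairwise and a listwise loss.
combined_loss_fn = tfr.losses.make_loss_fn(
    [tfr.losses.RankingLossKey.PAIRWISE_LOGISTIC_LOSS,
     tfr.losses.RankingLossKey.SOFTMAX_LOSS],
    loss_weights=[1.0, 0.5])
###Output
_____no_output_____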
###Markdown
Ranking HeadIn the Estimator workflow, Head is an abstraction that encapsulates losses and corresponding metrics. Head easily interfaces with the Estimator, requiring the user only to define a scoring function and to specify losses and metric computation.
###Code
optimizer = tf.compat.v1.train.AdagradOptimizer(
learning_rate=_LEARNING_RATE)
def _train_op_fn(loss):
"""Defines train op used in ranking head."""
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
minimize_op = optimizer.minimize(
loss=loss, global_step=tf.compat.v1.train.get_global_step())
train_op = tf.group([update_ops, minimize_op])
return train_op
ranking_head = tfr.head.create_ranking_head(
loss_fn=loss_fn,
eval_metric_fns=eval_metric_fns(),
train_op_fn=_train_op_fn)
###Output
_____no_output_____
###Markdown
Putting It All Together in a Model BuilderWe are now ready to put all of the components above together and create an `Estimator` that can be used to train and evaluate a model.
###Code
model_fn = tfr.model.make_groupwise_ranking_fn(
group_score_fn=make_score_fn(),
transform_fn=make_transform_fn(),
group_size=_GROUP_SIZE,
ranking_head=ranking_head)
###Output
_____no_output_____
###Markdown
Train and evaluate the ranker
###Code
def train_and_eval_fn():
"""Train and eval function used by `tf.estimator.train_and_evaluate`."""
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=1000)
ranker = tf.estimator.Estimator(
model_fn=model_fn,
model_dir=_MODEL_DIR,
config=run_config)
train_input_fn = lambda: input_fn(_TRAIN_DATA_PATH)
eval_input_fn = lambda: input_fn(_TEST_DATA_PATH, num_epochs=1)
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn, max_steps=_NUM_TRAIN_STEPS)
eval_spec = tf.estimator.EvalSpec(
name="eval",
input_fn=eval_input_fn,
throttle_secs=15)
return (ranker, train_spec, eval_spec)
! rm -rf "/tmp/ranking_model_dir" # Clean up the model directory.
ranker, train_spec, eval_spec = train_and_eval_fn()
tf.estimator.train_and_evaluate(ranker, train_spec, eval_spec)
###Output
_____no_output_____
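###Markdown
The Estimator writes checkpoints under `_MODEL_DIR`; as a small sketch, the most recent checkpoint can be inspected, which is useful later for warm-starting or exporting the model.
###Code
# Path of the most recent checkpoint produced during training.
print(ranker.latest_checkpoint())
###Output
_____no_output_____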
###Markdown
Launch TensorBoard
###Code
%load_ext tensorboard
%tensorboard --logdir="/tmp/ranking_model_dir" --port 12345
###Output
_____no_output_____
###Markdown
A sample tensorboard output is shown here, with the ranking metrics. Generating Predictions We show how to generate predictions over the features of a dataset. We assume that the label is not present and needs to be inferred using the ranking model.Similar to the `input_fn` used for training and evaluation, `predict_input_fn` reads in data in ELWC format, stored as TFRecords, to generate features. We set the number of epochs to 1, so that the generator stops iterating when it reaches the end of the dataset. The datapoints are also not shuffled while reading, so that the behavior of the `predict()` function is deterministic.
###Code
def predict_input_fn(path):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()))
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=1)
features = tf.data.make_one_shot_iterator(dataset).get_next()
return features
###Output
_____no_output_____
###Markdown
We generate predictions on the test dataset, where we only consider context and example features and predict the labels. The `predict_input_fn` produces features for a batch of datapoints at a time. Batching allows us to iterate over large datasets that cannot be loaded in memory.
###Code
predictions = ranker.predict(input_fn=lambda: predict_input_fn("/tmp/test.tfrecords"))
###Output
_____no_output_____
###Markdown
`ranker.predict` returns a generator, which we can iterate over to create predictions, until the generator is exhausted.
###Code
x = next(predictions)
assert(len(x) == _LIST_SIZE) ## Note that this includes padding.
###Output
_____no_output_____
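###Markdown
The remaining items of the generator can be consumed in a single pass; a small sketch that counts how many more query lists were scored.
###Code
# Exhaust the generator (the first list was already consumed above).
remaining = sum(1 for _ in predictions)
print("Remaining scored lists:", remaining)
###Output
_____no_output_____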
###Markdown
Tutorial: TF-Ranking for sparse features This tutorial is an end-to-end walkthrough of training a TensorFlow Ranking (TF-Ranking) neural network model which incorporates sparse textual features.A Python script version of this code is available [here](https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/examples/tf_ranking_tfrecord.py). The script version supports flags for hyperparameters, and advanced use-cases like [Document Interaction Networks](https://arxiv.org/pdf/1910.09676.pdf).TF-Ranking is a library for solving large scale ranking problems using deep learning. TF-Ranking can handle heterogeneous dense and sparse features, and scales up to millions of data points. For more details, please read the technical paper published on [arXiv](https://arxiv.org/abs/1812.00073). Run in Google Colab View source on GitHub MotivationLearning to Rank (LTR) deals with learning to optimally order a list of examples, given some context. For instance, in search applications, examples are documents and context is the query.These models are usually trained using user relevance feedback, which can be explicit (human ratings) or implicit (clicks).This tutorial demonstrates how to build ranking estimators over sparse features, such as textual data. Textual data is prevalent in several settings for ranking, and plays a significant role in relevance judgment by a user.In three different LTR scenarios, the following textual features provide useful signals for ranking:* Search: queries and document titles* Question Answering: questions and answers* Recommendation: titles of items and their descriptionsHence it is important for LTR models to effectively incorporate textual features. Task: Ranking over Question-Answering data ANTIQUE: A Question Answering DatasetFor the purpose of this tutorial, we consider a ranking problem over ANTIQUE, a question-answering dataset. Given a query and a list of answers, the objective is to maximize a rank-related metric (say, NDCG).[ANTIQUE](http://hamedz.ir/resources/) is a publicly available dataset for open-domain non-factoid question answering, collected over Yahoo! Answers.Each question has a list of answers, whose relevance is graded on a scale of 1-5.The list size can vary depending on the query, so we use a fixed "list size" of 50, where the list is either truncated or padded with dummy values.This dataset is well suited to the learning-to-rank scenario. The dataset is split into 2206 queries for training and 200 queries for testing. For more details, please read the technical paper on [arXiv](https://arxiv.org/pdf/1905.08957.pdf). Download the training data, test data and vocabulary file.
###Code
!wget -O "/tmp/vocab.txt" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/vocab.txt"
!wget -O "/tmp/train.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking/ELWC/train.tfrecords"
!wget -O "/tmp/test.tfrecords" "http://ciir.cs.umass.edu/downloads/Antique/tf-ranking//ELWC/test.tfrecords"
###Output
_____no_output_____
###Markdown
Next, we discuss data formats in more detail, and show how to generate and store dummy ranking data. Data Formats Data Formats for RankingFor representing ranking data, [protobuffers](https://developers.google.com/protocol-buffers/) are extensible structures suitable for storing data in a serialized format, either locally or in a distributed manner.Ranking usually consists of features corresponding to each of the examples being sorted. In addition, features related to query, user or session are also useful for ranking. We refer to these as context features, as these are independent of the examples.We use the popular [tf.Example](https://www.tensorflow.org/tutorials/load_data/tf_records) proto to represent the features for the context and for each of the examples. We use the protobuffer, **ExampleListWithContext** (ELWC), to store the context as a tf.Example proto and the list of examples to be ranked as a list of tf.Example protos.The ExampleListWithContext protobuffer is defined [here](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/input.protoL72). Let us create some dummy data in ELWC format. We will use this dummy data to show what the proto looks like. Download and install the TensorFlow 2 package.
###Code
print('Installing TensorFlow 2.1.0. This will take a minute, ignore the warnings.')
!pip install -q tensorflow==2.1.0
import tensorflow as tf
# This is needed for tensorboard compatibility.
!pip uninstall -y grpcio
!pip install -q grpcio>=1.24.3
from google.protobuf import text_format
CONTEXT = text_format.Parse(
"""
features {
feature {
key: "query_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "question"] } }
}
}""", tf.train.Example())
EXAMPLES = [
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["this", "is", "a", "relevant", "answer"] } }
}
feature {
key: "relevance"
value { int64_list { value: 5 } }
}
}""", tf.train.Example()),
text_format.Parse(
"""
features {
feature {
key: "document_tokens"
value { bytes_list { value: ["irrelevant", "data"] } }
}
feature {
key: "relevance"
value { int64_list { value: 1 } }
}
}""", tf.train.Example()),
]
try:
from tensorflow_serving.apis import input_pb2
except ImportError:
!pip install -q tensorflow-serving-api
from tensorflow_serving.apis import input_pb2
ELWC = input_pb2.ExampleListWithContext()
ELWC.context.CopyFrom(CONTEXT)
for example in EXAMPLES:
example_features = ELWC.examples.add()
example_features.CopyFrom(example)
print(ELWC)
###Output
_____no_output_____
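###Markdown
Each record stored on disk is just the serialized form of such a proto; as a quick sketch, the message can be round-tripped through its binary representation.
###Code
# Serialize the ELWC proto and parse it back, as happens implicitly when
# reading TFRecords.
serialized = ELWC.SerializeToString()
parsed = input_pb2.ExampleListWithContext.FromString(serialized)
assert parsed == ELWC
print(len(serialized), "bytes")
###Output
_____no_output_____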
###Markdown
Dependencies and Global VariablesLet us start by importing libraries that will be used throughout this Notebook. Note that with TensorFlow 2.x the "eager execution" mode is enabled by default, which is convenient for demonstration purposes.
###Code
import six
import os
import numpy as np
try:
import tensorflow_ranking as tfr
except ImportError:
!pip install -q tensorflow_ranking
import tensorflow_ranking as tfr
###Output
_____no_output_____
###Markdown
Here we define the train and test paths, along with model hyperparameters.
###Code
# Store the paths to files containing training and test instances.
_TRAIN_DATA_PATH = "/tmp/train.tfrecords"
_TEST_DATA_PATH = "/tmp/test.tfrecords"
# Store the vocabulary path for query and document tokens.
_VOCAB_PATH = "/tmp/vocab.txt"
# The maximum number of documents per query in the dataset.
# Document lists are padded or truncated to this size.
_LIST_SIZE = 50
# The document relevance label.
_LABEL_FEATURE = "relevance"
# Padding labels are set negative so that the corresponding examples can be
# ignored in loss and metrics.
_PADDING_LABEL = -1
# Learning rate for optimizer.
_LEARNING_RATE = 0.05
# Parameters to the scoring function.
_BATCH_SIZE = 32
_HIDDEN_LAYER_DIMS = ["64", "32", "16"]
_DROPOUT_RATE = 0.8
_GROUP_SIZE = 1 # Pointwise scoring.
# Location of model directory and number of training steps.
_MODEL_DIR = "/tmp/ranking_model_dir"
_NUM_TRAIN_STEPS = 15 * 1000
###Output
_____no_output_____
###Markdown
Components of a Ranking Estimator The overall components of a Ranking Estimator are shown below.The key components of the library are:1. Input Reader2. Transform Function3. Scoring Function4. Ranking Losses5. Ranking Metrics6. Ranking Head7. Model BuilderThese are described in more detail in the following sections. TensorFlow Ranking Architecture Specifying Features via Feature Columns[Feature Columns](https://www.tensorflow.org/guide/feature_columns) are TensorFlow abstractions that are used to capture rich information about each feature. They allow for easy transformation of a diverse range of raw features and for interfacing with Estimators.Consistent with our input formats for ranking, such as the ELWC format, we create feature columns for context features and example features.
###Code
_EMBEDDING_DIMENSION = 20
def context_feature_columns():
"""Returns context feature names to column definitions."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="query_tokens",
vocabulary_file=_VOCAB_PATH)
query_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"query_tokens": query_embedding_column}
def example_feature_columns():
"""Returns the example feature columns."""
sparse_column = tf.feature_column.categorical_column_with_vocabulary_file(
key="document_tokens",
vocabulary_file=_VOCAB_PATH)
document_embedding_column = tf.feature_column.embedding_column(
sparse_column, _EMBEDDING_DIMENSION)
return {"document_tokens": document_embedding_column}
###Output
_____no_output_____
###Markdown
Reading Input Data using *input_fn*The input reader reads in data from persistent storage to produce raw dense and sparse tensors of appropriate type for each feature. Example features are represented by 3-D tensors (where dimensions correspond to queries, examples and feature values). Context features are represented by 2-D tensors (where dimensions correspond to queries and feature values).
###Code
def input_fn(path, num_epochs=None):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
label_column = tf.feature_column.numeric_column(
_LABEL_FEATURE, dtype=tf.int64, default_value=_PADDING_LABEL)
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()) + [label_column])
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=num_epochs)
features = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
label = tf.squeeze(features.pop(_LABEL_FEATURE), axis=2)
label = tf.cast(label, tf.float32)
return features, label
###Output
_____no_output_____
###Markdown
Feature Transformations with *transform_fn*The transform function takes in the raw dense or sparse features from the input reader and applies suitable transformations to return dense representations for each feature. This is important before passing these features to a neural network, as neural network layers usually take dense features as inputs.The transform function handles any custom feature transformations defined by the user. For handling sparse features, like text data, we provide an easy utility to create shared embeddings, based on the feature columns.
###Code
def make_transform_fn():
def _transform_fn(features, mode):
"""Defines transform_fn."""
context_features, example_features = tfr.feature.encode_listwise_features(
features=features,
context_feature_columns=context_feature_columns(),
example_feature_columns=example_feature_columns(),
mode=mode,
scope="transform_layer")
return context_features, example_features
return _transform_fn
###Output
_____no_output_____
###Markdown
Feature Interactions using *scoring_fn*Next, we turn to the scoring function which is arguably at the heart of a TF Ranking model. The idea is to compute a relevance score for a (set of) query-document pair(s). The TF-Ranking model will use training data to learn this function.Here we formulate a scoring function using a feed forward network. The function takes the features of a single example (i.e., query-document pair) and produces a relevance score.
###Code
def make_score_fn():
"""Returns a scoring function to build `EstimatorSpec`."""
def _score_fn(context_features, group_features, mode, params, config):
"""Defines the network to score a group of documents."""
with tf.compat.v1.name_scope("input_layer"):
context_input = [
tf.compat.v1.layers.flatten(context_features[name])
for name in sorted(context_feature_columns())
]
group_input = [
tf.compat.v1.layers.flatten(group_features[name])
for name in sorted(example_feature_columns())
]
input_layer = tf.concat(context_input + group_input, 1)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
cur_layer = input_layer
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
for i, layer_width in enumerate(int(d) for d in _HIDDEN_LAYER_DIMS):
cur_layer = tf.compat.v1.layers.dense(cur_layer, units=layer_width)
cur_layer = tf.compat.v1.layers.batch_normalization(
cur_layer,
training=is_training,
momentum=0.99)
cur_layer = tf.nn.relu(cur_layer)
cur_layer = tf.compat.v1.layers.dropout(
inputs=cur_layer, rate=_DROPOUT_RATE, training=is_training)
logits = tf.compat.v1.layers.dense(cur_layer, units=_GROUP_SIZE)
return logits
return _score_fn
###Output
_____no_output_____
###Markdown
Losses, Metrics and Ranking Head Evaluation MetricsWe have provided an implementation of several popular Information Retrieval evaluation metrics in the TF Ranking library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/metrics.pyL32). The user can also define a custom evaluation metric, as shown in the description below.
###Code
def eval_metric_fns():
"""Returns a dict from name to metric functions.
This can be customized as follows. Care must be taken when handling padded
lists.
def _auc(labels, predictions, features):
    is_label_valid = tf.reshape(tf.greater_equal(labels, 0.), [-1, 1])
    clean_labels = tf.boolean_mask(tf.reshape(labels, [-1, 1]), is_label_valid)
    clean_pred = tf.boolean_mask(tf.reshape(predictions, [-1, 1]), is_label_valid)
return tf.metrics.auc(clean_labels, tf.sigmoid(clean_pred), ...)
metric_fns["auc"] = _auc
Returns:
A dict mapping from metric name to a metric function with above signature.
"""
metric_fns = {}
metric_fns.update({
"metric/ndcg@%d" % topn: tfr.metrics.make_ranking_metric_fn(
tfr.metrics.RankingMetricKey.NDCG, topn=topn)
for topn in [1, 3, 5, 10]
})
return metric_fns
###Output
_____no_output_____
###Markdown
Ranking LossesWe provide several popular ranking loss functions as part of the library, which are shown [here](https://github.com/tensorflow/ranking/blob/d8c2e2e64a92923f1448cf5302c92a80bb469a20/tensorflow_ranking/python/losses.pyL35). The user can also define a custom loss function, similar to ones in tfr.losses.
###Code
# Define a loss function. To find a complete list of available
# loss functions or to learn how to add your own custom function
# please refer to the tensorflow_ranking.losses module.
_LOSS = tfr.losses.RankingLossKey.APPROX_NDCG_LOSS
loss_fn = tfr.losses.make_loss_fn(_LOSS)
###Output
_____no_output_____
###Markdown
Ranking HeadIn the Estimator workflow, Head is an abstraction that encapsulates losses and corresponding metrics. Head easily interfaces with the Estimator, requiring the user only to define a scoring function and to specify losses and metric computation.
###Code
optimizer = tf.compat.v1.train.AdagradOptimizer(
learning_rate=_LEARNING_RATE)
def _train_op_fn(loss):
"""Defines train op used in ranking head."""
update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
minimize_op = optimizer.minimize(
loss=loss, global_step=tf.compat.v1.train.get_global_step())
train_op = tf.group([update_ops, minimize_op])
return train_op
ranking_head = tfr.head.create_ranking_head(
loss_fn=loss_fn,
eval_metric_fns=eval_metric_fns(),
train_op_fn=_train_op_fn)
###Output
_____no_output_____
###Markdown
Putting It All Together in a Model BuilderWe are now ready to put all of the components above together and create an `Estimator` that can be used to train and evaluate a model.
###Code
model_fn = tfr.model.make_groupwise_ranking_fn(
group_score_fn=make_score_fn(),
transform_fn=make_transform_fn(),
group_size=_GROUP_SIZE,
ranking_head=ranking_head)
###Output
_____no_output_____
###Markdown
Train and evaluate the ranker
###Code
def train_and_eval_fn():
"""Train and eval function used by `tf.estimator.train_and_evaluate`."""
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=1000)
ranker = tf.estimator.Estimator(
model_fn=model_fn,
model_dir=_MODEL_DIR,
config=run_config)
train_input_fn = lambda: input_fn(_TRAIN_DATA_PATH)
eval_input_fn = lambda: input_fn(_TEST_DATA_PATH, num_epochs=1)
train_spec = tf.estimator.TrainSpec(
input_fn=train_input_fn, max_steps=_NUM_TRAIN_STEPS)
eval_spec = tf.estimator.EvalSpec(
name="eval",
input_fn=eval_input_fn,
throttle_secs=15)
return (ranker, train_spec, eval_spec)
! rm -rf "/tmp/ranking_model_dir" # Clean up the model directory.
ranker, train_spec, eval_spec = train_and_eval_fn()
tf.estimator.train_and_evaluate(ranker, train_spec, eval_spec)
###Output
_____no_output_____
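###Markdown
After training, the ranker can also be evaluated on its own; a small sketch that reruns the test set once and returns the ranking metrics defined earlier.
###Code
# Standalone evaluation pass over the test data (one epoch).
eval_metrics = ranker.evaluate(
    input_fn=lambda: input_fn(_TEST_DATA_PATH, num_epochs=1))
print(eval_metrics)
###Output
_____no_output_____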
###Markdown
Launch TensorBoard
###Code
%load_ext tensorboard
%tensorboard --logdir="/tmp/ranking_model_dir" --port 12345
###Output
_____no_output_____
###Markdown
A sample tensorboard output is shown here, with the ranking metrics. Generating Predictions We show how to generate predictions over the features of a dataset. We assume that the label is not present and needs to be inferred using the ranking model.Similar to the `input_fn` used for training and evaluation, `predict_input_fn` reads in data in ELWC format, stored as TFRecords, to generate features. We set the number of epochs to 1, so that the generator stops iterating when it reaches the end of the dataset. The datapoints are also not shuffled while reading, so that the behavior of the `predict()` function is deterministic.
###Code
def predict_input_fn(path):
context_feature_spec = tf.feature_column.make_parse_example_spec(
context_feature_columns().values())
example_feature_spec = tf.feature_column.make_parse_example_spec(
list(example_feature_columns().values()))
dataset = tfr.data.build_ranking_dataset(
file_pattern=path,
data_format=tfr.data.ELWC,
batch_size=_BATCH_SIZE,
list_size=_LIST_SIZE,
context_feature_spec=context_feature_spec,
example_feature_spec=example_feature_spec,
reader=tf.data.TFRecordDataset,
shuffle=False,
num_epochs=1)
features = tf.compat.v1.data.make_one_shot_iterator(dataset).get_next()
return features
###Output
_____no_output_____
###Markdown
We generate predictions on the test dataset, where we only consider context and example features and predict the labels. The `predict_input_fn` produces features for a batch of datapoints at a time. Batching allows us to iterate over large datasets that cannot be loaded in memory.
###Code
predictions = ranker.predict(input_fn=lambda: predict_input_fn("/tmp/test.tfrecords"))
###Output
_____no_output_____
###Markdown
`ranker.predict` returns a generator, which we can iterate over to create predictions, until the generator is exhausted.
###Code
x = next(predictions)
assert(len(x) == _LIST_SIZE) ## Note that this includes padding.
###Output
_____no_output_____ |
support/template/dataset/process-dataset.ipynb | ###Markdown
Process $dataset_name DataA Jupyter Notebook to download and preprocess files for transformation to BioLink RDF. Download filesThe download can be defined:* in this Jupyter Notebook using Python* as a Bash script in the `download/download.sh` file, and executed using `d2s download $dataset_id`
###Code
import os
import glob
import requests
import functools
import shutil
import pandas as pd
# Use Pandas, load file in memory
def convert_tsv_to_csv(tsv_file):
csv_table=pd.read_table(tsv_file,sep='\t')
csv_table.to_csv(tsv_file[:-4] + '.csv',index=False)
# Variables and path for the dataset
dataset_id = '$dataset_id'
dsri_flink_pod_id = 'flink-jobmanager-###'
input_folder = '/notebooks/workspace/input/' + dataset_id
mapping_folder = '/notebooks/datasets/' + dataset_id + '/mapping'
os.makedirs(input_folder, exist_ok=True)
# Use input folder as working folder
os.chdir(input_folder)
files_to_download = [
'https://raw.githubusercontent.com/MaastrichtU-IDS/d2s-scripts-repository/master/resources/cohd-sample/concepts.tsv'
]
# Download each file and uncompress them if needed
# Use Bash because faster and more reliable than Python
for download_url in files_to_download:
os.system('wget -N ' + download_url)
os.system('find . -name "*.tar.gz" -exec tar -xzvf {} \;')
os.system('unzip -o \*.zip')
# Rename .txt to .tsv
listing = glob.glob('*.txt')
for filename in listing:
os.rename(filename, filename[:-4] + '.tsv')
## Convert TSV to CSV to be processed with the RMLStreamer
# use Pandas (load in memory)
convert_tsv_to_csv('concepts.tsv')
# Use Bash
# cmd_convert_csv = """sed -e 's/"/\\"/g' -e 's/\t/","/g' -e 's/^/"/' -e 's/$/"/' -e 's/\r//' concepts.tsv > concepts.csv"""
# os.system(cmd_convert_csv)
###Output
_____no_output_____ |
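###Markdown
For TSV files too large to fit in memory, a chunked variant of the conversion can be used instead (a sketch, not part of the original template; the chunk size is arbitrary).
###Code
# Stream the TSV in chunks and append to the CSV, avoiding a full in-memory load.
def convert_tsv_to_csv_chunked(tsv_file, chunksize=100000):
    csv_file = tsv_file[:-4] + '.csv'
    for i, chunk in enumerate(pd.read_table(tsv_file, sep='\t', chunksize=chunksize)):
        chunk.to_csv(csv_file, index=False, mode='w' if i == 0 else 'a', header=(i == 0))
###Output
_____no_output_____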
scripts/papeles - institutions and topics analysis.ipynb | ###Markdown
papeles package - institution networks per topicIn this notebook, we'll build institution networks for the topics extracted in [this script](https://github.com/glhuilli/papeles/blob/master/scripts/papeles%20-%20keywords%20topics%20analysis.ipynb).
###Code
import os
import json
from collections import Counter, defaultdict
import itertools
from tqdm.notebook import tqdm
import networkx as nx
from papeles.paper.neurips import get_key, institutions, institutions_graph
from papeles.utils.topics import TopicPredictor
# These are files with encoding issues that were not parse correctly by the pdf_parser
SKIP_FILES = [
'5049-nonparametric-multi-group-membership-model-for-dynamic-networks.pdf_headers.txt',
'4984-cluster-trees-on-manifolds.pdf_headers.txt',
'5820-alternating-minimization-for-regression-problems-with-vector-valued-outputs.pdf_headers.txt',
    '9065-visualizing-and-measuring-the-geometry-of-bert.pdf_headers.txt',
'4130-implicit-encoding-of-prior-probabilities-in-optimal-neural-populations.pdf_headers.txt',
'7118-local-aggregative-games.pdf_headers.txt'
]
NEURIPS_ANALYSIS_DATA_PATH = '/var/data/neurips_analysis'
file_lines = defaultdict(list)
for filename in tqdm(os.listdir(os.path.join(NEURIPS_ANALYSIS_DATA_PATH, 'files_headers/')), 'loading files'):
if filename in SKIP_FILES:
continue
with open(os.path.join(NEURIPS_ANALYSIS_DATA_PATH, './files_headers/', filename), 'r') as f:
for line in f.readlines():
file_lines[get_key(filename)].append(line.strip())
metadata_path = os.path.join(NEURIPS_ANALYSIS_DATA_PATH, 'files_metadata/')
metadata = {}
for filename in tqdm(os.listdir(metadata_path), 'loading metadata'):
with open(os.path.join(metadata_path, filename), 'r') as f: # open in readonly mode
for line in f.readlines():
data = json.loads(line)
metadata[get_key(data['pdf_name'])] = data
###Output
_____no_output_____
###Markdown
Loading and predicting topicsIn this analysis, I'll use only the 3-gram topics. The 2-gram and 1-gram topics needed further manual post-processing (e.g. removing topics that were mostly about writing style instead of research topics). For the purposes of this analysis, I'll only compute the top 3 most frequent topics mentioned in abstracts per year, from 2009 to 2019. Note that the topics generated in a different script can be loaded and used to predict topics in new documents via the `TopicPredictor` object, as presented in the example below. The "top topic" per year is then computed from how frequently these topics were predicted for the papers of that particular year.
###Code
# load topics
with open(os.path.join(NEURIPS_ANALYSIS_DATA_PATH, '3grams_topics.json'), 'r') as f:
topics = json.load(f)
topic_predictor = TopicPredictor(topics)
topics_per_year = {}
year_topic_files = defaultdict(lambda: defaultdict(list))
for key, data in metadata.items():
year = data['year']
if year not in topics_per_year:
topics_per_year[year] = Counter()
topic_prediction = topic_predictor.predict_topics(data['abstract'])
year_topic_files[year][key] = [x[0] for x in sorted(topic_prediction.items(), key=lambda x: x[1], reverse=True) if x[1] > 0]
if sum(topic_prediction.values()) > 0:
top_prediction = [x[0] for x in sorted(topic_prediction.items(), key=lambda x: x[1], reverse=True) if x[1] > 0][:5]
topics_per_year[year].update(top_prediction)
top3_topics_per_year = defaultdict(list)
for year, topic_counter in topics_per_year.items():
top3_topics_per_year[year] = [x[0] for x in sorted(topic_counter.items(), key=lambda x: x[1], reverse=True)][:3]
top_topics = set()
for year, top3_topics in top3_topics_per_year.items():
top_topics.update(top3_topics)
sorted_top_topics = sorted(top_topics, key=lambda x: int(x.split('_')[-1]), reverse=False) # mini hack to sort by topic number
for t in sorted_top_topics:
print(f'------\n{t}: {topics[t]}')
###Output
------
topic_2: ['loss_functions_deep', 'functions_deep_neural', 'optimized_stochastic_gradient', 'modern_deep_networks', 'stochastic_gradient_descent', 'gradient_descent_sgd', 'deep_neural_networks', 'high_dimensional_datasets', 'low_dimensional_structures', 'iterative_algorithm_based']
------
topic_8: ['sequential_monte_carlo', 'partially_observable_markov', 'observable_markov_decision', 'principled_framework_planning', 'provide_principled_framework', 'partially_observable_stochastic', 'markov_decision_processes', 'superposition-structured_dirty_statistical', 'problem_learning_control', 'high_dimensional_datasets']
------
topic_9: ['gaussian_graphical_models', 'paper_address_problem', 'linear_regression_models', 'dirichlet_allocation_lda', 'maximum_posteriori_map', 'address_problem_learning', 'problem_learning_structure', 'posteriori_map_assignment', 'study_problem_finding', 'directed_graphical_models']
------
topic_10: ['probabilistic_graphical_model', 'support_vector_machines', 'inference_graphical_models', 'nonlinear_dynamical_system', 'performs_probabilistic_inference', 'brain_performs_probabilistic', 'belief_propagation_lbp', 'vision_convolutional_neural', 'recurrent_neural_networks', 'natural_language_processing']
------
topic_15: ['simple_computationally_efficient', 'reinforcement_learning_psrl', 'sampling_reinforcement_learning', 'posterior_sampling_reinforcement', 'provably_efficient_learning', 'state_action_spaces', 'reinforcement_learning_algorithm', 'markov_decision_processes', 'superposition-structured_dirty_statistical', 'paper_concerns_problem']
------
topic_18: ['principal_component_analysis', 'machine_learning_problems', 'component_analysis_pca', 'range_machine_learning', 'spiked_covariance_model', 'dominant_singular_vectors', 'singular_vectors_matrix', 'low_dimensional_structures', 'high_dimensional_datasets', 'low-rank_tensor_decomposition']
------
topic_19: ['deep_reinforcement_learning', 'machine_learning_algorithms', 'learning_conditional_random', 'hyperparameters_machine_learning', 'long_standing_pursuit', 'standing_pursuit_machine', 'pursuit_machine_learning', 'tuning_hyperparameters_machine', 'deep_neural_networks', 'learning_structured_predictors']
------
topic_22: ['stochastic_gradient_methods', 'learning_signal_processing', 'machine_learning_signal', 'optimization_problems_machine', 'large-scale_optimization_problems', 'stochastic_gradient_descent', 'problems_machine_learning', 'graph-based_semi-supervised_learning', 'high_dimensional_datasets', 'contextual_bandits_learner']
------
topic_23: ['dirichlet_process_mixture', 'sequential_monte_carlo', 'approximate_inference_algorithms', 'machine_learning_markov', 'approximate_inference_algorithm', 'probabilistic_inference_algorithms', 'approximate_probabilistic_inference', 'contextual_bandits_learner', 'problem_learning_control', 'superposition-structured_dirty_statistical']
------
topic_35: ['convolutional_neural_networks', 'deep_convolutional_neural', 'multiple_kernel_learning', 'statistics_machine_learning', 'neural_networks_trained', 'neural_networks_achieved', 'large_class_loss', 'success_deep_convolutional', 'loss_function_data', 'machine_learning_practice']
------
topic_45: ['machine_learning_models', 'online_learning_algorithm', 'stochastic_convex_optimization', 'optimal_convergence_rates', 'joint_probability_distribution', 'optimization_algorithms_popular', 'maximum_posteriori_map', 'finding_maximum_posteriori', 'max-product_belief_propagation', 'posteriori_map_assignment']
------
topic_52: ['deep_neural_networks', 'convolutional_neural_network', 'neural_networks_trained', 'neural_network_model', 'key_goal_artificial', 'goal_artificial_general', 'artificial_general_intelligence', 'neural_networks_received', 'based_optimization_method', 'stochastic_gradient_descent']
###Markdown
Naming topicsVisually reviewing the 3-gram lists for each topic, most of them are easy to associate with a particular research line (e.g. `Topic 2` pairs well with optimization methods for deep learning), though other topics are a little harder (e.g. `Topic 10` has graphical models, SVMs, neural networks, and NLP in it). Given that these topic terms are ranked within each topic, I used the top 5 to decide on a name for the hard cases (e.g. `Topic 10`'s top 5 terms are most associated with `Probabilistic Graphical Models`, so that's the one I used).
###Code
topic_mapping = {
'topic_2': 'deep learning (optimization)',
'topic_8': 'markov decision processes',
'topic_9': 'probabilistic graphical models',
'topic_10': 'probabilistic graphical models (inference)',
'topic_15': 'reinforcement learning',
'topic_18': 'matrix decomposition',
'topic_19': 'deep reinforcement learning',
'topic_22': 'ML optimization problems (gradients)',
'topic_23': 'bayesian inference algorithms',
'topic_35': 'neural networks',
'topic_45': 'bayesian methods',
'topic_52': 'deep learning (models)'
}
topic_mapping_snake = {
'topic_2': 'deep_learning_optimization',
'topic_8': 'markov_decision_processes',
'topic_9': 'probabilistic_graphical_models',
'topic_10': 'probabilistic_graphical_models_inference',
'topic_15': 'reinforcement_learning',
'topic_18': 'matrix_decomposition',
'topic_19': 'deep_reinforcement_learning',
'topic_22': 'ML_optimization_problems_gradients',
'topic_23': 'bayesian_inference_algorithms',
'topic_35': 'neural_networks',
'topic_45': 'bayesian_methods',
'topic_52': 'deep_learning_models'
}
for year, top_topics in sorted(top3_topics_per_year.items(), key=lambda x: x[0]):
print(f'{year} --> {[topic_mapping.get(t) for t in top_topics]}')
###Output
2009 --> ['probabilistic graphical models', 'bayesian methods', 'probabilistic graphical models (inference)']
2010 --> ['probabilistic graphical models', 'reinforcement learning', 'neural networks']
2011 --> ['probabilistic graphical models (inference)', 'neural networks', 'ML optimization problems (gradients)']
2012 --> ['bayesian inference algorithms', 'probabilistic graphical models', 'markov decision processes']
2013 --> ['markov decision processes', 'matrix decomposition', 'reinforcement learning']
2014 --> ['probabilistic graphical models (inference)', 'neural networks', 'matrix decomposition']
2015 --> ['deep learning (optimization)', 'deep learning (models)', 'neural networks']
2016 --> ['deep learning (optimization)', 'deep learning (models)', 'deep reinforcement learning']
2017 --> ['deep learning (optimization)', 'deep learning (models)', 'deep reinforcement learning']
2018 --> ['deep learning (optimization)', 'deep learning (models)', 'deep reinforcement learning']
2019 --> ['deep learning (optimization)', 'deep learning (models)', 'deep reinforcement learning']
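###Markdown
To double-check the chosen names, a small sketch like the one below (reusing the `topics` and `topic_mapping` dicts defined above; not part of the original pipeline) prints each assigned name next to the topic's top 5 ranked terms:
###Code
# Show each named topic together with its top 5 ranked terms
for t, name in topic_mapping.items():
    print(f'{t:<9} {name:<45} {topics[t][:5]}')
###Output
_____no_output_____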
###Markdown
Institutions networks per topicIn this section, an institutions network is built using the papers associated with the top 3 topics of each year. Note that graphs are created as `directed` in this example (unlike the other institutions network example) so that the Hierarchical Edge Bundling (HEB) visualization can be generated from them.
###Code
inst_counter = Counter()
for file, lines in list(file_lines.items()):
file_institutions = institutions.get_file_institutions(lines)
unique_file_institutions = list(set(file_institutions))
inst_counter.update(unique_file_institutions)
def subset_data(year, topic, year_topic_files):
topic_files = year_topic_files[year]
keys = []
for key, topics_prediction in topic_files.items():
if topic in topics_prediction:
keys.append(key)
return keys
graphs_per_topic = {}
for year, topics_per_year in tqdm(sorted(top3_topics_per_year.items(), key=lambda x: x[0])):
print(f'------\nyear: {year}\n------')
if year not in graphs_per_topic:
graphs_per_topic[year] = {}
for idx, topic in enumerate(topics_per_year):
keys_filter = subset_data(year, topic, year_topic_files)
graphs_per_topic[year][topic], _ = institutions_graph.build_institutions_graph(file_lines, metadata, inst_counter, freq=5, year=year, keys_filter=keys_filter, directed=True)
print(f'\nTopic {idx+1}: {topic_mapping[topic]}')
print(nx.info(graphs_per_topic[year][topic]))
folder = 'heb_files'
for year, topics_per_year in tqdm(sorted(top3_topics_per_year.items(), key=lambda x: x[0])):
for idx, topic in enumerate(topics_per_year):
file_name = f'{year}-topic_{idx+1}-{topic_mapping_snake[topic]}_graph.json'
institutions_graph.dump_to_d3js_heb(
graphs_per_topic[year][topic], os.path.join(NEURIPS_ANALYSIS_DATA_PATH, folder, file_name))
###Output
_____no_output_____
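###Markdown
As a quick sanity check on these per-topic graphs (a sketch, not part of the original analysis), the most connected institutions of the most recent year can be read off `graphs_per_topic` directly:
###Code
# For the most recent year, list the 5 highest-degree institutions per top topic
last_year = sorted(top3_topics_per_year)[-1]
for topic in top3_topics_per_year[last_year]:
    g = graphs_per_topic[last_year][topic]
    top5 = sorted(dict(g.degree()).items(), key=lambda kv: kv[1], reverse=True)[:5]
    print(f'{last_year} / {topic_mapping[topic]}: {[node for node, _ in top5]}')
###Output
_____no_output_____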
###Markdown
Network analysis Using this data, there's a wide range of questions that could be answered. For example, which institutions have co-authored papers the most over the top 3 topics per year?
###Code
edges_per_year = defaultdict(set)
for year, topics_per_year in top3_topics_per_year.items():
for topic in topics_per_year:
for edge in graphs_per_topic[year][topic].edges():
if len(set(edge)) > 1:
e = f'{edge[0]}-{edge[1]}'
e_r = f'{edge[1]}-{edge[0]}'
if e_r in edges_per_year:
continue
edges_per_year[e].add(year)
[x for x in sorted(edges_per_year.items(), key=lambda x: len(x[1]), reverse=True) if len(x[1]) > 1]
###Output
_____no_output_____ |
build/_downloads/e3bb4787757a08beb7c577b09c259829/two_layer_net_autograd.ipynb | ###Markdown
PyTorch: Tensors and autograd-------------------------------A fully-connected ReLU network with one hidden layer and no biases, trained to predict y from x by minimizing the squared Euclidean distance. This implementation computes the forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute gradients. A PyTorch Tensor represents a node in a computational graph. If ``x`` is a Tensor with ``x.requires_grad=True``, then ``x.grad`` is another Tensor holding the gradient of ``x`` with respect to some scalar value.
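To make this concrete before the full example, here is a minimal autograd sketch (a toy separate from the network trained below): building a scalar from ``x`` and calling ``.backward()`` on it fills ``x.grad`` with the gradient.
###Code
import torch

# s = sum(2 * x) is a scalar function of x, so ds/dx = 2 for every element
x = torch.ones(3, requires_grad=True)
s = (2 * x).sum()
s.backward()      # populates x.grad
print(x.grad)     # tensor([2., 2., 2.])
###Output
_____no_output_____
###Markdown
The full two-layer network below relies on exactly this mechanism for its weight tensors.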
###Code
import torch
dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this line to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold input and outputs.
# Setting requires_grad=False indicates that we do not need to compute
# gradients with respect to these Tensors during the backward pass.
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)
# Create random Tensors for the model's learnable parameters: weights.
# Setting requires_grad=True indicates that we want to compute gradients
# with respect to these Tensors during the backward pass.
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y using operations on Tensors; these are
# exactly the same operations used in the previous section to compute the
# forward pass, but we do not need to keep references to intermediate values
# since we are not implementing the backward pass by hand.
y_pred = x.mm(w1).clamp(min=0).mm(w2)
# Compute and print loss using operations on Tensors.
# Now loss is a Tensor of shape (1,);
# loss.item() gets the scalar value held in the loss.
loss = (y_pred - y).pow(2).sum()
print(t, loss.item())
# Use autograd to compute the backward pass. This call will compute the
# gradient of loss with respect to all Tensors with requires_grad=True.
# After this call w1.grad and w2.grad will be Tensors holding the gradient
# of the loss with respect to w1 and w2 respectively.
loss.backward()
# Manually update weights using gradient descent. Wrap in torch.no_grad()
# because the weights have requires_grad=True, but we don't want this step
# to be tracked by autograd.
# An alternative way is to operate on weight.data and weight.grad.data.
# Recall that tensor.data gives a tensor that shares the storage but doesn't
# track history. You can also use torch.optim.SGD to achieve this.
with torch.no_grad():
w1 -= learning_rate * w1.grad
w2 -= learning_rate * w2.grad
# Manually zero the gradients after updating the weights
w1.grad.zero_()
w2.grad.zero_()
###Output
_____no_output_____ |
4-add-new-models/adding_new_models.ipynb | ###Markdown
Contributing a model to the Kipoi model repositoryThis notebook will show you how to set up a 'Kipoi model' and upload it to the [Kipoi model repository](https://github.com/kipoi/models). For a simple 'model contribution checklist' see also . Kipoi basicsContributing a model to Kipoi means writing a sub-folder with all the required files to the [Kipoi model repository](https://github.com/kipoi/models) via pull request.Two main components of the model repository are **model** and **dataloader**.  ModelA model takes numpy arrays as input and outputs numpy arrays. In practice, a model needs to implement the `predict_on_batch(x)` method, where `x` is a dictionary/list of numpy arrays. The model contributor needs to provide one of the following:- Serialized Keras, Sklearn, TensorFlow or PyTorch model- Custom model inheriting from `kipoi.model.BaseModel` - all the required files, i.e. weights, need to be loaded in the `__init__`. See and for more info. DataloaderA dataloader takes raw file paths or other parameters as arguments and outputs modelling-ready numpy arrays. Before writing your own dataloader take a look at our [kipoiseq](https://github.com/kipoi/kipoiseq) repository to see whether your use-case is covered by the available dataloaders. Writing your own dataloaderTechnically, dataloading can be done through a generator---batch-by-batch, sample-by-sample---or by just returning the whole dataset. The goal is to work directly with raw files (say fasta, bed, vcf, etc. in bioinformatics), as this allows making model predictions on new datasets without the burden of running custom pre-processing scripts. The model contributor needs to implement one of the following:- PreloadedDataset- Dataset- BatchDataset- SampleIterator- BatchIterator- SampleGenerator- BatchGeneratorSee for more info. Folder layoutHere is an example folder structure of a Kipoi model:```MyModel├── dataloader.py implements the dataloader (only necessary if you wrote your own dataloader)├── dataloader.yaml describes the dataloader (only necessary if you wrote your own dataloader)└── model.yaml describes the model``` The `model.yaml` and `dataloader.yaml` files give a complete description of the model, the dataloader and the files they depend on. Contributing a simple Iris-classifierDetails about the individual files will be revealed throughout the tutorial below. A simple Keras model will be trained to predict the Iris plant class from the well-known [Iris](archive.ics.uci.edu/ml/datasets/Iris) dataset. Outline1. Train the model2. Generate the model directory3. Store all data files required for the model and the dataloader in a temporary folder4. Write `model.yaml`5. Write `dataloader.yaml`6. Write `dataloader.py`7. Test the model with `$ kipoi test .`8. Publish data files on zenodo9. Update `model.yaml` and `dataloader.yaml` to contain the links10. Test again11. Commit, push and generate a pull request 1. Train the model Load and pre-process the data
###Code
import pandas as pd
import os
from sklearn.preprocessing import LabelBinarizer, StandardScaler
from sklearn import datasets
iris = datasets.load_iris()
# view more info about the dataset
# print(iris["DESCR"])
# Data pre-processing
y_transformer = LabelBinarizer().fit(iris["target"])
x_transformer = StandardScaler().fit(iris["data"])
x = x_transformer.transform(iris["data"])
y = y_transformer.transform(iris["target"])
x[:3]
y[:3]
###Output
_____no_output_____
###Markdown
Train an example modelLet's train a simple linear model using Keras.
###Code
from keras.models import Model
import keras.layers as kl
inp = kl.Input(shape=(4, ), name="features")
out = kl.Dense(units=3)(inp)
model = Model(inp, out)
model.compile("adam", "categorical_crossentropy")
model.fit(x, y, verbose=0)
###Output
Using TensorFlow backend.
###Markdown
2. Set the model directory up:In reality, you would also need to 1. Fork the [kipoi/models repository](https://github.com/kipoi/models)2. Clone your repository fork - `$ git clone [email protected]:/models.git`3. Create a new folder `` 3. Store the files in a temporary directoryAll the data of the model will have to be published on zenodo or figshare before the pull request is performed. While setting the Kipoi model up, it is handy to keep the files in a temporary directory in the model folder, which we will delete prior to the pull request.
###Code
# create the model directory
!mkdir models/iris
# create the temporary directory where we will keep the files that should later be published in zenodo or figshare
!mkdir models/iris/tmp
###Output
mkdir: cannot create directory ‘models/iris’: File exists
mkdir: cannot create directory ‘models/iris/tmp’: File exists
###Markdown
Now we can change the current working directory to the model directory:
###Code
import os
os.chdir("models/iris")
###Output
_____no_output_____
###Markdown
3a. Static files for dataloaderIn our case we need to write a new dataloader. The dataloader can use some trained transformer instances (here the `LabelBinarizer` and `StandardScaler` transformers from sklearn). These should be uploaded with the model files and then referenced correctly in the `dataloader.yaml` file. We will store the required files in the temporary folder:
###Code
import pickle
with open("tmp/y_transformer.pkl", "wb") as f:
pickle.dump(y_transformer, f, protocol=2)
with open("tmp/x_transformer.pkl", "wb") as f:
pickle.dump(x_transformer, f, protocol=2)
! ls tmp
###Output
example_features.csv model.json weights.h5 y_transformer.pkl
example_targets.csv sklearn_model.pkl x_transformer.pkl
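###Markdown
As a quick round-trip check (not a required Kipoi step), the pickled transformers can be reloaded and applied to a couple of rows to confirm they were stored correctly:
###Code
# Reload the pickled scaler and verify it still transforms raw features
with open("tmp/x_transformer.pkl", "rb") as f:
    x_transformer_reloaded = pickle.load(f)
print(x_transformer_reloaded.transform(iris["data"][:2]))
###Output
_____no_output_____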
###Markdown
3b. Model definition / weightsNow that we have the static files that are required by the dataloader, we also need to store the model architecture and weights:
###Code
# Architecture
with open("tmp/model.json", "w") as f:
f.write(model.to_json())
# Weights
model.save_weights("tmp/weights.h5")
###Output
_____no_output_____
###Markdown
Alternatively, if we were using a scikit-learn model, we would save the pickle file:
###Code
# Alternatively, for the scikit-learn model we would save the pickle file
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
lr = OneVsRestClassifier(LogisticRegression())
lr.fit(x, y)
with open("tmp/sklearn_model.pkl", "wb") as f:
pickle.dump(lr, f, protocol=2)
###Output
_____no_output_____
###Markdown
3c. Example files for the dataloaderEvery Kipoi dataloader has to provide a set of example files so that Kipoi can perform its automated tests and users can get an idea of what the dataloader files have to look like. Again we will store the files in the temporary folder:
###Code
# select first 20 rows of the iris dataset
X = pd.DataFrame(iris["data"][:20], columns=iris["feature_names"])
y = pd.DataFrame({"class": iris["target"][:20]})
# store the model input features and targets as csv files with column names:
X.to_csv("tmp/example_features.csv", index=False)
y.to_csv("tmp/example_targets.csv", index=False)
###Output
_____no_output_____
###Markdown
4 Write the model.yamlNow it is time to write the model.yaml in the model directory. Since we are in the testing stage we will be using local file paths in the `args` field - those will be replaced by zenodo links once everything is ready for publication.
###Code
model_yaml = """
defined_as: kipoi.model.KerasModel
args:
arch: tmp/model.json
weights: tmp/weights.h5
default_dataloader: . # path to the dataloader directory. Here it's defined in the same directory
info: # General information about the model
authors:
- name: Your Name
github: your_github_username
email: [email protected]
doc: Model predicting the Iris species
cite_as: https://doi.org:/... # preferably a doi url to the paper
trained_on: Iris species dataset (http://archive.ics.uci.edu/ml/datasets/Iris) # short dataset description
license: MIT # Software License - defaults to MIT
dependencies:
conda: # install via conda
- python=3.5
- h5py
# - soumith::pytorch # specify packages from other channels via <channel>::<package>
pip: # install via pip
- keras>=2.0.4
- tensorflow>=1.0
schema: # Model schema
inputs:
features:
shape: (4,) # array shape of a single sample (omitting the batch dimension)
doc: "Features in cm: sepal length, sepal width, petal length, petal width."
targets:
shape: (3,)
doc: "One-hot encoded array of classes: setosa, versicolor, virginica."
"""
with open("model.yaml", "w") as ofh:
ofh.write(model_yaml)
###Output
_____no_output_____
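###Markdown
Before moving on, a quick local sanity check (a sketch assuming PyYAML is installed; not a required Kipoi step) is to parse the string we just wrote and peek at a couple of fields:
###Code
import yaml

# Parse the YAML we just wrote and inspect a few fields
parsed = yaml.safe_load(model_yaml)
print(parsed["defined_as"])
print(parsed["schema"]["inputs"]["features"]["shape"])
###Output
_____no_output_____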
###Markdown
5 and 6 Write the dataloader.yaml and dataloader.py_**PLEASE REMEMBER:**_Before writing a dataloader yourself please check whether the same functionality can be achieved using a ready-made dataloader in [kipoiseq](https://github.com/kipoi/kipoiseq) and use those as explained in the Kipoi docs.Now it is time to write the `dataloader.yaml`. Since we defined the `default_dataloader` field in `model.yaml` as `.` Kipoi will expect that our `dataloader.yaml` file lies in the same directory. Since we are in the testing stage we will be using local file paths in the `args` field - those will be replaced by zenodo links once everything is ready for publication.
###Code
dataloader_yaml = """
type: Dataset
defined_as: dataloader.MyDataset
args:
features_file:
# descr: > allows multi-line fields
doc: >
Csv file of the Iris Plants Database from
http://archive.ics.uci.edu/ml/datasets/Iris features.
type: str
example: tmp/example_features.csv # example files
x_transformer:
default: tmp/x_transformer.pkl
#default:
# url: https://github.com/kipoi/kipoi/raw/57734d716b8dedaffe460855e7cfe8f37ec2d48d/example/models/sklearn_iris/dataloader_files/x_transformer.pkl
# md5: bc1bf3c61c418b2d07506a7d0521a893
y_transformer:
default: tmp/y_transformer.pkl
targets_file:
doc: >
Csv file of the Iris Plants Database targets.
Not required for making the prediction.
type: str
example: tmp/example_targets.csv
optional: True # if not present, the `targets` field will not be present in the dataloader output
info:
authors:
- name: Your Name
github: your_github_account
email: [email protected]
version: 0.1
doc: Model predicting the Iris species
dependencies:
conda:
- python=3.5
- pandas
- numpy
- sklearn
output_schema:
inputs:
features:
shape: (4,)
doc: "Features in cm: sepal length, sepal width, petal length, petal width."
targets:
shape: (3, )
doc: "One-hot encoded array of classes: setosa, versicolor, virginica."
metadata: # field providing additional information to the samples (not directly required by the model)
example_row_number:
doc: Just an example metadata column
"""
with open("dataloader.yaml", "w") as ofh:
ofh.write(dataloader_yaml)
###Output
_____no_output_____
###Markdown
Since we have referred to the dataloader as `dataloader.MyDataset`, Kipoi expects a `dataloader.py` file in the same directory as `dataloader.yaml`, which has to contain the dataloader class, in this case `MyDataset`.Notice that the external static files are arguments to the `__init__` function! Their paths were defined in `dataloader.yaml`.
###Code
import pickle
from kipoi.data import Dataset
import pandas as pd
import numpy as np
def read_pickle(f):
with open(f, "rb") as f:
return pickle.load(f)
class MyDataset(Dataset):
def __init__(self, features_file, targets_file=None, x_transformer=None, y_transformer=None):
self.features_file = features_file
self.targets_file = targets_file
self.y_transformer = read_pickle(y_transformer)
self.x_transformer = read_pickle(x_transformer)
self.features = pd.read_csv(features_file)
if targets_file is not None:
self.targets = pd.read_csv(targets_file)
assert len(self.targets) == len(self.features)
def __len__(self):
return len(self.features)
def __getitem__(self, idx):
x_features = np.ravel(self.x_transformer.transform(self.features.iloc[idx].values[np.newaxis]))
if self.targets_file is None:
y_class = {}
else:
y_class = np.ravel(self.y_transformer.transform(self.targets.iloc[idx].values[np.newaxis]))
return {
"inputs": {
"features": x_features
},
"targets": y_class,
"metadata": {
"example_row_number": idx
}
}
###Output
_____no_output_____
###Markdown
In order to elucidate what the Dataloader class does I will make a few function calls that are usually performed by the Kipoi API in order to generate model input:
###Code
# instantiate the dataloader
ds = MyDataset("tmp/example_features.csv", "tmp/example_targets.csv", "tmp/x_transformer.pkl",
"tmp/y_transformer.pkl")
# call __getitem__
ds[5]
it = ds.batch_iter(batch_size=3, shuffle=False, num_workers=2)
next(it)
###Output
_____no_output_____
###Markdown
I will now store the code from above in a file so that we can test it:
###Code
dataloader_py = """
import pickle
from kipoi.data import Dataset
import pandas as pd
import numpy as np
def read_pickle(f):
with open(f, "rb") as f:
return pickle.load(f)
class MyDataset(Dataset):
def __init__(self, features_file, targets_file=None, x_transformer=None, y_transformer=None):
self.features_file = features_file
self.targets_file = targets_file
self.y_transformer = read_pickle(y_transformer)
self.x_transformer = read_pickle(x_transformer)
self.features = pd.read_csv(features_file)
if targets_file is not None:
self.targets = pd.read_csv(targets_file)
assert len(self.targets) == len(self.features)
def __len__(self):
return len(self.features)
def __getitem__(self, idx):
x_features = np.ravel(self.x_transformer.transform(self.features.iloc[idx].values[np.newaxis]))
if self.targets_file is None:
y_class = {}
else:
y_class = np.ravel(self.y_transformer.transform(self.targets.iloc[idx].values[np.newaxis]))
return {
"inputs": {
"features": x_features
},
"targets": y_class,
"metadata": {
"example_row_number": idx
}
}
"""
with open("dataloader.py", "w") as ofh:
ofh.write(dataloader_py)
###Output
_____no_output_____
###Markdown
7 Test the modelNow it is time to test the model.
###Code
!kipoi test .
###Output
WARNING [kipoi.specs] doc empty for one of the dataloader `args` fields
WARNING [kipoi.specs] doc empty for one of the dataloader `args` fields
INFO [kipoi.data] successfully loaded the dataloader ./. from /home/avsec/workspace/kipoi/examples/4-add-new-model/models/iris/dataloader.MyDataset
Using TensorFlow backend.
INFO [kipoi.model] successfully loaded model architecture from <_io.TextIOWrapper name='tmp/model.json' mode='r' encoding='UTF-8'>
2018-11-13 10:54:43.062166: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
INFO [kipoi.model] successfully loaded model weights from tmp/weights.h5
INFO [kipoi.pipeline] dataloader.output_schema is compatible with model.schema
INFO [kipoi.pipeline] Initialized data generator. Running batches...
INFO [kipoi.pipeline] Returned data schema correct
100%|█████████████████████████████████████████████| 1/1 [00:00<00:00, 41.79it/s]
INFO [kipoi.pipeline] predict_example done!
INFO [kipoi.cli.main] Successfully ran test_predict
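###Markdown
The publishing steps below ask for the md5 sum of each uploaded file. Besides the `curl ... | md5sum` route mentioned there, the sums can also be computed locally, e.g. with a small sketch like this (using only the standard library `hashlib`):
###Code
import hashlib

def md5sum(path, chunk_size=8192):
    """Compute the md5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

for fname in ["tmp/x_transformer.pkl", "tmp/y_transformer.pkl", "tmp/weights.h5"]:
    print(fname, md5sum(fname))
###Output
_____no_output_____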
###Markdown
8. Publish data on zenodo or figshareNow that the model works, it is time to upload the data files to zenodo or figshare. To do so, follow the instructions on the website. It might be necessary to remove file suffixes in order to be able to load the respective files. 9 Update `model.yaml` and `dataloader.yaml`Now the local file paths in `model.yaml` and `dataloader.yaml` have to be replaced by the zenodo / figshare URLs in the following way.The entry:```yamlargs: ... x_transformer: default: tmp/x_transformer.pkl```would be replaced by:```yamlargs: ... x_transformer: default: url: https://zenodo.org/path/to/example_files/x_transformer.pkl md5: 76a5sd76asd57```So every local path has to be replaced by the `url` and `md5` combination, where `md5` is the md5 sum of the file. If you cannot find the md5 sum on the zenodo / figshare website, you can for example run `curl https://zenodo.org/.../x_transformer.pkl | md5sum` to calculate it.Now, after replacing all the files, test the setup again by running `kipoi test .` and then delete the `tmp` folder. The only file(s) remaining in the folder should then be `model.yaml` (and in this case also `dataloader.py` and `dataloader.yaml`). 10 Test againNow that you have deleted the temporary files, rerun the test to make sure everything works fine. 11 Commit and pushNow commit the `model.yaml` and, if needed (like in this example), also the `dataloader.py` and `dataloader.yaml` files by running `git add model.yaml` (plus `git add dataloader.py dataloader.yaml` if applicable).Now you can push back to your fork (`git push`) and submit a pull request to `kipoi/models` to request adding your model to the Kipoi models. Accessing local models through kipoi In Kipoi it is not necessary to publish your model. You can leverage the full functionality of Kipoi also for local models. All you have to do is specify `--source dir` when using the CLI or set `source="dir"` in the python API. The model name is then the local path to the model folder.
###Code
import kipoi
m = kipoi.get_model(".", source="dir") # See also python-sdk.ipynb
m.pipeline.predict({"features_file": "tmp/example_features.csv", "targets_file": "tmp/example_targets.csv" })[:5]
m.info
m.default_dataloader
m.model
m.predict_on_batch
###Output
_____no_output_____ |
site/en/r1/tutorials/keras/overfit_and_underfit.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configuredto run in TF2's [compatbility mode](https://www.tensorflow.org/guide/migrate)but will run in TF1 as well. To use TF1 in Colab, use the[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)magic. As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
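###Markdown
As an aside (a sketch not in the original tutorial), the penalty term itself can be recomputed by hand from the trained kernels to see how large the `l2(0.001)` cost actually is:
###Code
import numpy as np

# 0.001 * sum of squared kernel weights, over the layers that carry a regularizer
l2_penalty = sum(
    0.001 * np.sum(np.square(layer.get_weights()[0]))
    for layer in l2_model.layers
    if getattr(layer, "kernel_regularizer", None) is not None
)
print("Total L2 penalty added to the training loss:", l2_penalty)
###Output
_____no_output_____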
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
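###Markdown
To make the dropout mechanism described above concrete, here is a tiny numpy sketch on the example activation vector (an illustration only; Keras applies an equivalent rescaling internally):
###Code
import numpy as np

rng = np.random.RandomState(0)
activations = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
rate = 0.5

# Training time: zero out each unit independently with probability `rate`
mask = rng.binomial(1, 1 - rate, size=activations.shape)
print("train:", activations * mask)

# Test time (classic formulation): keep all units, but scale by the keep
# probability so the expected activation matches what was seen in training
print("test :", activations * (1 - rate))
###Output
_____no_output_____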
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with less hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
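###Markdown
Before reading the curves above, it can help to make the capacity gap concrete. The summaries printed earlier already include the parameter counts, but we can also query them directly (a small sketch that assumes the three models defined above are still in scope):
###Code
# Compare the number of learnable parameters of the three models.
for name, model in [('baseline', baseline_model),
                    ('smaller', smaller_model),
                    ('bigger', bigger_model)]:
    print(name, model.count_params())
###Output
_____no_output_____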
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors: * [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights). * [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time. Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
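###Markdown
To connect the plot back to the formula, we can recompute the `l2(0.001)` penalty by hand from the trained weights. This is only a sketch: it assumes `l2_model` from above is in scope and that `get_weights()` returns the kernels and biases of the three layers in order.
###Code
import numpy as np
# 0.001 * sum(w**2) over the kernels of the two regularized hidden layers.
kernels = l2_model.get_weights()[::2][:2]
penalty = sum(0.001 * np.sum(np.square(w)) for w in kernels)
print('approximate L2 penalty added to the training loss:', penalty)
###Output
_____no_output_____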
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, to compensate for the fact that more units are active than at training time. In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it. Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
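###Markdown
The "dropping out" operation itself is easy to picture with plain NumPy, using the example vector from the text above. This toy sketch only zeroes entries at random; it is not the exact library implementation (which may also rescale the kept units so that the expected sum is unchanged):
###Code
import numpy as np
rng = np.random.RandomState(0)
layer_output = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
mask = rng.rand(len(layer_output)) >= 0.5  # keep each unit with probability 0.5
print(layer_output * mask)
###Output
_____no_output_____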
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
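###Markdown
As a quick sanity check of the helper, here is the `[3, 5]` example from the text with a dimension of 10 instead of 10,000, so the whole vector fits on one line (a sketch that reuses `multi_hot_sequences` defined above):
###Code
print(multi_hot_sequences([[3, 5]], dimension=10))
###Output
_____no_output_____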
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger model As an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors: * [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights). * [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time. Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
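###Markdown
The same pattern works for the L1 flavor mentioned above: only the regularizer instance changes. This is just a sketch of the layer definition, relying on the `keras` and `tf` imports from the cells above; we do not train an L1 variant here.
###Code
# An L1-regularized layer, defined but not used in a model here.
l1_layer = keras.layers.Dense(16,
                              kernel_regularizer=keras.regularizers.l1(0.001),
                              activation=tf.nn.relu)
###Output
_____no_output_____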
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, to compensate for the fact that more units are active than at training time. In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it. Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras). In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before). The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill. To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well. In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger model As an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
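###Markdown
Another way to read the same histories is to ask at which epoch each model's validation loss bottoms out. This is a sketch that assumes the `binary_crossentropy` metric was recorded for the validation data, as in the `fit` calls above:
###Code
import numpy as np
for name, h in [('baseline', baseline_history),
                ('smaller', smaller_history),
                ('bigger', bigger_history)]:
    val_losses = h.history['val_binary_crossentropy']
    best_epoch = int(np.argmin(val_losses)) + 1
    print(name, 'reaches its lowest validation loss at epoch', best_epoch)
###Output
_____no_output_____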
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors: * [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights). * [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time. Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, to compensate for the fact that more units are active than at training time. In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it. Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
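###Markdown
You can also poke at a `Dropout` layer in isolation. The sketch below is only illustrative and reuses the `keras` and `np` imports from above: with `training=True` roughly half of the entries of an all-ones input are zeroed (the kept entries may also be rescaled, depending on the implementation), while with `training=False` the input passes through unchanged.
###Code
# A standalone Dropout layer applied to a small all-ones input.
drop = keras.layers.Dropout(0.5)
x = np.ones((1, 8), dtype='float32')
print(drop(x, training=True))
print(drop(x, training=False))
###Output
_____no_output_____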
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras). In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before). The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill. To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well. In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
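###Markdown
The plot is consistent with that: counting the ones near the start of the vector versus near the end makes the skew explicit (a quick sketch using the first encoded review):
###Code
print('ones in the first 100 positions:', int(train_data[0][:100].sum()))
print('ones in the last 100 positions:', int(train_data[0][-100:].sum()))
###Output
_____no_output_____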
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger model As an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones. A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors: * [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weight coefficients (i.e. to what is called the "L1 norm" of the weights). * [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weight coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization. In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time. Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropout Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, to compensate for the fact that more units are active than at training time. In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it. Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting Run in Google Colab View source on GitHub As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
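###Markdown
As a quick numeric check of the claim above, we can compare the density of ones among the lowest (most frequent) word indices with the overall density. This is a small sketch using the `train_data` array already in memory:
###Code
# Fraction of ones among the first 100 (most frequent) word indices
print(train_data[0][:100].mean())
# Fraction of ones across the full 10,000-dimensional vector
print(train_data[0].mean())
###Output
_____no_output_____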
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
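###Markdown
Since the discussion above is framed in terms of the number of learnable parameters ("capacity"), it can help to read that number off directly. A minimal sketch using the standard Keras `count_params()` method:
###Code
# Total number of learnable parameters in the baseline model
print(baseline_model.count_params())
###Output
_____no_output_____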
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
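###Markdown
To pin down when each model starts to overfit, here is a small sketch that reads the epoch of minimum validation loss from the history objects defined above (epoch numbers are 0-based here):
###Code
for name, history in [('baseline', baseline_history),
                      ('smaller', smaller_history),
                      ('bigger', bigger_history)]:
    # Epoch at which the validation binary cross-entropy is lowest
    best_epoch = int(np.argmin(history.history['val_binary_crossentropy']))
    print(name, 'reaches its minimum validation loss at epoch', best_epoch)
###Output
_____no_output_____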
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
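###Markdown
The text above also describes L1 regularization. The following is a minimal sketch of how an L1-regularized variant could be defined with the same architecture; it is shown only to illustrate the keyword argument and is not trained here:
###Code
# Same layout as the baseline, but with an L1 penalty on the kernel weights
l1_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l1(0.001),
                       activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l1(0.001),
                       activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l1_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy', 'binary_crossentropy'])
l1_model.summary()
###Output
_____no_output_____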
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
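###Markdown
To make the `l2(0.001)` penalty described above concrete, here is a small sketch that recomputes the added cost for the first layer by hand from its weight matrix. `get_weights()` returns NumPy arrays, so no TensorFlow session is needed:
###Code
# Kernel (weight matrix) of the first Dense layer of the L2-regularized model
kernel = l2_model.layers[0].get_weights()[0]
# 0.001 * sum of squared weight coefficients, i.e. the L2 cost contributed by this layer
print(0.001 * np.sum(np.square(kernel)))
###Output
_____no_output_____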
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____
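###Markdown
To illustrate the mechanics of dropout described above independently of Keras, here is a small NumPy sketch that zeroes out a random fraction of an activation vector. The numbers are the illustrative values from the text, not real layer outputs:
###Code
layer_output = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
rate = 0.5  # dropout rate
# Keep each feature with probability (1 - rate); dropped features are set to zero
mask = np.random.binomial(1, 1.0 - rate, size=layer_output.shape)
print(layer_output * mask)
###Output
_____no_output_____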
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Explore overfitting and underfitting Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configuredto run in TF2's [compatbility mode](https://www.tensorflow.org/guide/migrate)but will run in TF1 as well. To use TF1 in Colab, use the[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)magic. As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras).In both of the previous examples—classifying movie reviews, and predicting fuel efficiency—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to a *testing set* (or data they haven't seen before).The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
###Code
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB datasetRather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
###Code
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
###Output
_____no_output_____
###Markdown
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
###Code
plt.plot(train_data[0])
###Output
_____no_output_____
###Markdown
Demonstrate overfittingThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.We'll create a simple model using only ```Dense``` layers as a baseline, then create smaller and larger versions, and compare them. Create a baseline model
###Code
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a smaller model Let's create a model with fewer hidden units to compare against the baseline model that we just created:
###Code
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
###Output
_____no_output_____
###Markdown
And train the model using the same data:
###Code
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Create a bigger modelAs an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
###Code
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
###Output
_____no_output_____
###Markdown
And, again, train the model using the same data:
###Code
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
Plot the training and validation loss The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
###Code
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
###Output
_____no_output_____
###Markdown
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss). Strategies Add weight regularization You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the least amount of assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weights values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization), where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization), where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
###Code
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
###Output
_____no_output_____
###Markdown
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.Here's the impact of our L2 regularization penalty:
###Code
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
###Output
_____no_output_____
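###Markdown
Keras also provides a combined regularizer that applies the L1 and L2 penalties described above at the same time. A minimal sketch, shown only to illustrate `keras.regularizers.l1_l2` and not trained here:
###Code
# Both penalties on the kernel weights; the coefficients are illustrative
l1_l2_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l1_l2(l1=0.001, l2=0.001),
                       activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l1_l2(l1=0.001, l2=0.001),
                       activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l1_l2_model.summary()
###Output
_____no_output_____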
###Markdown
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters. Add dropoutDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. set to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5,1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of layer right before.Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
###Code
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
###Output
_____no_output_____ |
solutions/6_scipp/3_understanding-event-data.ipynb | ###Markdown
Understanding Event Data Introduction Neutron-scattering data may be recorded in "event mode": for each detected neutron a (pulse) timestamp and a time-of-flight is stored. This notebook will develop an understanding of how to work with this type of data. Our objective is *not* to demonstrate or develop a full reduction workflow. Instead we *develop understanding of data structures and opportunities* that event data provides. This tutorial contains exercises, but solutions are included directly. We encourage you to download this notebook and run through it step by step before looking at the solutions. Event data is a particularly challenging concept, so make sure to understand every aspect before moving on. We recommend using a recent version of *JupyterLab*: the solutions are included as hidden cells and shown only on demand. We use data containing event data from the POWGEN powder diffractometer at SNS. Note that the data has been modified for the purpose of this tutorial and is not entirely in its original state. We begin by loading the file and plotting the raw data:
###Code
import scipp as sc
import scippneutron as scn
da = scn.data.tutorial_event_data()
da.plot()
###Output
_____no_output_____
###Markdown
We can see some diffraction lines, but they are oddly blurry.There is also an artifact from the prompt-pulse visible at $4000~\mu s$.This tutorial illustrates how event data gives us the power to understand and deal with the underlying issues.Before we start the investigation we cover some basics of working with event data. Inspecting event dataAs usual, to begin exploring a loaded file, we can inspect the HTML representation of a scipp object shown by Jupyter when typing a variable at the end of a cell (this can also be done using `sc.to_html(da)`, anywhere in a cell):
###Code
da
###Output
_____no_output_____
###Markdown
We can tell that this is binned (event) data from the `dtype` of the data (usually `DataArrayView`) as well as the inline preview, denoting that this is binned data with lists of given lengths.The meaning of these can best be understood using a graphical depiction of `da`, created using `sc.show`:
###Code
sc.show(da)
###Output
_____no_output_____
###Markdown
Each value (yellow cube with dots) is a small table containing event parameters such as pulse time, time-of-flight, and weights (usually 1 for raw data).**Definitions**:1. In scipp we refer to each of these cubes (containing a table of events) as a *bin*. We can think of this as a bin (or bucket) containing a number of records.2. An array of bins (such as the array a yellow cubes with dots, shown above) is referred to as *binned variable*. For example, `da.data` is a binned variable.3. A data array with data given by a binned variable is referred to as *binned data*. Binned data is a precursor to dense or histogrammed data.As we will see below binned data lets us do things that cannot or cannot properly be done with dense data, such as filtering or resampling.Each bin "contains" a small table, essentially a 1-D data array.For efficiency and consistency scipp does not actually store an individual data array for every bin.Instead each bin is a view to a section (slice) of a long table containing all the events from all bins combined.This explains the `dtype=DataArrayView` seen in the HTML representation above.For many practical purposes such a view of a data arrays behaves just like any other data array.The values of the bins can be accessed using the `values` property.For dense data this might give us a `float` value, for binned data we obtain a table.Here we access the 500th event list (counting from zero):
###Code
da.values[500]
###Output
_____no_output_____
###Markdown
ExerciseUse `sc.to_html()`, `sc.show()`, and `sc.table()` to explore and understand `da` as well as individual values of `da` such as `da.values[500]`. From binned data to dense dataWhile we often want to perform many operations on our data in event mode, a basic but important step is transformation of event data into dense data, since typically only the latter is suitable for data analysis software or plotting purposes.There are two options we can use for this transformation, described in the following. Option 1: Summing binsIf the existing binning is sufficient for our purpose we may simply sum over the rows of the tables making up the bin values:
###Code
da_bin_sum = da.bins.sum()
###Output
_____no_output_____
###Markdown
Here we used the special `bins` property of our data array to apply an operation to each of the bins in `da`.Once we have summed the bin values there are no more bins, and the `bins` property is `None`:
###Code
print(da_bin_sum.bins)
###Output
_____no_output_____
###Markdown
We can visualize the result, which dense (histogram) data.Make sure to compare the representations with those obtained above for binned data (`da`):
###Code
sc.to_html(da_bin_sum)
sc.show(da_bin_sum)
###Output
_____no_output_____
###Markdown
We can use `da_bins_sum` to, e.g., plot the total counts per spectrum by summing over the `tof` dimension:
###Code
da_bin_sum.sum('tof').plot(marker='.')
###Output
_____no_output_____
###Markdown
Note:In this case there is just a single time-of-flight bin so we could have used `da_bin_sum['tof', 0]` instead of `da_bin_sum.sum('tof')`. Option 2: HistogrammingFor performance and memory reasons binned data often contains the minimum number of bins that is "necessary" for a given purpose.In this case `da` only contains a single time-of-flight bin (essentially just as information what the lower and upper bounds are in which we can expect events), which is not practical for downstream applications such as data analysis or plotting.Instead of simply summing over all events in a bin we may thus *histogram* data.Note that scipp makes the distinction between binning data (preserving all events individually) and histogramming data (summing all events that fall inside a bin).For simplicity we consider only a single spectrum:
###Code
spec = da['spectrum', 8050]
sc.show(spec)
sc.table(spec.values[0]['event',:5])
###Output
_____no_output_____
###Markdown
Note the chained slicing above:We access the zeroth event list and select the first 5 slices along the `event` dimension (which is the only dimension, since the event list is a 1-D table).We use one of the [scipp functions for creating a new variable](https://scipp.github.io/reference/api.htmlcreation-functions) to define the desired bin edge of our histogram.In this case we use `sc.linspace` (another useful option is `sc.geomspace`):
###Code
tof_edges = sc.linspace(dim='tof', start=18.0, stop=17000, num=100, unit='us')
sc.histogram(spec, bins=tof_edges).plot()
###Output
_____no_output_____
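###Markdown
The text above mentions `sc.geomspace` as another creation function. Here is a sketch with logarithmically spaced time-of-flight edges, assuming `sc.geomspace` is available in the installed scipp version; the edge values are chosen only for illustration:
###Code
tof_edges_log = sc.geomspace(dim='tof', start=18.0, stop=17000, num=100, unit='us')
sc.histogram(spec, bins=tof_edges_log).plot()
###Output
_____no_output_____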
###Markdown
ExerciseChange `tof_edges` to control what is plotted:- Change the number of bins, e.g., to a finer resolution.- Change the start and stop of the edges to plot only a smaller time-of-flight region. Solution
###Code
tof_edges = sc.linspace(dim='tof', start=2000.0, stop=15000, num=200, unit='us')
sc.histogram(spec, bins=tof_edges).plot()
###Output
_____no_output_____
###Markdown
Masking event data — Binning by existing parametersWhile quickly converting binned (event) data into dense (histogrammed) data has its applications, we may typically want to work with binned data as long as possible.We have learned in [Working with masks](1_working-with-masks.ipynb) how to mask dense, histogrammed, data.How can we mask a time-of-flight region, e.g., to mask a prompt-pulse, in *event mode*?Let us sum all spectra and define a dummy data array (named `prompt`) to illustrate the objective:
###Code
spec = da['spectrum', 8050].copy()
# Start and stop are fictitious and this prompt pulse is not actually present in the raw data from SNS
prompt_start = 4000.0 * sc.Unit('us')
prompt_stop = 5000.0 * sc.Unit('us')
prompt_edges = sc.concatenate(prompt_start, prompt_stop, 'tof')
prompt_tof_edges = sc.sort(sc.concatenate(spec.coords['tof'], prompt_edges, 'tof'), 'tof')
prompt = sc.DataArray(data=sc.Variable(dims=['tof'], values=[0,11000,0], unit='counts'),
coords={'tof':prompt_tof_edges})
tof_edges = sc.linspace(dim='tof', start=0.0, stop=17000, num=1701, unit='us')
spec_hist = sc.histogram(da.bins.concatenate('spectrum'), bins=tof_edges)
sc.plot({'spec':spec_hist, 'prompt':prompt})
###Output
_____no_output_____
###Markdown
Masking eventsWe now want to mask out the prompt-pulse, i.e., the peak with exponential falloff inside the region where `prompt` in the figure above is nonzero.We can do so by checking (for every event) whether the time-of-flight is within the region covered by the prompt-pulse.As above, we first consider only a single spectrum.The result can be stored as a new mask:
###Code
spec1 = da['spectrum', 8050].copy() # copy since we do some modifications below
event_tof = spec.bins.coords['tof']
spec1.bins.masks['prompt_pulse'] = (prompt_start <= event_tof) & (event_tof < prompt_stop)
sc.plot({'original': da['spectrum', 8050], 'prompt_mask': spec1})
###Output
_____no_output_____
###Markdown
Here we have used the `bins` property once more.Take note of the following:- We can access coords "inside" the bins using the `coords` dict provided by the `bins` property. This provides access to "columns" of the event tables held by the bins such as `spec.bins.coords['tof']`.- We can do arithmetic (or other) computation with these "columns", in this case comparing with scalar (non-binned) variables.- New "columns" can be added, in this case we add a new mask column via `spec.bins.masks`.**Definitions**:For a data array `da` we refer to- coordinates such as `da.coords['tof']` as *bin coordinate* and- coordinates such as `da.bins.coords['tof']` as *event coordinate*.The table representation (`sc.table`) and `sc.show` illustrate this process of masking:
###Code
sc.table(spec1.values[0]['event',:5])
sc.show(spec1)
###Output
_____no_output_____
###Markdown
We have added a new column to the event table, defining *for every event* whether it is masked or not.The generally recommended solution is different though, since masking individual events has unnecessary overhead and forces masks to be applied when converting to dense data.A better approach is described in the next section. ExerciseTo get familiar with the `bins` property, try the following:- Compute the neutron velocities for all events in `spec1`. Note: The total flight path length can be computed using `scn.Ltotal(spec1, scatter=True)`.- Add the neutron velocity as a new event coordinate.- Use, e.g., `sc.show` to verify that the coordinate has been added as expected.- Use `del` to remove the event coordinate and verify that the coordinate was indeed removed. Solution
###Code
spec1.bins.coords['v'] = scn.Ltotal(spec1, scatter=True) / spec1.bins.coords['tof']
sc.show(spec1)
sc.to_html(spec1.values[0])
del spec1.bins.coords['v']
sc.to_html(spec1.values[0])
###Output
_____no_output_____
###Markdown
Masking binsRather than masking individual events, let us simply "sort" the events depending on whether they fall below, inside, or above the region of the prompt-pulse. We do not actually need to fully sort the events but rather use a *binning* procedure, using `sc.bin`:
###Code
spec2 = da['spectrum', 8050].copy() # copy since we do some modifications below
spec2 = sc.bin(spec2, edges=[prompt_tof_edges])
prompt_mask = sc.array(dims=spec2.dims, values=[False, True, False])
spec2.masks['prompt_pulse'] = prompt_mask
sc.show(spec2)
###Output
_____no_output_____
###Markdown
Compare this to the graphical representation for `spec1` above and to the figure of the prompt pulse.The start and stop of the prompt pulse are used to cut the total time-of-flight interval into three sections (bins).The center bin is masked:
###Code
spec2.masks['prompt_pulse']
###Output
_____no_output_____
###Markdown
We can also plot the two options of the masked spectrum for comparison.Note how in the second, recommended, option the mask is preserved in the plot, whereas in the first case the histogramming performed by `plot` necessarily has to apply the mask:
###Code
sc.plot({'event-mask':spec1, 'bin-mask (1.1x)':spec2*sc.scalar(1.1)})
###Output
_____no_output_____
###Markdown
Bonus questionWhy did we not use a fine binning, e.g., with 1000 time-of-flight bins and mask a range of bins, similar to how it would be done for histogrammed (non-event) data? Solution - This would add a lot of overhead from handling many bins. If our instrument had 1.000.000 pixels we would have 1.000.000.000 bins, which comes with significant memory overhead but first and foremost compute overhead. Binning by new parametersAfter having understood how to mask a prompt-pulse we continue by considering the proton-charge log:
###Code
proton_charge = da.attrs['proton_charge'].value
proton_charge.plot(marker='.')
###Output
_____no_output_____
###Markdown
To mask a time-of-flight range, we have used `sc.bin` to adapt the binning along the *existing* `tof` dimension.`sc.bin` can also be used to introduce binning along *new* dimension.We define our desired pulse-time edges:
###Code
tmin = proton_charge.coords['time'].min()
tmax = proton_charge.coords['time'].max()
pulse_time = sc.arange(dim='pulse_time', start=tmin.value, stop=tmax.value, step=(tmax.value - tmin.value) / 10)
pulse_time
###Output
_____no_output_____
###Markdown
As above we work with a single spectrum for now and then use `sc.bin`.The result has two dimensions, `tof` and `pulse_time`:
###Code
spec = da['spectrum', 8050]
binned_spec = sc.bin(spec, edges=[pulse_time])
binned_spec
###Output
_____no_output_____
###Markdown
We can plot the binned spectrum, resulting in a 2-D plot:
###Code
binned_spec.plot()
###Output
_____no_output_____
###Markdown
In this case the plot is not very readable since there are so few events in the spectrum that we resolve individual events as tiny dots.Note that this is independent of the bin sizes since `plot()` resamples dynamically and can thus resolve events within bins.We can use the `resolution` option to obtain a more useful plot:
###Code
binned_spec.plot(resolution={'x':100, 'y':20})
###Output
_____no_output_____
###Markdown
We may also ignore the `tof` dimension if we are simply interested in the time-evolution of the counts in this spectrum.We can do so by concatenating all bins along the `tof` dimension as follows:
###Code
binned_spec.bins.concatenate('tof').plot()
###Output
_____no_output_____
###Markdown
ExerciseUsing the same approach as for masking a time-of-flight bin in the previous section, mask the time period starting at about 16:30 where the proton charge is very low.- Define appropriate edges for pulse time (use as few bins as possible, not the 10 pulse-time bins from the binning example above).- Use `sc.bin` to apply the new binning. Make sure to combine this with the time-of-flight binning to mask the prompt pulse.- Set an appropriate bin mask.- Plot the result to confirm that the mask is set and defined as expected.Note:In practice masking bad pulses would usually be done on a pulse-by-pulse basis.This requires a slightly more complex approach and is beyond the scope of this introduction.Hint:Pulse time is stored as `datetime64`.A simple way to create these is using an offset from a know start time such as `tmin`:
###Code
tmin + sc.to_unit(sc.scalar(7, unit='min'), 'ns')
###Output
_____no_output_____
###Markdown
Solution
###Code
pulse_time_edges = tmin + sc.to_unit(
sc.array(dims=['pulse_time'],
values=[0, 43, 55, 92], unit='min'), 'ns')
# Alternative solution to creating edges:
# t1 = tmin + sc.to_unit(43 * sc.Unit('min'), 'ns')
# t2 = tmin + sc.to_unit(55 * sc.Unit('min'), 'ns')
# pulse_time_edges = sc.array(dims=['pulse_time'], unit='ns', values=[tmin.value, t1.value, t2.value, tmax.value])
pulse_time_mask = sc.array(dims=['pulse_time'], values=[False, True, False])
binned_spec = sc.bin(spec, edges=[prompt_tof_edges, pulse_time_edges])
binned_spec.masks['prompt_pulse'] = prompt_mask
binned_spec.masks['bad_beam'] = pulse_time_mask
binned_spec.plot(resolution={'x':100, 'y':20})
sc.show(binned_spec)
###Output
_____no_output_____
###Markdown
Higher dimensions and cutsFor purposes of plotting, fitting, or data analysis in general we will typically need to convert binned data to dense data.We discussed the basic options for this in [From binned data to dense data](From-binned-data-to-dense-data).In particular when dealing with higher-dimensional data these options may not be sufficient.For example we may want to:- Create a 1-D or 2-D cut through a 3-D volume.- Create a 2-D cut but integrate over an interval in the remaining dimension.- Create multi-dimensional cuts that are not aligned with existing binning.All of the above can be achieved using tools we have already used, but not all of them are covered in this tutorial. ExerciseAdapt the above code used for binning and masking the *single spectrum* (`spec`) along `pulse_time` and `tof` to the *full data array* (`da`).Hint: This is trivial. Solution
###Code
binned_da = sc.bin(da, edges=[prompt_tof_edges, pulse_time_edges])
binned_da.masks['prompt_pulse'] = prompt_mask
binned_da.masks['bad_beam'] = pulse_time_mask
binned_da.transpose().plot()
###Output
_____no_output_____
###Markdown
Removing binned dimensionsLet us now convert our data to $d$-spacing (interplanar lattice spacing).This works just like for dense data:
###Code
import scippneutron as scn
da_dspacing = scn.convert(binned_da, 'tof', 'dspacing', scatter=True)
# `dspacing` is now a multi-dimensional coordinate, which makes plotting inconvenient, so we adapt the binning
dspacing = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=3.0, num=4)
da_dspacing = sc.bin(da_dspacing, edges=[dspacing])
da_dspacing
da_dspacing.transpose().plot()
###Output
_____no_output_____
###Markdown
After conversion to $d$-spacing we may want to combine data from all spectra.For dense data we would have used `da_dspacing.sum('spectrum')`.For binned data this is not possible (since the events list in every spectrum have different lengths).Instead we need to *concatenate* the lists from bins across spectra:
###Code
da_dspacing_total = da_dspacing.bins.concatenate('spectrum')
da_dspacing_total.plot()
###Output
_____no_output_____
###Markdown
If we zoom in we can now understand the reason for the blurry diffraction lines observed at the very start of this tutorial:The lines are not horizontal, i.e., $d$-spacing appears to depend on the pulse time.Note that the effect depicted here was added artifically for the purpose of this tutorial and is likely much larger than what could be observed in practice from changes in sample environment parameters such as (pressure or temperature).Our data has three pulse-time bins (setup earlier for masking an area with low proton charge).We can thus use slicing to compare the diffraction pattern at different times (used as a stand-in for a changing sample-environment parameter):
###Code
tmp = da_dspacing_total
lines = {}
lines['total'] = tmp.bins.concatenate('pulse_time')
for i in 0,2:
lines[f'interval{i}'] = tmp['pulse_time', i]
sc.plot(lines, resolution=1000, norm='log')
###Output
_____no_output_____
###Markdown
How can we extract thinner `pulse_time` slices?We can use `sc.bin` with finer pulse-time binning, such that individual slices are thinner.Instead of manually setting up a `dict` of slices we can use `sc.collapse`:
###Code
pulse_time = sc.arange(dim='pulse_time', start=tmin.value, stop=tmax.value, step=(tmax.value - tmin.value) / 10)
split = sc.bin(da_dspacing_total, edges=[pulse_time])
sc.plot(sc.collapse(split, keep='dspacing'), resolution=5000)
###Output
_____no_output_____
###Markdown
Making a 1-D cutInstead of summing over all spectra we may want to group spectra based on a $2\theta$ interval they fall into.We first compute $2\theta$ for every spectrum and store it as a new coordinate:
###Code
da_dspacing.coords['two_theta'] = scn.two_theta(da_dspacing)
###Output
_____no_output_____
###Markdown
We can then define the boundaries we want to use for our "cut".Here we use just a single bin in each of the three dimensions:
###Code
two_theta_cut = sc.linspace(dim='two_theta', unit='rad', start=0.4, stop=1.0, num=2)
# Do not use many bins, fewer is better for performance
dspacing_cut = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=2)
pulse_time_cut = tmin + sc.to_unit(sc.array(dims=['pulse_time'], unit='s', values=[0,10*60]), 'ns')
cut = sc.bin(da_dspacing, edges=[two_theta_cut, dspacing_cut, pulse_time_cut], erase=['spectrum'])
cut
###Output
_____no_output_____
###Markdown
We can then use slicing (to remove unwanted dimensions) and `sc.histogram` to get the desired binning:
###Code
cut = cut['pulse_time', 0] # squeeze pulse time (dim of length 1)
cut = cut['two_theta', 0] # squeeze two_theta (dim of length 1)
dspacing_edges = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=5000)
cut = sc.histogram(cut, bins=dspacing_edges)
cut.plot()
###Output
_____no_output_____
###Markdown
Exercise- Adjust the start and stop values in the cut edges above to adjust the "thickness" of the cut.- Adjust the edges used for histogramming. Making a 2-D cut Exercise- Adapt the code of the 1-D cut to create 100 `two_theta` bins.- Make a 2-D plot (with `dspacing` and `two_theta` on the axes). Solution
###Code
two_theta_cut = sc.linspace(dim='two_theta', unit='rad', start=0.4, stop=1.0, num=101)
dspacing_cut = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=2)
pulse_time_cut = tmin + sc.to_unit(sc.array(dims=['pulse_time'], unit='s', values=[0,10*60]), 'ns')
cut = sc.bin(da_dspacing, edges=[two_theta_cut, dspacing_cut, pulse_time_cut], erase=['spectrum'])
cut = cut['pulse_time', 0] # squeeze pulse time (dim of length 1)
dspacing_edges = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=5000)
cut = sc.histogram(cut, bins=dspacing_edges)
cut.plot()
###Output
_____no_output_____ |
PythonBootCamp.ipynb | ###Markdown
**range** Data Type It represents a sequence of numbers. The range data type can be represented in three forms: 1. Form-1: range(n) It generates numbers from 0 to (n-1).
###Code
#Example:
numbers = range(5)
for num in numbers:
print(num)
###Output
0
1
2
3
4
###Markdown
2. Form-2: range(m, n) It generates numbers from m to (n-1)
###Code
#form example
numbers = range(100, 200)
for num in numbers:
print(num)
###Output
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
###Markdown
3. Form-3: range(m, n, inc) It generates numbers from m to (n-1) incrementing by inc.
###Code
#example:
numbers = range(100, 200, 5)# 100, 105, 110, 115................195
for num in numbers:
print(num)
###Output
100
105
110
115
120
125
130
135
140
145
150
155
160
165
170
175
180
185
190
195
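###Markdown
The inc value can also be negative, in which case the numbers are generated counting downwards. A small sketch:
###Code
# Counting down from 10 to 2 in steps of -2
for num in range(10, 0, -2):
    print(num)
###Output
_____no_output_____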
###Markdown
**Note:** The elements present in the range data type are not modifiable, which means it is immutable. Till now we have only generated range objects; now we will try to access the elements present in a range, but how? We can access the elements of a range data type by using an index.
###Code
#example:
numbers = range(1, 5)
print(numbers)  # the range covers 1, 2, 3, 4 (the stop value 5 is excluded)
print(numbers[0])
print(numbers[3])
numbers[100]  # raises IndexError: this index is out of range
###Output
_____no_output_____
###Markdown
We cannot access values whose index is out of range.
###Code
#updating the range data type
numbers[0]
numbers[0] = 101  # raises TypeError: assignment is not supported, i.e. range is immutable
###Output
_____no_output_____
###Markdown
We can use the range data type for creating lists of values.
###Code
numbers = range(10)
for num in numbers:
print(num)
num_list = list(range(10))
print(num_list)
###Output
0
1
2
3
4
5
6
7
8
9
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
**set Data Type:** ---------------------- A data type which represents a group of values **without duplicates**, where order is not important, is known as the **set** data type. A data type is called a set type if: 1. insertion order is not preserved 2. duplicates are not allowed 3. heterogeneous objects are allowed 4. it is a mutable collection, which means we can change its contents 5. it is growable in nature 6. it doesn't support indexing.
###Code
#example
#creation of set data type
set1 = {99, 3+7j,78, 56.9, 'ten'} # set is created by using {} curly braces.
# trying to access set1 using its index
# each of the lines below raises TypeError, because set objects do not support indexing
set1[3]
set1[2]
set1[4]
###Output
_____no_output_____
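###Markdown
To illustrate the "duplicates are not allowed" rule listed above, here is a small sketch; the values are arbitrary:
###Code
# Duplicate values are silently dropped when the set is created
values = {10, 20, 20, 30, 10}
print(values)       # only the unique elements remain
print(len(values))  # 3
###Output
_____no_output_____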
###Markdown
set is growable in nature, hence we can increase or decrease the size of a set using the functions: 1. add() 2. remove()
###Code
#example:
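# A minimal sketch of growing and shrinking a set with add() and remove();
# the element values are arbitrary
colors = {'red', 'green'}
colors.add('blue')      # the set grows
print(colors)
colors.remove('green')  # the set shrinks
print(colors)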
###Output
_____no_output_____
###Markdown
**frozenset Data Type** -------------------------- It is exactly the same as the set data type except that it is immutable, i.e. we cannot use the add() or remove() functions on a frozenset.
###Code
#example
set2 = {100, 45, 'ten', 2.3, 10101}
set2.add(234)
set2
fixed_set2 = frozenset(set2)
fixed_set2.remove(234)  # raises AttributeError: frozenset has no remove() method
fixed_set2.add(10)      # raises AttributeError: frozenset has no add() method
###Output
_____no_output_____
###Markdown
**dict Data Type** -------------------- A data type which is used to represent a group of values as key-value pairs is known as the **dict** data type.
###Code
#example:
stu_roll = { 101:'oti', 102:'mark', 103:'kery'}
stu_roll
###Output
_____no_output_____
###Markdown
We can create an empty dictionary as follows:
###Code
emp = {}
###Output
_____no_output_____
###Markdown
We can also add key-value pairs:
###Code
emp[100] = 'oti'
emp
emp[200] = 'mark'
emp
emp
emp[300] = 'kery'
emp
###Output
_____no_output_____
###Markdown
**Note** --------- dict is mutable, which means we can change the value stored under a key, and we can also add new key-value pairs.
###Code
emp[100]='asha'
emp
emp['asha'] = 100
emp
###Output
_____no_output_____
###Markdown
**Note** --------- **Use of different data types** ---------------------------------- 1. To represent binary information like images, video files etc., we use the bytes and bytearray data types. 2. In Python 3 we can represent long values using the int type only. 3. In Python a char data type is not available; we represent char values using the str data type only. **None Data Type** --------------------- None means nothing, or no value associated. In some cases, if a value is not available, then the None type is used to handle such a situation.
###Code
#example:
def run():
    name = 'oti'  # no return statement, so run() returns None
print(run())
###Output
None
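###Markdown
A small sketch of the bytes and bytearray types mentioned in the note above; the values are arbitrary:
###Code
# bytes is immutable, bytearray is mutable
raw = bytes([10, 20, 30])
print(raw)
buf = bytearray([10, 20, 30])
buf[0] = 99   # allowed, because bytearray can be modified in place
print(buf)
# raw[0] = 99 would raise TypeError because bytes is immutable
###Output
_____no_output_____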
###Markdown
**Constants** ------------- The concept of a 'constant' does not exist in Python. But to treat something as a constant we use a standard convention: when we don't want a value to be changed, we write its name using only uppercase characters.
###Code
# To show that we don't want to change a particular value, we name it using only uppercase characters
FIXED_PRICE = 45000
print(FIXED_PRICE)
# Note that Python does not enforce this convention: the value can still be changed
FIXED_PRICE = 50000
print(FIXED_PRICE)
###Output
_____no_output_____ |
notebook/2018-03-13_testis1_force_analysis.ipynb | ###Markdown
Testis Replicate 1 scRNA-Seq This is the Seurat analysis for testis replicate 1 after force-calling 3k cells with Cell Ranger.
###Code
options(repr.plot.width=10, repr.plot.height=10)
DATA_DIR <- '../output/testis1_force/outs/filtered_gene_bc_matrices/dm6.16'
OUTDIR <- '../output/testis1_scRNAseq_3k'
REFERENCES_DIR <- Sys.getenv('REFERENCES_DIR')
NAME <- 'TestisForce3k'
# Get list of mitochondiral genes
fbgn2chrom <- read.table('../output/fbgn2chrom.tsv', header=T)
fbgn2symbol <- read.csv(file.path(REFERENCES_DIR, 'dmel/r6-16/fb_annotation/dmel_r6-16.fb_annotation'), header=T, sep = '\t')[, c('gene_symbol', 'primary_FBgn')]
mito <- fbgn2chrom[fbgn2chrom$chrom == 'chrM', 'FBgn']
source('../lib/seurat.R')
library(Seurat)
library(dplyr)
library(Matrix)
# Load the 10x dataset
tenX.data <- Read10X(data.dir=DATA_DIR)
# Initialize the Seurat object with the raw (non-normalized data).
# Keep all genes expressed in >= 3 cells (~0.1% of the data). Keep all cells with at least 200 detected genes
sobj <- CreateSeuratObject(raw.data = tenX.data, min.cells = 3, min.genes = 200, project = NAME)
nCells <- dim(sobj@meta.data)[1]
# calculate the percent genes on chrom M
mask <- row.names(sobj@raw.data) %in% mito
percent.mito <- Matrix::colSums(sobj@raw.data[mask, ]) / Matrix::colSums(sobj@raw.data) * 100
sobj <- AddMetaData(object = sobj, metadata = percent.mito, col.name = "percent_mito")
VlnPlot(object = sobj, features.plot = c('nGene', 'nUMI', 'percent_mito'), nCol = 2)
par(mfrow = c(2, 1))
GenePlot(object = sobj, gene1 = 'nUMI', gene2 = 'percent_mito')
GenePlot(object = sobj, gene1 = 'nUMI', gene2 = 'nGene')
# Filter Genes based on low and high gene counts
sobj <- FilterCells(object=sobj, subset.names=c("nGene"), low.thresholds=c(200), high.thresholds=c(6000))
sobj <- NormalizeData(object = sobj, normalization.method = "LogNormalize", scale.factor = 1e4)
sobj <- FindVariableGenes(object = sobj, mean.function = ExpMean, dispersion.function = LogVMR,
x.low.cutoff = 0.01,
x.high.cutoff = 2.8,
y.cutoff = 0.5,
y.high.cutoff = Inf
)
length(x = sobj@var.genes)
sobj <- ScaleData(object = sobj, vars.to.regress = c("nUMI"), display.progress = F)
### Perform linear dimensional reduction
sobj <- RunPCA(object = sobj, pc.genes = sobj@var.genes, do.print = FALSE, pcs.print = 1:5, genes.print = 5, pcs.compute = 100)
PrintPCA(object = sobj, pcs.print = 1:5, genes.print = 5, use.full = FALSE)
VizPCA(object = sobj, pcs.use = 1:3)
PCAPlot(object = sobj, dim.1 = 1, dim.2 = 2)
# ProjectPCA scores each gene in the dataset (including genes not included in the PCA) based on their correlation
# with the calculated components. Though we don't use this further here, it can be used to identify markers that
# are strongly correlated with cellular heterogeneity, but may not have passed through variable gene selection.
# The results of the projected PCA can be explored by setting use.full=T in the functions above
sobj <- ProjectPCA(object = sobj, do.print = F)
PCElbowPlot(object = sobj, num.pc=30)
sobj <- JackStraw(object = sobj, num.replicate = 100, do.print = FALSE, num.pc = 30)
JackStrawPlot(object = sobj, nCol = 6, PCs = 1:30)
sobj <- FindClusters(
object = sobj,
reduction.type = "pca",
dims.use = 1:30,
resolution = c(0.4, 0.6, 1.0, 1.2, 1.4),
print.output = 0,
save.SNN = TRUE,
)
PrintFindClustersParams(object = sobj)
### Run Non-linear dimensional reduction (tSNE)
sobj <- RunTSNE(object = sobj, dims.use = 1:30, do.fast = TRUE)
TSNEPlot(object = sobj, group.by='res.1.2')
dump_seurat(object = sobj, dir = OUTDIR)
# Save cluster info
params <- c(0.4, 0.6, 1.0, 1.2, 1.4)
for (i in params) {
name <- paste0('res.', i)
fname <- paste0('biomarkers_', i, '.tsv')
sobj <- SetAllIdent(sobj, id = name)
markers <- FindAllMarkers(object = sobj, only.pos = TRUE, min.pct = 0.25, thresh.use = 0.25, print.bar = FALSE)
markers = merge(fbgn2symbol, markers, by.x='primary_FBgn', by.y='gene', all.y=T)
save_biomarkers(markers = markers, dir = OUTDIR, fname = fname)
}
###Output
_____no_output_____ |
examples/ITK_Example10_PointSetAndMaskTransformation.ipynb | ###Markdown
10. Point set and Mask Transformation Transformix can be used to transform point sets and mask images as well. Masks can be seen as images, so the registration of masks is done in a similar fashion to the registration of a moving image. Point sets, however, need not be transformed with the backwards-mapping protocol, but can instead be transformed with the regular forward transformation (fixed -> moving). Transforming point sets can be used to get the regions of interest (ROI) in the moving image, based on the ROI of the fixed image. Elastix
###Code
# First two import are currently necessary to run ITKElastix on MacOs
from itk import itkElastixRegistrationMethodPython
from itk import itkTransformixFilterPython
import itk
# Import Images
fixed_image = itk.imread('data/CT_3D_lung_fixed.mha', itk.F)
moving_image = itk.imread('data/CT_3D_lung_moving.mha', itk.F)
# Import Default Parameter Map
parameter_object = itk.ParameterObject.New()
parameter_map_rigid = parameter_object.GetDefaultParameterMap('rigid')
parameter_object.AddParameterMap(parameter_map_rigid)
###Output
_____no_output_____
###Markdown
Registration with the registration function...
###Code
# Call registration function
result_image, result_transform_parameters = itk.elastix_registration_method(
fixed_image, moving_image,
parameter_object=parameter_object)
###Output
_____no_output_____
###Markdown
Point Set Transformation Transformation can either be done in one line with the transformix function...
###Code
# --- Error: procedural interface of transformix filter not working in 3D yet ---
# result_point_set = itk.transformix_filter(
# moving_image=moving_image,
# fixed_point_set_file_name='data/CT_3D_lung_fixed_point_set.txt',
# transform_parameter_object=result_transform_parameters,
# output_directory = '/exampleoutput')
###Output
_____no_output_____
###Markdown
... or by initiating a Transformix image filter object.
###Code
# Load Transformix Object
# Explicit call to 3D Transformix method is currently necessary
# for 3D (point set) transformations, this might change in future versions.
transformix_object = itk.itkTransformixFilterPython.itkTransformixFilterIF3.New()
transformix_object.SetFixedPointSetFileName('data/CT_3D_lung_fixed_point_set.txt')
transformix_object.SetTransformParameterObject(result_transform_parameters)
transformix_object.SetLogToConsole(True)
transformix_object.SetOutputDirectory('exampleoutput/')
# Update object (required)
transformix_object.UpdateLargestPossibleRegion()
# Results of Transformation
# -- Bug? -- Output is saved as .txt file in outputdirectory.
# The .GetOutput() function outputs an empty image.
output_transformix = transformix_object.GetOutput()
###Output
_____no_output_____
###Markdown
Mask Transformation
###Code
# Import Mask
moving_mask = itk.imread('data/CT_3D_lung_moving_mask.mha', itk.F)
###Output
_____no_output_____
###Markdown
Transformation of a mask is similar to the transformation of an image and can either be done in one line with the transformix function...
###Code
# --- Error: procedural interface of transformix filter not working in 3D yet ---
# result_mask = itk.transformix_filter(
# moving_image=moving_mask,
# transform_parameter_object=result_transform_parameters)
###Output
_____no_output_____
###Markdown
... or by initiating a Transformix image filter object.
###Code
# Load Transformix Object
# Explicit call to 3D Transformix method is currently necessary
# for 3D (point set) transformations, this might change in future versions.
transformix_object = itk.itkTransformixFilterPython.itkTransformixFilterIF3.New()
transformix_object.SetMovingImage(moving_mask)
transformix_object.SetTransformParameterObject(result_transform_parameters)
# Update object (required)
transformix_object.UpdateLargestPossibleRegion()
# Results of Transformation
result_moving_mask = transformix_object.GetOutput()
###Output
_____no_output_____ |
2_minimal_example_optimization_task.ipynb | ###Markdown
Minimal Example Numerical Unit Commitment Optimization The energy system modeled in this notebook consists of three units used to cover an electric load. Only one timestep (i.e. one load value) is considered. The units are described by a minimal power, a maximum power and a linear production cost. Tasks Complete the code by taking the following steps: 1. Analyze and try to understand what happens in the code now. To do this, execute the cells one by one. 2. Define binary variables (integers with bounds [0,1]) to model the operating state. 3. Add the constant part of the fuel cost to the objective function so that the cost is modeled as C = a + b * P. The constant costs for the three units are: 7.9 / 7.85 / 9.56 EUR/h. 4. Define the restriction of minimal power (P > Pmin * z). 5. Compare with the solution file (3_minimal_example_optimization_solution.ipynb). A sketch of one possible way to model steps 2-4 is included at the end of the code cell below.
###Code
# Import of the library PuLP
from pulp import LpProblem, LpMinimize, LpVariable, LpInteger, LpStatus, value
# Problem definition
prob = LpProblem("Unit Commitment Sandbox Problem",LpMinimize)
# Definition of decision variables and their bounds
P1=LpVariable("P1", 0, 100)
P2=LpVariable("P2", 0, 400)
P3=LpVariable("P3", 0, 1000)
''' Todo Task2:
To model the minimum power condition you need binary variables:
If the block is offline (z=0) the power must be zero, if it is
online (z=1) it cant be below minimum power limit.
Definition of Integer variables in Pulp:
z1 = LpVariable("z1", 0, 1, LpInteger)
'''
# Defintion of objective function that is to be minimized (cost function)
prob += 60*P1 + 25*P2 + 35*P3, "Linear Production Cost"
# Constraint that the sum of production must be the load (here 500 MW)
prob += P1 + P2 + P3 == 500, "Load constraint"
# Print what we have defined
print(prob)
# Solve the problem
prob.solve()
print("Solution status:", LpStatus[prob.status], "\n")
# Go through the variables and print name and solution value
for v in prob.variables():
print(v.name, '=', v.varValue)
print(f'\nOBJECTIVE VALUE\n{prob.objective.value():.0f} EUR')
# Use this cell to try out stuff!
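# --- Hedged sketch of Tasks 2-4 (one possible approach, not the official solution file) ---
# The minimum powers below (10 / 50 / 100 MW) are assumptions for illustration only;
# use the values given in the exercise. If you uncomment these lines, call prob.solve()
# again afterwards (PuLP will overwrite the earlier objective).
#
# z1 = LpVariable("z1", 0, 1, LpInteger)
# z2 = LpVariable("z2", 0, 1, LpInteger)
# z3 = LpVariable("z3", 0, 1, LpInteger)
#
# # Objective with constant cost part: C = a*z + b*P (constant cost only paid when online)
# prob += 60*P1 + 25*P2 + 35*P3 + 7.9*z1 + 7.85*z2 + 9.56*z3, "Production cost with constant part"
#
# # Minimum/maximum power coupled to the on/off state: Pmin*z <= P <= Pmax*z
# prob += P1 >= 10*z1;  prob += P1 <= 100*z1
# prob += P2 >= 50*z2;  prob += P2 <= 400*z2
# prob += P3 >= 100*z3; prob += P3 <= 1000*z3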
###Output
_____no_output_____ |
Kinect_project/Kate library/Data parsing and feature creation.ipynb | ###Markdown
How to use this notebook?* In the last cell, enter the xml files of the data you wish to use in the game_files list.* Select an aggregation function (e.g. agregate_feature_vectors) for the line "vect = agregation_function(game_files)".* Export the data to .csv with the line "export_feature_vectors(vect, "name_of_the_file.csv")". First steps Imports & Imports of data
###Code
import xml.etree.ElementTree as et
import numpy as np
def norm(vect):
sum = 0
for el in vect:
sum += el**2
return np.sqrt(sum)
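# norm(vect) is equivalent to np.linalg.norm(vect) for a 1-D vector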
###Output
_____no_output_____
###Markdown
Useful functions for extracting data from parsed xml file * The function `read_time` returns the time in second in float format from the parsed timestamp
###Code
def read_time(timestamp):
index1 = timestamp.find('T')
index2 = timestamp.find('+')
return float(timestamp[index1+4:index1+6]) * 60 + float(timestamp[index1+7:index2])
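# Illustrative example (hypothetical timestamp): read_time("2018-03-13T12:34:56.789+01:00")
# returns 34*60 + 56.789 = 2096.789 -- the hour field is ignored, only minutes and seconds count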
###Output
_____no_output_____
###Markdown
* The function `parse_root` returns the root element of the parsed XML tree for the file named 'game_file'
###Code
def parse_root(game_file):
root = et.parse(game_file).getroot()
return root
###Output
_____no_output_____
###Markdown
* The function `hand_positions` extracts the positions of the right hand along with the time corresponding to those positions. It returns an array of shape [(x, y, t)] (length number_of_positions, with 3-element tuples representing (x, y, t)).
###Code
def hand_positions(game_file):
hand_positions = parse_root(game_file)[1]
array = []
for vector2 in hand_positions:
#print(vector2[0][0].text)
x = float(vector2[0][0].text)
y = float(vector2[0][1].text)
t = read_time(vector2[1].text)
array.append((x, y, t))
return array
root = parse_root('C:/Users/menoci/Desktop/Studies/autisme et ML/Kinect Project/Code+Data/xml_data/julien_main_droite_1.xml')
print(root[2][1].text)
if root[2][1].text == 'false':
print('Oui')
###Output
Julien
###Markdown
* The function `bubble_pop` extracts the time of each game event corresponding to the pop of a bubble by the player. It returns an array of shape [t] (length number_of_bubbles_popped).
###Code
def bubble_pop(game_file):
bubble_logs = parse_root(game_file)[0]
return_array = []
for event in bubble_logs :
if event[0].text == "gather" :
t = read_time(event[2].text)
return_array.append(t)
return return_array
###Output
_____no_output_____
###Markdown
* This last function `bubble_pop_clean` returns the bubble-gathering times, dropping the last wave if some of its data is missing
###Code
def bubble_pop_clean(game_file):
bubble_pop_time = bubble_pop(game_file)
i = len(bubble_pop_time)%5
if i > 0:
return bubble_pop_time[:-(5-i)]
else:
return bubble_pop_time
###Output
_____no_output_____
###Markdown
Extraction of sub-trajectories & featuresThe function `sub_trajectories` returns an array of shape [[*[(x,y,t),(x,y,t),...]*, for each bubble in wave], for each wave]. To access all positions and time of the trajectory between the *i* and *i+1* bubble of the *n* wave : *sub_trajectories[n-1][i]*.
###Code
def sub_trajectories(game_file):
hand_position = hand_positions(game_file)
bubble_pop_time = bubble_pop_clean(game_file)
th = hand_position[0][2]
sub_traj=[]
nb_waves = len(bubble_pop_time)//5
i=0 #loop count for waves
k=0 #loop count for hand positions
while i<nb_waves :
sub_traj.append([])
j=0 #loop count for bubbles
while j<5:
sub_traj[i].append([])
t = bubble_pop_time[j+5*i] #the time the bubble was gathered
while th < t:
sub_traj[i][j].append(hand_position[k]) #appends the position of the hand and the corresponding time
k+=1
th = hand_position[k][2]
j+=1
i+=1
return np.array(sub_traj)
###Output
_____no_output_____
###Markdown
We define some functions to extract interesting features from trajectories. We first look for static features: * `length` returns the length of the trajectory *traj** `barycenter` returns the barycenter of the trajectory *traj* in shape (x,y)* `location` returns the average distance of each point to the barycenter of the trajectory *traj** `location_max` returns the maximum distance between a point of the trajectory and the barycenter of this trajectory* `orientation` returns the angle between the line through *(x1, y1)* and *(x2, y2)* and the horizontal axis (in degrees)* `orientation_feat` returns the preceding feature for the first two points and the last two points of the trajectory *traj** `nb_turns` returns the number of turns in the trajectory *traj*, where a turn is detected if the orientation between two consecutive couples of points varies by more than *limit_angle*
###Code
def length(traj):
l = 0
for i in range(len(traj)-1):
l += np.sqrt((traj[i+1][0]-traj[i][0])**2 + (traj[i+1][1]-traj[i][1])**2)
return l
def barycenter(traj):
x = 0
y = 0
n = len(traj)
for i in range(n):
x += traj[i][0]
y += traj[i][1]
if n>0:
return (x/n, y/n)
else:
return (0,0)
def location(traj):
loc_avg = 0
n = len(traj)
p = barycenter(traj)
for i in range(n):
loc_avg += np.sqrt((traj[i][0] - p[0])**2 + (traj[i][1] - p[1])**2)
return loc_avg/n
def location_max(traj):
n = len(traj)
p = barycenter(traj)
if n>0:
l_max = np.max([np.sqrt((traj[i][0] - p[0])**2 + (traj[i][1] - p[1])**2) for i in range(n)])
return l_max
else:
return 0
def orientation(x1, x2 , y1, y2):
if x2 == x1 and y2>=y1:
return 90
elif x2 == x1 and y2<=y1:
return -90
else:
return np.arctan((y2 - y1)/(x2 - x1)) * (180/np.pi) #in degree
def orientation_feat(traj):
n = len(traj)
if n>1:
ts = orientation(traj[0][0], traj[1][0], traj[0][1], traj[1][1])
te = orientation(traj[-2][0], traj[-1][0], traj[-2][1], traj[-1][1])
return (ts, te)
else:
return (0,0)
def nb_turns(traj, limit_angle):
nb_turns = 0
n=len(traj)
for i in range(n-2):
if(np.abs(orientation(traj[i][0], traj[i+1][0], traj[i][1], traj[i+1][1]) - orientation(traj[i+1][0], traj[i+2][0], traj[i+1][1], traj[i+2][1])) > limit_angle):
nb_turns += 1
return nb_turns
###Output
_____no_output_____
###Markdown
We then define dynamic features:* `velocity` returns the list of the point-to-point velocities over the whole trajectory *traj** `velocity_avg` returns the average velocity over the trajectory *traj** `velocity_max` returns the greatest velocity over the trajectory *traj** `velocity_min` returns the lowest velocity over the trajectory *traj** `nb_vmin` returns the number of local minima of the velocity* `nb_vmax` returns the number of local maxima of the velocity
###Code
def velocity(traj):
velocity = []
for i in range(len(traj) - 1):
v = norm(np.array(traj)[i+1][:2] - np.array(traj)[i][:2]) / (np.array(traj)[i+1][2] - np.array(traj)[i][2])
velocity.append(v)
return np.array(velocity)
def velocity_avg(traj):
v_avg = 0
n = len(traj)
if n>1:
v_list = velocity(traj)
for i in range(n-1):
v_avg += v_list[i]
return v_avg/(n-1)
else:
return 0
def velocity_max(traj):
if len(traj)>1:
return np.max(velocity(traj))
else:
return 0
def velocity_min(traj):
if len(traj)>1:
return np.min(velocity(traj))
else:
return 0
def nb_vmin(traj):
nb = 0
v_list = velocity(traj)
for i in range(1,len(v_list)-1):
if v_list[i]<v_list[i+1] and v_list[i]<v_list[i-1]:
nb += 1
return nb
def nb_vmax(traj):
nb = 0
v_list = velocity(traj)
for i in range(1,len(v_list)-1):
if v_list[i]>v_list[i+1] and v_list[i]>v_list[i-1]:
nb += 1
return nb
###Output
_____no_output_____
###Markdown
The function `feature_vector` extracts features from the trajectory given as argument, *traj = [(x, y, t)]*
###Code
def bucketize_nb_turns(nb_turn):
if nb_turn == 0:
return [1, 0, 0, 0]
elif nb_turn == 1:
return [0, 1, 0, 0]
elif nb_turn < 4: # 2 ou 3
return [0, 0, 1, 0]
else:
return [0, 0, 0, 1] #4 ou plus
def bucketize_nb_v(nb_v):
if nb_v < 2:
return [1, 0, 0, 0]
elif nb_v < 4: # 2 ou 3
return [0, 1, 0, 0]
elif nb_v < 6: # 4 ou 5
return [0, 0, 1, 0]
else:
return [0, 0, 0, 1] # 6 ou plus
def feature_vector(traj, playerID, game_area, limit_angle=0.25):
diag = np.sqrt(game_area[0]**2 + game_area[1]**2)
feature_vector = [playerID]
feature_vector.append(length(traj)/diag) #1
bc = barycenter(traj)
feature_vector.append(np.float64(0.5 + bc[0] / game_area[0])) #2 between 0 and 1
feature_vector.append(np.float64(0.5 + bc[1] / game_area[1])) #3
if location_max(traj) == 0:
feature_vector.append(np.float64(0)) #4
else:
feature_vector.append(location(traj)/location_max(traj)) #4
angles = 0.5 + np.array(orientation_feat(traj)) / 180 # between 0 and 1
feature_vector.append(angles[0]) #first orientation of traj #5
feature_vector.append(angles[1]) #last orientation of traj #6
feature_vector.append(nb_turns(traj, limit_angle)) #7
feature_vector.append(velocity_avg(traj)) #8
feature_vector.append(velocity_min(traj)) #9
feature_vector.append(velocity_max(traj)) #10
feature_vector.append(nb_vmin(traj)) #11
feature_vector.append(nb_vmax(traj)) #12
return feature_vector
# Example call (requires a valid xml path; takes the first sub-trajectory of the first wave):
# feature_vector(sub_trajectories(<path_to_xml>)[0][0], 1, [21, 10])
def feature_vector_bucket(traj, playerID, game_area, limit_angle=0.25):
diag = np.sqrt(game_area[0]**2 + game_area[1]**2)
feature_vector = [playerID]
feature_vector.append(length(traj)/diag)
bc = barycenter(traj)
feature_vector.append(np.float64(0.5 + bc[0] / game_area[0])) # between 0 and 1
feature_vector.append(np.float64(0.5 + bc[1] / game_area[1]))
if location_max(traj) == 0:
feature_vector.append(np.float64(0))
else:
feature_vector.append(location(traj)/location_max(traj))
angles = 0.5 + np.array(orientation_feat(traj)) / 180 # between 0 and 1
feature_vector.append(angles[0]) #first orientation of traj
feature_vector.append(angles[1]) #last orientation of traj
bucket = bucketize_nb_turns(nb_turns(traj, limit_angle))
for i in bucket:
feature_vector.append(i)
v_max = velocity_max(traj)
if v_max == 0:
feature_vector.append(0)
feature_vector.append(0)
feature_vector.append(0)
else:
feature_vector.append(velocity_avg(traj) / v_max)
feature_vector.append(velocity_min(traj) / v_max)
feature_vector.append(v_max)
bucket_min = bucketize_nb_v(nb_vmin(traj))
bucket_max = bucketize_nb_v(nb_vmax(traj))
for i in bucket_min:
feature_vector.append(i)
for j in bucket_max:
feature_vector.append(j)
return feature_vector
###Output
_____no_output_____
###Markdown
The function `feature_vectors_game` creates the feature vectors over all the trajectories between the gathering of two bubbles of one game. The returned array contains one 5x13 array per wave (five feature vectors of 13 features each, corresponding to the five trajectories of that wave).
###Code
def feature_vectors_game(game_file, game_area = [21,10]):
trajectories = np.array(sub_trajectories(game_file))
nb_waves = len(trajectories)
playerID = int(parse_root(game_file)[2][0].text)
    vectors = []  # one list of feature vectors per wave (avoids a trailing empty wave)
for i in range(0,nb_waves):
vectors.append([])
for traj in trajectories[i]:
vectors[i].append(feature_vector(traj, playerID, game_area))
return np.array(vectors)
###Output
_____no_output_____
###Markdown
simple_features_generator gets a list with the paths to be considered and returns (and saves) a list of features and a list of outputs. It does almost the same thing as feature_vectors_game_concat, but produces one row (and one label) per sub-trajectory instead of one concatenated row per wave.
###Code
def simple_features_generator(game_list):
features=[]
labels=[]
for file in game_list:
for layer1 in feature_vectors_game(file):
for layer2 in layer1:
features.append(layer2[1:])
labels.append(layer2[0])
np.savetxt('features.csv', features, delimiter=",")
np.savetxt('output.csv', labels, delimiter=",")
return features, labels
###Output
_____no_output_____
###Markdown
The following functions provide different shapes for the feature vector. This way of creating the feature vector could be improved by using tensorflow and its feature vectors, instead of creating it "by hand".* "concat" means all features are concatenated into one numpy vector for each sample* "bucket" means it uses the bucketized version of the feature vector (for nb_turns, nb_vmin, nb_vmax)* "hands" means it uses the hand used to play as label instead of the player's ID
###Code
def feature_vectors_game_concat(game_file, game_area = [21,10]):
trajectories = np.array(sub_trajectories(game_file))
nb_waves = len(trajectories)
playerID = int(parse_root(game_file)[2][0].text)
vectors = []
for i in range(nb_waves):
vectors.append([])
for traj in trajectories[i]:
vectors[i] = vectors[i] + list(feature_vector(traj, playerID, game_area)[1:])
vectors[i].append(playerID)
return np.array(vectors)
def feature_vectors_bucket_game_concat(game_file, game_area = [21,10]):
trajectories = np.array(sub_trajectories(game_file))
nb_waves = len(trajectories)
playerID = int(parse_root(game_file)[2][0].text)
vectors = []
for i in range(nb_waves):
vectors.append([])
for traj in trajectories[i]:
vectors[i] = vectors[i] + list(feature_vector_bucket(traj, playerID, game_area)[1:])
vectors[i].append(playerID)
return np.array(vectors)
def feature_vectors_bucket_game_concat_hands(game_file, game_area = [21,10]):
trajectories = np.array(sub_trajectories(game_file))
nb_waves = len(trajectories)
if parse_root(game_file)[2][2].text == 'false':
useRightHand = 0
else:
useRightHand = 1
vectors = []
for i in range(nb_waves):
vectors.append([])
for traj in trajectories[i]:
vectors[i] = vectors[i] + list(feature_vector_bucket(traj, useRightHand, game_area)[1:])
vectors[i].append(useRightHand)
return np.array(vectors)
###Output
_____no_output_____
###Markdown
Finally we provide a function to get the aggregation of all feature vectors over multiple game files, where *game_files* is the list of the names (string type) of all the game files to be considered.
###Code
def agregate_feature_vectors(game_files):
vectors = []
for file in game_files:
vectors = vectors + list(feature_vectors_game_concat(file))
return np.array(vectors)
def agregate_feature_vectors_bucket(game_files):
vectors = []
for file in game_files:
vectors = vectors + list(feature_vectors_bucket_game_concat(file))
return np.array(vectors)
def agregate_feature_vectors_bucket_hands(game_files):
vectors = []
for file in game_files:
vectors = vectors + list(feature_vectors_bucket_game_concat_hands(file))
return np.array(vectors)
###Output
_____no_output_____
###Markdown
Export of the final data
###Code
def export_feature_vectors(vectors, name):
np.savetxt(name, vectors, delimiter=",")
relative_path = 'C:/Users/menoci/Desktop/Studies/autisme et ML/Code+Data/xml_data/'
print(relative_path+'abc.xml')
relative_path = 'C:/Users/menoci/Desktop/Studies/autisme et ML/Kinect Project/Code+Data/xml_data/'
game_files=[relative_path+'paul_main_droite_1.xml',
relative_path+'paul_main_droite_2.xml',
relative_path+'paul_main_droite_3.xml',
relative_path+'paul_main_droite_4.xml',
relative_path+'paul_main_gauche_1.xml',
relative_path+'paul_main_gauche_2.xml',
relative_path+'paul_main_gauche_3.xml',
relative_path+'paul_main_gauche_4.xml',
relative_path+'sarah_main_droite_1.xml',
relative_path+'sarah_main_droite_2.xml',
relative_path+'sarah_main_droite_3.xml',
relative_path+'sarah_main_droite_4.xml',
relative_path+'sarah_main_gauche_1.xml',
relative_path+'sarah_main_gauche_2.xml',
relative_path+'sarah_main_gauche_3.xml',
relative_path+'sarah_main_gauche_4.xml',
relative_path+'julien_main_droite_1.xml',
relative_path+'julien_main_droite_2.xml',
relative_path+'julien_main_droite_3.xml',
relative_path+'julien_main_droite_4.xml',
relative_path+'julien_main_gauche_1.xml',
relative_path+'julien_main_gauche_2.xml',
relative_path+'julien_main_gauche_3.xml',
relative_path+'julien_main_gauche_4.xml']
vect = agregate_feature_vectors_bucket(game_files)
export_feature_vectors(vect, "kate_data.csv")
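# Hedged usage sketch: reading the exported csv back for training
# (the last column of each row is the label, i.e. the playerID appended by the *_concat functions):
# data = np.loadtxt("kate_data.csv", delimiter=",")
# X, y = data[:, :-1], data[:, -1]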
game_files=[relative_path+'sarah_main_droite_1.xml',
relative_path+'sarah_main_droite_2.xml',
relative_path+'sarah_main_droite_3.xml',
relative_path+'sarah_main_droite_4.xml',
relative_path+'sarah_main_gauche_1.xml',
relative_path+'sarah_main_gauche_2.xml',
relative_path+'sarah_main_gauche_3.xml',
relative_path+'sarah_main_gauche_4.xml',
relative_path+'julien_main_droite_1.xml',
relative_path+'julien_main_droite_2.xml',
relative_path+'julien_main_droite_3.xml',
relative_path+'julien_main_droite_4.xml',
relative_path+'julien_main_gauche_1.xml',
relative_path+'julien_main_gauche_2.xml',
relative_path+'julien_main_gauche_3.xml',
relative_path+'julien_main_gauche_4.xml']
#simple_features_generator(game_files)
###Output
_____no_output_____ |
notebook/onnx-pipeline.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ONNX Pipeline This repository shows how to deploy and use the ONNX pipeline with dockers, including model conversion, input generation and performance testing. Prerequisites Pull the dockers from Azure. It should take several minutes.
###Code
# !sh build.sh # For Linux
!build.sh # For Windows
###Output
_____no_output_____
###Markdown
Install the onnxpipeline SDK
###Code
import sys
sys.path.append('../utils')
import onnxpipeline
# Initiate ONNX pipeline with local directory "model"
pipeline = onnxpipeline.Pipeline('model')
# onnx
#pipeline = onnxpipeline.Pipeline('onnx')
# tensorflow
#pipeline = onnxpipeline.Pipeline('mnist/model')
# pytorch
#pipeline = onnxpipeline.Pipeline('pytorch')
# cntk
#pipeline = onnxpipeline.Pipeline('cntk')
# keras
#pipeline = onnxpipeline.Pipeline('KerasToONNX')
# sklearn
#pipeline = onnxpipeline.Pipeline('sklearn')
# caffe
#pipeline = onnxpipeline.Pipeline('caffe')
# current directory
#pipeline = onnxpipeline.Pipeline()
# test mxnet fail
#pipeline = onnxpipeline.Pipeline('mxnet')
###Output
_____no_output_____
###Markdown
Run parameters(1) local_directory: string Required. The path of the local directory that will be mounted into the docker. All operations will be executed from this path.(2) mount_path: string Optional. The path where the local_directory will be mounted in the docker. Default is "/mnt/model".(3) print_logs: boolean Optional. Whether to print the logs from the docker. Default is True. (4) convert_directory: string Optional. The directory path for the converted model. Default is test/. (5) convert_name: string Optional. The filename for the converted model. Default is model.onnx. Config information for ONNX pipeline
###Code
pipeline.config()
###Output
-----------config----------------
Container information: <docker.client.DockerClient object at 0x000001E5C77FD6D8>
Local directory path for volume: E:\onnx-pipeline\notebook/model
Volume directory path in dockers: /mnt/model
Result path: result
Converted directory path: test
Converted model filename: model.onnx
Converted model path: test/model.onnx
Print logs in the docker: True
###Markdown
Convert model to ONNX This image is used to convert models from major model frameworks to ONNX. Supported frameworks are caffe, cntk, coreml, keras, libsvm, mxnet, scikit-learn, tensorflow and pytorch. You can run the docker image with customized parameters.
###Code
model = pipeline.convert_model(model='model.onnx', model_type='onnx')
# test tensorflow
#model = pipeline.convert_model(model_type='tensorflow')
# test pytorch
#model = pipeline.convert_model(model_type='pytorch', model='saved_model.pb', model_input_shapes='(1,3,224,224)')
# test cntk
#model = pipeline.convert_model(model_type='cntk', model='ResNet50_ImageNet_Caffe.model')
# test keras
#model = pipeline.convert_model(model_type='keras', model='keras_Average_ImageNet.keras')
# test sklearn
#model = pipeline.convert_model(model_type='scikit-learn', model='sklearn_svc.joblib', initial_types=("float_input", "FloatTensorType([1,4])"))
# test caffe
#model = pipeline.convert_model(model_type='caffe', model='bvlc_alexnet.caffemodel')
# test mxnet
#model = pipeline.convert_model(model_type='mxnet', model='resnet.json', model_params='resnet.params', model_input_shapes='(1,3,224,224)')
###Output
-------------
Model Conversion
Input model is already ONNX model. Skipping conversion.
-------------
MODEL INPUT GENERATION(if needed)
Input.pb already exists. Skipping dummy input generation.
-------------
MODEL CORRECTNESS VERIFICATION
Check the ONNX model for validity
The ONNX model is valid.
The original model is already onnx. Skipping correctness test.
-------------
MODEL CONVERSION SUMMARY (.json file generated at /mnt/model/test/output.json )
{'conversion_status': 'SUCCESS',
'correctness_verified': 'SKIPPED',
'error_message': '',
'input_folder': '/mnt/model/test/test_data_set_0',
'output_onnx_path': '/mnt/model/test/model.onnx'}
###Markdown
Run parameters(1) model: string Required. The path of the model that needs to be converted. **IMPORTANT** Only supports a model path which is under the mounting directory (set while initializing Pipeline()).(2) output_onnx_path: string Required. The path to store the converted onnx model. Should end with ".onnx", e.g. output.onnx.(3) model_type: string Required. The name of the original model framework. Available types are caffe, cntk, coreml, keras, libsvm, mxnet, scikit-learn, tensorflow and pytorch.(4) model_inputs_names: string (tensorflow) Optional. The model's input names. Required for tensorflow frozen models and checkpoints.(5) model_outputs_names: string (tensorflow) Optional. The model's output names. Required for tensorflow frozen models and checkpoints.(6) model_params: string (mxnet) Optional. The params of the model if needed.(7) model_input_shapes: list of tuple (pytorch, mxnet) Optional. List of tuples. The input shape(s) of the model. Each dimension separated by ','.(8) target_opset: string Optional. Specifies the opset for ONNX, for example, 7 for ONNX 1.2, and 8 for ONNX 1.3. Defaults to '7'. (9) caffe_model_prototxt: string (caffe) Optional. The filename of the deploy prototxt for the caffe model. (10) initial_types: tuple (string, string) (scikit-learn) Optional. A tuple consisting of two strings: the first is the data type and the second is the size of the tensor type, e.g. ('float_input', 'FloatTensorType([1,4])').(11) input_json: string Optional. Use a JSON file as input parameters. **IMPORTANT** Only supports a path which is under the mounting directory (set while initializing Pipeline()). (12) model_inputs_names: string (tensorflow) Optional. (13) model_outputs_names: string (tensorflow) Optional. Performance test tool You can run perf_tuning using the command python perf_tuning.py [Your model path] [Output path on the docker]. You can use the same arguments as for the onnxruntime_perf_test tool, e.g. -m for mode, -e to specify the execution provider etc. By default it will try all providers available.
###Code
result = pipeline.perf_tuning(model=model)
#result = pipeline.perf_tuning() # is ok, too
###Output
2019-08-30 18:21:00.936705200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:00.936788600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:00.936795600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:00.936800300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 1
Total time cost:0.0617255
Total iterations:20
Average time cost:3.08627 ms
2019-08-30 18:21:01.180742700 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.180851500 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.180858800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.180863100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 1
Total time cost:0.0253154
Total iterations:20
Average time cost:1.26577 ms
2019-08-30 18:21:01.398435500 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.398513000 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.398533400 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.398537800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0255808
Total iterations:20
Average time cost:1.27904 ms
2019-08-30 18:21:01.705239300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.705295800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.705302800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.705307200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.058074
Total iterations:20
Average time cost:2.9037 ms
2019-08-30 18:21:01.955712300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.955766600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.955773400 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.955777700 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0248691
Total iterations:20
Average time cost:1.24345 ms
2019-08-30 18:21:02.209768100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.209824700 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.209846600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.209850800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.060125
Total iterations:20
Average time cost:3.00625 ms
2019-08-30 18:21:02.458939400 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.458994400 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.459016100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.459020300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0262749
Total iterations:20
Average time cost:1.31375 ms
2019-08-30 18:21:02.701319700 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.701414200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.701434200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.701438300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0529148
Total iterations:20
Average time cost:2.64574 ms
2019-08-30 18:21:02.941667600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.941721200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.941728100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.941732600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0249799
Total iterations:20
Average time cost:1.24899 ms
2019-08-30 18:21:03.158992100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:03.159077300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:03.159098000 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:03.159102300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0293959
Total iterations:20
Average time cost:1.4698 ms
###Markdown
Run parameters(1) model: string Required. The path of the model to be benchmarked. (2) result: string Optional. The path of the result. (3) config: string (choices=["Debug", "MinSizeRel", "Release", "RelWithDebInfo"]) Optional. Configuration to run. Default is "RelWithDebInfo". (4) test_mode: string (choices=["duration", "times"]) Optional. Specifies the test mode. Value could be 'duration' or 'times'. Default is "times".(5) execution_provider: string (choices=["cpu", "cuda", "mkldnn"]) Optional. Specifies the provider: 'cpu','cuda','mkldnn'. Default is ''. (6) repeated_times: integer Optional. Specifies the repeated times if running in 'times' test mode. Default:20. (7) duration_times: integer Optional. Specifies the seconds to run for 'duration' mode. Default:10. (8) parallel: boolean Optional. Use parallel executor, default (without -x): sequential executor. (9) intra_op_num_threads: integer Optional. Sets the number of threads used to parallelize the execution within nodes. A value of 0 means ORT will pick a default. Must be >=0. (10) inter_op_num_threads: integer Optional. Sets the number of threads used to parallelize the execution of the graph (across nodes). A value of 0 means ORT will pick a default. Must be >=0. (11) top_n: integer Optional. Show percentiles for top n runs. Default:5. (12) runtime: boolean Optional. Use this boolean flag to enable GPU if you have one. (13) input_json: string Optional. Use a JSON file as input parameters.
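###Markdown
For illustration, the documented parameters above can be combined in a single call; the parameter values below are only assumed examples, not taken from the original run:
###Code
# Hedged example: restrict the run to the CPU provider and repeat each measurement 50 times
# result = pipeline.perf_tuning(model=model, execution_provider='cpu',
#                               test_mode='times', repeated_times=50)
###Output
_____no_output_____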
###Code
pipeline.print_performance(result)
#pipeline.print_result() # is ok, too
###Output
mkldnn_openmp_OMP_WAIT_POLICY_active 0.7167509999999999 ms
mkldnn_openmp 0.97501 ms
mkldnn_openmp_1_threads_OMP_WAIT_POLICY_passive 1.2467949999999999 ms
mkldnn 0.903808 ms
mkldnn_1_threads 0.75656 ms
mkldnn_parallel_1_threads 5.793315000000001 ms
mklml_1_threads 1.280607 ms
mklml 1.27904 ms
mklml_parallel_1_threads 3.0862749999999997 ms
cpu_openmp_1_threads_OMP_WAIT_POLICY_active 1.2905449999999998 ms
cpu_openmp_1_threads 1.2489949999999999 ms
cpu_openmp_1_threads_OMP_WAIT_POLICY_passive 1.3137450000000002 ms
cpu_1_threads 1.451031 ms
cpu 1.304 ms
cpu_parallel_1_threads 3.044285 ms
ngraph_parallel_1_threads error
ngraph_1_threads error
ngraph error
###Markdown
Result Visualization After the performance test, there will be a directory for the results. This library uses Pandas.read_json to visualize the JSON files (orient is changeable). "latency.json" contains the raw result data ordered by average time. Use .latency to obtain the original latency JSON; use .profiling to obtain the original top 5 profiling JSON.
###Code
r = pipeline.get_result(result)
#r.latency
#r.profiling
print(result)
###Output
E:\onnx-pipeline\notebook/model/result
###Markdown
Print latency.json Shows the top 5 performers by default. Use the parameter "top" to visualize more results.
###Code
r.prints()
###Output
_____no_output_____
###Markdown
Print profiling.json Only provides the profiling JSON for the top 5 performers; select one by giving its index. The file name is profile_[name].json. (1) index: integer Required. The index into the top 5 profiling files. (2) top: integer The number of top ops to show.
###Code
r.print_profiling(index=4, top=5)
###Output
_____no_output_____
###Markdown
Get code snippets
###Code
print(r.get_code(ep='cpu'))
###Output
import onnxruntime as ort
so = rt.SessionOptions()
so.set_graph_optimization_level(99)
so.enable_sequential_execution = False
so.session_thread_pool_size(0)
session = rt.Session("/mnt/model/test/model.onnx", so)
###Markdown
netron
###Code
# only workable for notebook in the local server
import netron
netron.start("model/test/model.onnx", browse=False) # 'model.onnx'
from IPython.display import IFrame
IFrame('http://localhost:8080', width="100%", height=1000)
netron.stop()
###Output
Stopping http://localhost:8080
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ONNX Pipeline This repository shows how to deploy and use ONNX pipeline with dockers including convert model, generate input and performance test. Prerequisites Pull dockers from Azure. It should take several minutes.
###Code
# !sh build.sh # For Linux
!build.sh # For Windows
###Output
_____no_output_____
###Markdown
Install the onnxpipeline SDK
###Code
import sys
sys.path.append('../utils')
import onnxpipeline
# Initiate ONNX pipeline with local directory "model"
pipeline = onnxpipeline.Pipeline('model')
# onnx
#pipeline = onnxpipeline.Pipeline('onnx')
# tensorflow
#pipeline = onnxpipeline.Pipeline('mnist/model')
# pytorch
#pipeline = onnxpipeline.Pipeline('pytorch')
# cntk
#pipeline = onnxpipeline.Pipeline('cntk')
# keras
#pipeline = onnxpipeline.Pipeline('KerasToONNX')
# sklearn
#pipeline = onnxpipeline.Pipeline('sklearn')
# caffe
#pipeline = onnxpipeline.Pipeline('caffe')
# current directory
#pipeline = onnxpipeline.Pipeline()
# test mxnet fail
#pipeline = onnxpipeline.Pipeline('mxnet')
###Output
_____no_output_____
###Markdown
Run parameters(1) local_directory: string Required. The path of local directory where would be mounted to the docker. All operations will be executed from this path.(2) mount_path: string Optional. The path where the local_directory will be mounted in the docker. Default is "/mnt/model".(3) print_logs: boolean Optional. Whether print the logs from the docker. Default is True. (4) convert_directory: string Optional. The directory path for converting model. Default is test/. (5) convert_name: string Optional. The model name for converting model. Default is model.onnx. Config information for ONNX pipeline
###Code
pipeline.config()
###Output
-----------config----------------
Container information: <docker.client.DockerClient object at 0x000001E5C77FD6D8>
Local directory path for volume: E:\onnx-pipeline\notebook/model
Volume directory path in dockers: /mnt/model
Result path: result
Converted directory path: test
Converted model filename: model.onnx
Converted model path: test/model.onnx
Print logs in the docker: True
###Markdown
Convert model to ONNX This image is used to convert model from major model frameworks to onnx. Supported frameworks are - caffe, cntk, coreml, keras, libsvm, mxnet, scikit-learn, tensorflow and pytorch.You can run the docker image with customized parameters.
###Code
model = pipeline.convert_model(model='model.onnx', model_type='onnx')
# test tensorflow
#model = pipeline.convert_model(model_type='tensorflow')
# test pytorch
#model = pipeline.convert_model(model_type='pytorch', model='saved_model.pb', model_input_shapes='(1,3,224,224)')
# test cntk
#model = pipeline.convert_model(model_type='cntk', model='ResNet50_ImageNet_Caffe.model')
# test keras
#model = pipeline.convert_model(model_type='keras', model='keras_Average_ImageNet.keras')
# test sklearn
#model = pipeline.convert_model(model_type='scikit-learn', model='sklearn_svc.joblib', initial_types=("float_input", "FloatTensorType([1,4])"))
# test caffe
#model = pipeline.convert_model(model_type='caffe', model='bvlc_alexnet.caffemodel')
# test mxnet
#model = pipeline.convert_model(model_type='mxnet', model='resnet.json', model_params='resnet.params', model_input_shapes='(1,3,224,224)')
###Output
-------------
Model Conversion
Input model is already ONNX model. Skipping conversion.
-------------
MODEL INPUT GENERATION(if needed)
Input.pb already exists. Skipping dummy input generation.
-------------
MODEL CORRECTNESS VERIFICATION
Check the ONNX model for validity
The ONNX model is valid.
The original model is already onnx. Skipping correctness test.
-------------
MODEL CONVERSION SUMMARY (.json file generated at /mnt/model/test/output.json )
{'conversion_status': 'SUCCESS',
'correctness_verified': 'SKIPPED',
'error_message': '',
'input_folder': '/mnt/model/test/test_data_set_0',
'output_onnx_path': '/mnt/model/test/model.onnx'}
###Markdown
Run parameters(1) model: string Required. The path of the model that needs to be converted. **IMPORTANT** Only supports a model path which is under the mounting directory (set while initializing Pipeline()).(2) output_onnx_path: string Required. The path to store the converted onnx model. Should end with ".onnx", e.g. output.onnx.(3) model_type: string Required. The name of the original model framework. Available types are caffe, cntk, coreml, keras, libsvm, mxnet, scikit-learn, tensorflow and pytorch.(4) model_inputs_names: string (tensorflow) Optional. The model's input names. Required for tensorflow frozen models and checkpoints.(5) model_outputs_names: string (tensorflow) Optional. The model's output names. Required for tensorflow frozen models and checkpoints.(6) model_params: string (mxnet) Optional. The params of the model if needed.(7) model_input_shapes: list of tuple (pytorch, mxnet) Optional. List of tuples. The input shape(s) of the model. Each dimension separated by ','.(8) target_opset: string Optional. Specifies the opset for ONNX, for example, 7 for ONNX 1.2, and 8 for ONNX 1.3. Defaults to '7'. (9) caffe_model_prototxt: string (caffe) Optional. The filename of the deploy prototxt for the caffe model. (10) initial_types: tuple (string, string) (scikit-learn) Optional. A tuple consisting of two strings: the first is the data type and the second is the size of the tensor type, e.g. ('float_input', 'FloatTensorType([1,4])').(11) input_json: string Optional. Use a JSON file as input parameters. **IMPORTANT** Only supports a path which is under the mounting directory (set while initializing Pipeline()). (12) model_inputs_names: string (tensorflow) Optional. (13) model_outputs_names: string (tensorflow) Optional. Performance test tool You can run perf_tuning using the command python perf_tuning.py [Your model path] [Output path on the docker]. You can use the same arguments as for the onnxruntime_perf_test tool, e.g. -m for mode, -e to specify the execution provider etc. By default it will try all providers available.
###Code
result = pipeline.perf_tuning(model=model)
#result = pipeline.perf_tuning() # is ok, too
###Output
2019-08-30 18:21:00.936705200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:00.936788600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:00.936795600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:00.936800300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 1
Total time cost:0.0617255
Total iterations:20
Average time cost:3.08627 ms
2019-08-30 18:21:01.180742700 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.180851500 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.180858800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.180863100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 1
Total time cost:0.0253154
Total iterations:20
Average time cost:1.26577 ms
2019-08-30 18:21:01.398435500 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.398513000 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.398533400 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.398537800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0255808
Total iterations:20
Average time cost:1.27904 ms
2019-08-30 18:21:01.705239300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.705295800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.705302800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.705307200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.058074
Total iterations:20
Average time cost:2.9037 ms
2019-08-30 18:21:01.955712300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.955766600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.955773400 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:01.955777700 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0248691
Total iterations:20
Average time cost:1.24345 ms
2019-08-30 18:21:02.209768100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.209824700 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.209846600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.209850800 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.060125
Total iterations:20
Average time cost:3.00625 ms
2019-08-30 18:21:02.458939400 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.458994400 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.459016100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.459020300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0262749
Total iterations:20
Average time cost:1.31375 ms
2019-08-30 18:21:02.701319700 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.701414200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.701434200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.701438300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0529148
Total iterations:20
Average time cost:2.64574 ms
2019-08-30 18:21:02.941667600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.941721200 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.941728100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:02.941732600 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0249799
Total iterations:20
Average time cost:1.24899 ms
2019-08-30 18:21:03.158992100 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:03.159077300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/hidden_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:03.159098000 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm1/cell_init'. It is not used by any node and should be removed from the model.
2019-08-30 18:21:03.159102300 [W:onnxruntime:Default, graph.cc:2263 CleanUnusedInitializers] Removing initializer 'lstm2/hidden_init'. It is not used by any node and should be removed from the model.
Setting thread pool size to 0
Total time cost:0.0293959
Total iterations:20
Average time cost:1.4698 ms
###Markdown
Run parameters
(1) model: string. Required. The path of the model to run.
(2) result: string. Optional. The path of the result.
(3) config: string (choices=["Debug", "MinSizeRel", "Release", "RelWithDebInfo"]). Optional. Configuration to run. Default is "RelWithDebInfo".
(4) mode: string (choices=["duration", "times"]). Optional. Specifies the test mode, either 'duration' or 'times'. Default is "times".
(5) execution_provider: string (choices=["cpu", "cuda", "mkldnn"]). Optional. Specifies the provider: 'cpu', 'cuda', or 'mkldnn'. Default is ''.
(6) repeated_times: integer. Optional. Number of repetitions when running in 'times' test mode. Default: 20.
(7) duration_times: integer. Optional. Seconds to run for 'duration' mode. Default: 10.
(8) parallel: boolean. Optional. Use the parallel executor; the default (without -x) is the sequential executor.
(9) threadpool_size: integer. Optional. Thread pool size if the parallel executor (--parallel) is enabled. Default is the number of cores.
(10) num_threads: integer. Optional. OMP_NUM_THREADS value.
(11) top_n: integer. Optional. Show percentiles for the top n runs. Default: 5.
(12) runtime: boolean. Optional. Use this boolean flag to enable GPU if you have one.
(13) input_json: string. Optional. Use a JSON file as input parameters.
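###Markdown
As a quick illustration of option (13), here is a minimal sketch of writing such an input JSON from Python. The field names below simply mirror the parameter names listed above; they are assumptions, and the tool's actual schema may differ.
###Code
# Hypothetical input JSON mirroring the parameters listed above (field names are assumptions).
import json

params = {
    "model": "model/test/model.onnx",   # required: path of the model to run
    "result": "model/result",           # optional: path for the results
    "config": "RelWithDebInfo",
    "mode": "times",
    "execution_provider": "cpu",
    "repeated_times": 20,
    "top_n": 5,
    "parallel": False,
}

with open("perf_params.json", "w") as f:
    json.dump(params, f, indent=2)
###Output
_____no_output_____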
###Code
pipeline.print_performance(result)
#pipeline.print_result() # is ok, too
###Output
mkldnn_openmp_OMP_WAIT_POLICY_active 0.7167509999999999 ms
mkldnn_openmp 0.97501 ms
mkldnn_openmp_1_threads_OMP_WAIT_POLICY_passive 1.2467949999999999 ms
mkldnn 0.903808 ms
mkldnn_1_threads 0.75656 ms
mkldnn_parallel_1_threads 5.793315000000001 ms
mklml_1_threads 1.280607 ms
mklml 1.27904 ms
mklml_parallel_1_threads 3.0862749999999997 ms
cpu_openmp_1_threads_OMP_WAIT_POLICY_active 1.2905449999999998 ms
cpu_openmp_1_threads 1.2489949999999999 ms
cpu_openmp_1_threads_OMP_WAIT_POLICY_passive 1.3137450000000002 ms
cpu_1_threads 1.451031 ms
cpu 1.304 ms
cpu_parallel_1_threads 3.044285 ms
ngraph_parallel_1_threads error
ngraph_1_threads error
ngraph error
###Markdown
Result Visualization After the performance test, there will be a directory of results. This library uses pandas.read_json to visualize the JSON files. (The orient is changeable.) "latency.json" contains the raw result data ordered by average time. Use .latency to obtain the original latency JSON; use .profiling to obtain the original top-5 profiling JSON.
###Code
r = pipeline.get_result(result)
#r.latency
#r.profiling
print(result)
###Output
E:\onnx-pipeline\notebook/model/result
###Markdown
Print latency.json By default, results are shown for the top 5 by performance. Use the parameter "top" to visualize more results.
###Code
r.prints()
###Output
_____no_output_____
###Markdown
Print profiling.json Profiling JSON is only provided for the top 5 results by performance; select one by giving its index. The file name is profile_[name].json. (1) index: integer. Required. The index among the top 5 profiling files. (2) top: integer. The number of top ops to show.
###Code
r.print_profiling(index=4, top=5)
###Output
_____no_output_____
###Markdown
Get code snippets
###Code
print(r.get_code(ep='cpu'))
###Output
import onnxruntime as ort
so = rt.SessionOptions()
so.set_graph_optimization_level(3)
so.enable_sequential_execution = False
so.session_thread_pool_size(0)
session = rt.Session("/mnt/model/test/model.onnx", so)
###Markdown
netron
###Code
# only workable for notebook in the local server
import netron
netron.start("model/test/model.onnx", browse=False) # 'model.onnx'
from IPython.display import IFrame
IFrame('http://localhost:8080', width="100%", height=1000)
netron.stop()
###Output
Stopping http://localhost:8080
|
Data Science Resources/Jose portila - ML/05-Seaborn/08-Seaborn-Exercise-Solutions.ipynb | ###Markdown
______Copyright by Pierian Data Inc.For more information, visit us at www.pieriandata.com Seaborn Exercises - Solutions ImportsRun the cell below to import the libraries
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
The Data DATA SOURCE: https://www.kaggle.com/rikdifos/credit-card-approval-prediction

Data Information: Credit score cards are a common risk control method in the financial industry. It uses personal information and data submitted by credit card applicants to predict the probability of future defaults and credit card borrowings. The bank is able to decide whether to issue a credit card to the applicant. Credit scores can objectively quantify the magnitude of risk.

Feature Information: application_record.csv

| Feature name | Explanation | Remarks |
| --- | --- | --- |
| ID | Client number | |
| CODE_GENDER | Gender | |
| FLAG_OWN_CAR | Is there a car | |
| FLAG_OWN_REALTY | Is there a property | |
| CNT_CHILDREN | Number of children | |
| AMT_INCOME_TOTAL | Annual income | |
| NAME_INCOME_TYPE | Income category | |
| NAME_EDUCATION_TYPE | Education level | |
| NAME_FAMILY_STATUS | Marital status | |
| NAME_HOUSING_TYPE | Way of living | |
| DAYS_BIRTH | Birthday | Count backwards from current day (0), -1 means yesterday |
| DAYS_EMPLOYED | Start date of employment | Count backwards from current day (0). If positive, it means the person is currently unemployed. |
| FLAG_MOBIL | Is there a mobile phone | |
| FLAG_WORK_PHONE | Is there a work phone | |
| FLAG_PHONE | Is there a phone | |
| FLAG_EMAIL | Is there an email | |
| OCCUPATION_TYPE | Occupation | |
| CNT_FAM_MEMBERS | Family size | |
###Code
df = pd.read_csv('application_record.csv')
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 438557 entries, 0 to 438556
Data columns (total 18 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ID 438557 non-null int64
1 CODE_GENDER 438557 non-null object
2 FLAG_OWN_CAR 438557 non-null object
3 FLAG_OWN_REALTY 438557 non-null object
4 CNT_CHILDREN 438557 non-null int64
5 AMT_INCOME_TOTAL 438557 non-null float64
6 NAME_INCOME_TYPE 438557 non-null object
7 NAME_EDUCATION_TYPE 438557 non-null object
8 NAME_FAMILY_STATUS 438557 non-null object
9 NAME_HOUSING_TYPE 438557 non-null object
10 DAYS_BIRTH 438557 non-null int64
11 DAYS_EMPLOYED 438557 non-null int64
12 FLAG_MOBIL 438557 non-null int64
13 FLAG_WORK_PHONE 438557 non-null int64
14 FLAG_PHONE 438557 non-null int64
15 FLAG_EMAIL 438557 non-null int64
16 OCCUPATION_TYPE 304354 non-null object
17 CNT_FAM_MEMBERS 438557 non-null float64
dtypes: float64(2), int64(8), object(8)
memory usage: 60.2+ MB
###Markdown
TASKS Recreate the plots shown in the markdown image cells. Each plot also contains a brief description of what it is trying to convey. Note, these are meant to be quite challenging. Start by first replicating the most basic form of the plot, then attempt to adjust its styling and parameters to match the given image. In general do not worry about coloring, styling, or sizing matching up exactly. Instead focus on the content of the plot itself. Our goal is not to test you on recognizing figsize=(10,8), it's to test your understanding of being able to see a requested plot and reproduce it.**NOTE: You may need to perform extra calculations on the pandas dataframe before calling seaborn to create the plot.** -------- TASK: Recreate the Scatter Plot shown below**The scatterplot attempts to show the relationship between the days employed versus the age of the person (DAYS_BIRTH) for people who were not unemployed. Note, to reproduce this chart you must remove unemployed people from the dataset first. Also note the signs of the axes; they are both transformed to be positive. Finally, feel free to adjust the *alpha* and *linewidth* parameters in the scatterplot since there are so many points stacked on top of each other.**
###Code
# CODE HERE TO RECREATE THE PLOT SHOWN ABOVE
import warnings
warnings.simplefilter('ignore')
plt.figure(figsize=(12,8))
# REMOVE UNEMPLOYED PEOPLE
employed = df[df['DAYS_EMPLOYED']<0]
# MAKE BOTH POSITIVE
employed['DAYS_EMPLOYED'] = -1*employed['DAYS_EMPLOYED']
employed['DAYS_BIRTH'] = -1*employed['DAYS_BIRTH']
# With so many points, alpha is tiny, might be an indicated that a
# scatterplot may not be the right choice!
sns.scatterplot(y='DAYS_EMPLOYED',x='DAYS_BIRTH',data=employed,
alpha=0.01,linewidth=0)
plt.savefig('task_one.jpg')
###Output
_____no_output_____
###Markdown
TASK: Recreate the Distribution Plot shown below:**Note, you will need to figure out how to calculate "Age in Years" from one of the columns in the DF. Think carefully about this. Don't worry too much if you are unable to replicate the styling exactly.**
###Code
# CODE HERE TO RECREATE THE PLOT SHOWN ABOVE
plt.figure(figsize=(8,4))
df['YEARS'] = -1*df['DAYS_BIRTH']/365
sns.histplot(data=df,x='YEARS',linewidth=2,edgecolor='black',
color='red',bins=45,alpha=0.4)
plt.xlabel("Age in Years")
plt.savefig('DistPlot_solution.png')
###Output
_____no_output_____
###Markdown
TASK: Recreate the Categorical Plot shown below:**This plot shows information only for the *bottom half* of income earners in the data set. It shows boxplots for each category of the NAME_FAMILY_STATUS column, displaying the distribution of total income. The hue is the "FLAG_OWN_REALTY" column. Note: You will need to adjust or only take part of the dataframe *before* recreating this plot.**
###Code
# CODE HERE
plt.figure(figsize=(12,5))
bottom_half_income = df.nsmallest(n=int(0.5*len(df)),columns='AMT_INCOME_TOTAL')
sns.boxplot(x='NAME_FAMILY_STATUS',y='AMT_INCOME_TOTAL',data=bottom_half_income,hue='FLAG_OWN_REALTY')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.,title='FLAG_OWN_REALTY')
plt.title('Income Totals per Family Status for Bottom Half of Earners')
###Output
_____no_output_____
###Markdown
TASK: Recreate the Heat Map shown below: **This heatmap shows the correlation between the columns in the dataframe. You can get the correlation with .corr(). Also note that the FLAG_MOBIL column has NaN correlation with every other column, so you should drop it before calling .corr().**
###Code
df.corr()
sns.heatmap(df.drop('FLAG_MOBIL',axis=1).corr(),cmap="viridis")
###Output
_____no_output_____ |
cc-webgraph-statistics/comparison_domain_ranks.ipynb | ###Markdown
Comparing Common Crawl's Domain-Level Harmonic Centrality Ranks A comparison of top web sites rankings- the list of [top-1-million sites](http://s3.amazonaws.com/alexa-static/top-1m.csv.zip) published by [Alexa](https://support.alexa.com/hc/en-us/sections/200063274-Top-Sites), based on unique visitors and page views- the [Cisco Umbrella Popularity list](http://s3-us-west-1.amazonaws.com/umbrella-static/index.html) which reflects DNS usage- the [Majestic Million](http://downloads.majestic.com/majestic_million.csv), "[ordered by the number of referring subnets](https://blog.majestic.com/development/majestic-million-csv-daily/)"- the [Tranco list](https://tranco-list.eu/)- the top million domains ranked by harmonic centrality from [Common Crawls Jun/Jul/Sep 2021 webgraph dataset](https://commoncrawl.org/2021/10/host-and-domain-level-web-graphs-jun-jul-sep-2021/)The work is largely inspired by the [Tranco research paper](https://tranco-list.eu/assets/tranco-ndss19.pdf) by Victor Le Pochat, Tom Van Goethem, Samaneh Tajalizadehkhoob, Maciej Korczyński and Wouter Joosen.Caveats: the lists differ in their- notion of a web site (a host, a subdomain, a registered domain)- methodology to rank sites - Alexa: traffic / visitors - Cisco Umbrella: DNS traffic - Majestic: backlinks aggregated by IPv4 /24 subnets - Tranco: weighted combination of the above (plus Quantcast) - Common Crawl: harmonic centrality- data aggregation time: while the others provide daily updates, Common Crawl releases domain rankings 3-4 times per year List Download and Read DataA shell script to download all lists (executed on Oct 09, 2021):```bash!/bin/bashdir="ranks-2021-10-09"mkdir -p "$dir"curl "http://s3.amazonaws.com/alexa-static/top-1m.csv.zip" | gzip -dc >"$dir"/alexa.csvcurl "http://downloads.majestic.com/majestic_million.csv" >"$dir"/majestic.csvcurl "http://s3-us-west-1.amazonaws.com/umbrella-static/top-1m.csv.zip" | gzip -dc >"$dir"/umbrella.csv find latest Tranco listcurr_tranco_link=$(lynx -dump "https://tranco-list.eu/latest_list" \ | grep -Eo 'https://tranco-list.eu/download/[^/]+/1000000')curr_tranco_id=$(echo $curr_tranco_link | cut -d/ -f5 | tr -d '\n')curl $curr_tranco_link >"$dir"/tranco.csv take top 1M from latest Common Crawl domain-level rankingscurl https://commoncrawl.s3.amazonaws.com/projects/hyperlinkgraph/cc-main-2021-jun-jul-sep/domain/cc-main-2021-jun-jul-sep-domain-ranks.txt.gz | gzip -dc | head -1000001 >"$dir"/cc.tsv```
###Code
# read the data
import os
import pandas as pd
base = 'ranks-2021-10-09'
lists = {}
lists['Alexa'] = pd.read_csv(os.path.join(base, 'alexa.csv'), header=None, names=['rank', 'domain'])
lists['Umbrella'] = pd.read_csv(os.path.join(base, 'umbrella.csv'), header=None, names=['rank', 'domain'])
lists['Tranco'] = pd.read_csv(os.path.join(base, 'tranco.csv'), header=None, names=['rank', 'domain'])
# Majestic
majestic_ = pd.read_csv(os.path.join(base, 'majestic.csv'))
majestic = pd.DataFrame()
majestic[['rank', 'domain']] = majestic_[['GlobalRank', 'Domain']]
lists['Majestic'] = majestic
# Common Crawl
def reverse_host(host):
parts = host.split('.')
parts.reverse()
return '.'.join(parts)
cc = pd.read_csv(os.path.join(base, 'cc.tsv'), delimiter='\t')
cc['domain'] = cc['#host_rev'].apply(reverse_host)
cc.rename(columns={'#harmonicc_pos': 'rank'}, inplace=True)
cc.drop(columns=['#harmonicc_val', '#pr_pos', '#pr_val', '#host_rev', '#n_hosts'], inplace=True)
lists['CC'] = cc
###Output
_____no_output_____
###Markdown
Top 20 Sites
###Code
top_df = pd.DataFrame()
for l in lists:
top_df[l] = lists[l].head(20)['domain']
top_df
###Output
_____no_output_____
###Markdown
Count of Top-Level Domains among the Top-1M Sites See also [TLDs in cc-crawl-statistics](https://commoncrawl.github.io/cc-crawl-statistics/plots/tlds), which reflects the TLD coverage in web page captures
###Code
# count TLDs
from collections import defaultdict
tlds = defaultdict(dict)
def tld(host):
return host.split('.')[-1]
for l in lists:
for idx, row in lists[l]['domain'].apply(tld).value_counts().iteritems():
tlds[l][idx] = row
tld_df = pd.DataFrame(tlds, dtype=int).fillna(0).astype(int).sort_values(['CC'], ascending=0)
tld_df.head(10)
# percentage of TLD in top-million domain lists
tld_df.apply(lambda x: 100 * x / float(x.sum())).head(20)
# percentage of continents by country-code TLD
#!pip install a-world-of-countries
import awoc
from icctld import icctld2cctld
world = awoc.AWOC()
tld_continent = {}
for continent in world.get_continents_list():
for country in world.get_countries_list_of(continent):
tld = world.get_country_TLD(country)
if tld:
tld_continent[tld] = continent
for icctld in icctld2cctld:
if icctld2cctld[icctld] in tld_continent:
tld_continent[icctld] = tld_continent[icctld2cctld[icctld]]
continents = tld_df.reset_index()
continents['Continent'] = continents['index'].apply(
lambda x: tld_continent[x] if x in tld_continent else '(generic etc.)')
continents.groupby(['Continent']).sum().apply(
lambda x: 100 * x / float(x.sum())).sort_values(['CC'], ascending=0)
###Output
_____no_output_____
###Markdown
Correlation between Ranked Lists Rank-Biased Overlap (RBO), see- Webber, Moffat, Zobel, 2010: [A similarity measure for indefinite rankings](http://codalism.com/research/papers/wmz10_tois.pdf)- the [rbo](https://pypi.org/project/rbo/) Python module
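###Markdown
As a small illustration of RBO before applying it to the full lists, here is a toy sketch using the rbo package's RankingSimilarity API (the same call used in the next cell): two rankings that agree near the top score higher than two that agree mostly near the bottom.
###Code
# Toy illustration of rank-biased overlap (assumes the `rbo` package is installed)
import rbo

a = ['com', 'org', 'net', 'de', 'ru']
b = ['com', 'org', 'de', 'net', 'jp']   # agrees with `a` near the top
c = ['jp', 'ru', 'net', 'org', 'com']   # agrees with `a` mostly near the bottom

print(rbo.RankingSimilarity(a, b).rbo())  # higher, close to 1
print(rbo.RankingSimilarity(a, c).rbo())  # noticeably lower
###Output
_____no_output_____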
###Code
#!pip install rbo
import rbo
corr_matrix = defaultdict(dict)
for n in lists:
for m in lists:
if n == m:
corr_matrix[n][m] = corr_matrix[m][n] = 1.0
continue
if n > m:
continue
nl = lists[n]['domain'].values
ml = lists[m]['domain'].values
rbo_sim = rbo.RankingSimilarity(nl, ml).rbo()
overlap = len(set(nl) & set(ml))
union = len(set(nl) | set(ml))
jaccard_sim = overlap / union
print("{:.3f}\t{:6d}\t{:7d}\t{:.3f}\t{} <> {}".format(rbo_sim, overlap, union, jaccard_sim, n, m))
corr_matrix[n][m] = corr_matrix[m][n] = rbo_sim
corr = pd.DataFrame(corr_matrix)
corr
import matplotlib.pyplot as plt
import seaborn as sb
corr_heatmap = sb.heatmap(corr, cmap="Blues", annot=True)
plt.savefig('corr_ranks_rbo.svg')
corr_heatmap
###Output
_____no_output_____
###Markdown
Comparing Common Crawl's Domain-Level Harmonic Centrality Ranks A comparison of top web sites rankings- the list of [top-1-million sites](http://s3.amazonaws.com/alexa-static/top-1m.csv.zip) published by [Alexa](https://support.alexa.com/hc/en-us/sections/200063274-Top-Sites), based on unique visitors and page views- the [Cisco Umbrella Popularity list](http://s3-us-west-1.amazonaws.com/umbrella-static/index.html) which reflects DNS usage- the [Majestic Million](http://downloads.majestic.com/majestic_million.csv), "[ordered by the number of referring subnets](https://blog.majestic.com/development/majestic-million-csv-daily/)"- the [Tranco list](https://tranco-list.eu/)- the top million domains ranked by harmonic centrality from [Common Crawls Jun/Jul/Sep 2021 webgraph dataset](https://commoncrawl.org/2021/10/host-and-domain-level-web-graphs-jun-jul-sep-2021/)The work is largely inspired by the [Tranco research paper](https://tranco-list.eu/assets/tranco-ndss19.pdf) by Victor Le Pochat, Tom Van Goethem, Samaneh Tajalizadehkhoob, Maciej Korczyński and Wouter Joosen.Caveats: the lists differ in their- notion of a web site (a host, a subdomain, a registered domain)- methodology to rank sites - Alexa: traffic / visitors - Cisco Umbrella: DNS traffic - Majestic: backlinks aggregated by IPv4 /24 subnets - Tranco: weighted combination of the above (plus Quantcast) - Common Crawl: harmonic centrality- data aggregation time: while the others provide daily updates, Common Crawl releases domain rankings 3-4 times per year List Download and Read DataA shell script to download all lists (executed on Oct 09, 2021):```bash!/bin/bashdir="ranks-2021-10-09"mkdir -p "$dir"curl "http://s3.amazonaws.com/alexa-static/top-1m.csv.zip" | gzip -dc >"$dir"/alexa.csvcurl "http://downloads.majestic.com/majestic_million.csv" >"$dir"/majestic.csvcurl "http://s3-us-west-1.amazonaws.com/umbrella-static/top-1m.csv.zip" | gzip -dc >"$dir"/umbrella.csv find latest Tranco listcurr_tranco_link=$(lynx -dump "https://tranco-list.eu/latest_list" \ | grep -Eo 'https://tranco-list.eu/download/[^/]+/1000000')curr_tranco_id=$(echo $curr_tranco_link | cut -d/ -f5 | tr -d '\n')curl $curr_tranco_link >"$dir"/tranco.csv take top 1M from latest Common Crawl domain-level rankingscurl https://data.commoncrawl.org/projects/hyperlinkgraph/cc-main-2021-jun-jul-sep/domain/cc-main-2021-jun-jul-sep-domain-ranks.txt.gz | gzip -dc | head -1000001 >"$dir"/cc.tsv```
###Code
# read the data
import os
import pandas as pd
base = 'ranks-2021-10-09'
lists = {}
lists['Alexa'] = pd.read_csv(os.path.join(base, 'alexa.csv'), header=None, names=['rank', 'domain'])
lists['Umbrella'] = pd.read_csv(os.path.join(base, 'umbrella.csv'), header=None, names=['rank', 'domain'])
lists['Tranco'] = pd.read_csv(os.path.join(base, 'tranco.csv'), header=None, names=['rank', 'domain'])
# Majestic
majestic_ = pd.read_csv(os.path.join(base, 'majestic.csv'))
majestic = pd.DataFrame()
majestic[['rank', 'domain']] = majestic_[['GlobalRank', 'Domain']]
lists['Majestic'] = majestic
# Common Crawl
def reverse_host(host):
parts = host.split('.')
parts.reverse()
return '.'.join(parts)
cc = pd.read_csv(os.path.join(base, 'cc.tsv'), delimiter='\t')
cc['domain'] = cc['#host_rev'].apply(reverse_host)
cc.rename(columns={'#harmonicc_pos': 'rank'}, inplace=True)
cc.drop(columns=['#harmonicc_val', '#pr_pos', '#pr_val', '#host_rev', '#n_hosts'], inplace=True)
lists['CC'] = cc
###Output
_____no_output_____
###Markdown
Top 20 Sites
###Code
top_df = pd.DataFrame()
for l in lists:
top_df[l] = lists[l].head(20)['domain']
top_df
###Output
_____no_output_____
###Markdown
Count of Top-Level Domains among the Top-1M Sites See also [TLDs in cc-crawl-statistics](https://commoncrawl.github.io/cc-crawl-statistics/plots/tlds), which reflects the TLD coverage in web page captures
###Code
# count TLDs
from collections import defaultdict
tlds = defaultdict(dict)
def tld(host):
return host.split('.')[-1]
for l in lists:
for idx, row in lists[l]['domain'].apply(tld).value_counts().iteritems():
tlds[l][idx] = row
tld_df = pd.DataFrame(tlds, dtype=int).fillna(0).astype(int).sort_values(['CC'], ascending=0)
tld_df.head(10)
# percentage of TLD in top-million domain lists
tld_df.apply(lambda x: 100 * x / float(x.sum())).head(20)
# percentage of continents by country-code TLD
#!pip install a-world-of-countries
import awoc
from icctld import icctld2cctld
world = awoc.AWOC()
tld_continent = {}
for continent in world.get_continents_list():
for country in world.get_countries_list_of(continent):
tld = world.get_country_TLD(country)
if tld:
tld_continent[tld] = continent
for icctld in icctld2cctld:
if icctld2cctld[icctld] in tld_continent:
tld_continent[icctld] = tld_continent[icctld2cctld[icctld]]
continents = tld_df.reset_index()
continents['Continent'] = continents['index'].apply(
lambda x: tld_continent[x] if x in tld_continent else '(generic etc.)')
continents.groupby(['Continent']).sum().apply(
lambda x: 100 * x / float(x.sum())).sort_values(['CC'], ascending=0)
###Output
_____no_output_____
###Markdown
Correlation between Ranked Lists Rank-Biased Overlap (RBO), see- Webber, Moffat, Zobel, 2010: [A similarity measure for indefinite rankings](http://codalism.com/research/papers/wmz10_tois.pdf)- the [rbo](https://pypi.org/project/rbo/) Python module
###Code
#!pip install rbo
import rbo
corr_matrix = defaultdict(dict)
for n in lists:
for m in lists:
if n == m:
corr_matrix[n][m] = corr_matrix[m][n] = 1.0
continue
if n > m:
continue
nl = lists[n]['domain'].values
ml = lists[m]['domain'].values
rbo_sim = rbo.RankingSimilarity(nl, ml).rbo()
overlap = len(set(nl) & set(ml))
union = len(set(nl) | set(ml))
jaccard_sim = overlap / union
print("{:.3f}\t{:6d}\t{:7d}\t{:.3f}\t{} <> {}".format(rbo_sim, overlap, union, jaccard_sim, n, m))
corr_matrix[n][m] = corr_matrix[m][n] = rbo_sim
corr = pd.DataFrame(corr_matrix)
corr
import matplotlib.pyplot as plt
import seaborn as sb
corr_heatmap = sb.heatmap(corr, cmap="Blues", annot=True)
plt.savefig('corr_ranks_rbo.svg')
corr_heatmap
###Output
_____no_output_____ |
module4-classification-metrics/LS_DS_224.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22! How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context)) How many correct predictions of "non functional"? How many total predictions of "non functional"? What's the precision for "non functional"? How many actual "non functional" waterpumps? What's the recall for "non functional"? Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model. Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but needs repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target. Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix: How many total predictions of "True" ("non functional" or "functional needs repair") ? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution Change the threshold Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. 
But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
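###Markdown
Before the ROC code below, here is a minimal sketch of the thresholding idea described above: raise the decision threshold until roughly 2,000 pumps are flagged, then check how many of those flags are correct. This is a sketch, not the lesson's exact code.
###Code
# Sketch: tune the probability threshold so that about 2,000 pumps are flagged for a visit
import pandas as pd

proba = pipeline.predict_proba(X_val)[:, -1]      # probability of "needs attention"
threshold = pd.Series(proba).sort_values(ascending=False).iloc[1999]  # ~2,000 values are at or above this
flagged = proba >= threshold
print('Flagged pumps:', flagged.sum())
print('Precision among flagged pumps:', y_val[flagged].mean())
###Output
_____no_output_____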
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
# Predicted probabilities for the positive class (this definition was missing here;
# it matches the one used in the ROC AUC cell below)
y_pred_proba = pipeline.predict_proba(X_val)[:, -1]
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
If you have matplotlib version 3.1.1 then seaborn heatmaps will be cut off. Because of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version. This code checks your matplotlib version:
###Code
import matplotlib
print(matplotlib.__version__)
###Output
3.1.2
###Markdown
If you have version 3.1.1, you can downgrade if you want, but you don't have to, I just want you to be aware of the issue. Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow Along Scikit-learn's [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.htmlconfusion-matrix) function just returns a matrix of numbers, which is hard to read. Scikit-learn docs have an example [plot_confusion_matrix](https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html) function. The output looks good, but the code is long and hard to understand. It's written just with numpy and matplotlib. We can write our own function using pandas and seaborn. The code will be shorter and easier to understand. Let's write the function iteratively.
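###Markdown
Here is one possible version of such a function, a sketch (not necessarily the lesson's exact code) that wraps confusion_matrix in a labeled DataFrame and passes it to Seaborn's heatmap:
###Code
# A sketch of a pandas + seaborn confusion matrix heatmap (function name is our own)
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels

def my_confusion_matrix(y_true, y_pred):
    labels = unique_labels(y_true)
    columns = [f'Predicted {label}' for label in labels]
    index = [f'Actual {label}' for label in labels]
    table = pd.DataFrame(confusion_matrix(y_true, y_pred, labels=labels),
                         index=index, columns=columns)
    return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')

my_confusion_matrix(y_val, y_pred)
###Output
_____no_output_____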
###Code
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, xticks_rotation="vertical",
values_format=".0f");
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
###Output
_____no_output_____
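###Markdown
One way to answer the three questions above, sketched with scikit-learn's confusion_matrix (correct predictions sit on the diagonal):
###Code
# Sketch: correct predictions, total predictions, and accuracy from the confusion matrix
import numpy as np
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_val, y_pred)
correct = np.trace(cm)          # sum of the diagonal
total = cm.sum()
print('Correct predictions:', correct)
print('Total predictions:', total)
print('Accuracy:', correct / total)
###Output
_____no_output_____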
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format=".0f", xticks_rotation="vertical")
cm;
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
###Output
_____no_output_____
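###Markdown
A sketch of how the questions above could be answered from the confusion matrix: precision divides the diagonal cell by its column total (predicted "non functional"), and recall divides it by its row total (actual "non functional").
###Code
# Sketch: precision and recall for the "non functional" class from the confusion matrix
from sklearn.metrics import confusion_matrix

labels = ['functional', 'functional needs repair', 'non functional']
cm = confusion_matrix(y_val, y_pred, labels=labels)
i = labels.index('non functional')

correct_nf = cm[i, i]             # correct predictions of "non functional"
predicted_nf = cm[:, i].sum()     # total predictions of "non functional"
actual_nf = cm[i, :].sum()        # actual "non functional" waterpumps

print('Precision:', correct_nf / predicted_nf)
print('Recall:', correct_nf / actual_nf)
###Output
_____no_output_____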
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model. Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but needs repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target. Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format=".0f", xticks_rotation='vertical')
cm;
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
5032 / (6560)
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
# Discrete predictions
pipeline.predict(X_val)
# Predicted probabilities
pipeline.predict_proba(X_val)
# predicted probabilities for positive class
pipeline.predict_proba(X_val)[:, 1]
# make prob into discrete using a threshold
pipeline.predict_proba(X_val)[:, 1] > .5
###Output
_____no_output_____
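###Markdown
To actually see the distribution mentioned above, one option (a sketch) is a quick histogram of the positive-class probabilities:
###Code
# Sketch: plot the distribution of predicted probabilities for the positive class
# (sns.histplot needs seaborn >= 0.11; plt.hist works as well)
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
sns.histplot(y_pred_proba, bins=50)
plt.xlabel('Predicted probability of needing attention');
###Output
_____no_output_____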
###Markdown
Change the threshold
###Code
pd.Series((pipeline.predict_proba(X_val)[:, 1] > .925)).value_counts()
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions. Identify the 2,000 waterpumps in the validation set with the highest predicted probabilities.
###Code
pd.Series(pipeline.predict_proba(X_val)[:, 1]).reset_index(drop=True).sort_values(ascending=False).iloc[1999]
pd.Series(pipeline.predict_proba(X_val)[:, 1] > .92).value_counts()
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs. Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs. Let's look at a random sample of 50 out of these top 2,000:
###Code
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
###Output
_____no_output_____
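###Markdown
A sketch of one way to fill in the empty cells above: take the 2,000 highest-probability pumps, peek at a random sample of 50, and compute the precision for the whole top-2,000 subset.
###Code
# Sketch: top-2,000 recommendations by predicted probability, then precision for that subset
results = pd.DataFrame({
    'proba': pipeline.predict_proba(X_val)[:, 1],
    'actual': y_val.values
})
top_2000 = results.sort_values('proba', ascending=False).head(2000)

print(top_2000.sample(50, random_state=42))        # a random sample of 50 recommendations
print('Relevant recommendations:', top_2000['actual'].sum())
print('Precision @ 2,000:', top_2000['actual'].mean())
###Output
_____no_output_____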
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
y_pred_proba = pipeline.predict_proba(X_val)[:, -1]
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
y_pred_proba = pipeline.predict_proba(X_val)[:, -1]
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv', na_values=[0, -2.0000e-08]),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8096531550355203
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
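# A sketch for this "Follow Along" cell, mirroring the completed
# walkthroughs later in this document (requires scikit-learn >= 0.22).
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val,
                      values_format='.0f', xticks_rotation='vertical');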
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
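# One possible answer: the correct predictions are the diagonal
# of the confusion matrix (true label == predicted label).
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
correct_predictions = cm.trace()  # sum of the diagonal
correct_predictions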
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
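# Every prediction lands in some cell of the confusion matrix, so the
# total is the sum of all cells -- one per validation row.
# (Assumes `cm` from the previous cell.)
total_predictions = cm.sum()
assert total_predictions == len(y_val)
total_predictions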
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
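# Accuracy = correct / total; it should match accuracy_score from the setup cell.
# (Assumes the variables from the two previous cells.)
print(correct_predictions / total_predictions)
print(accuracy_score(y_val, y_pred))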
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
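# The classification report shows precision, recall, and F1 for each class.
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))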
###Output
_____no_output_____
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
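# A labeled confusion matrix makes it easier to read off precision & recall.
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
labels = unique_labels(y_val)
pd.DataFrame(confusion_matrix(y_val, y_pred),
             columns=[f'Predicted {label}' for label in labels],
             index=[f'Actual {label}' for label in labels])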
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
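# Correct predictions of "non functional" sit on that class's diagonal cell.
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
cm = confusion_matrix(y_val, y_pred)
i = list(unique_labels(y_val)).index('non functional')
correct_nonfunctional = cm[i, i]
correct_nonfunctional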
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
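# Total predictions of "non functional" = the whole "Predicted non functional"
# column of the confusion matrix. (Assumes `cm` and `i` from the previous cell.)
predicted_nonfunctional = cm[:, i].sum()
predicted_nonfunctional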
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
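# Precision = correct predictions of the class / total predictions of the class
correct_nonfunctional / predicted_nonfunctional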
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
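# Actual "non functional" pumps = the whole "Actual non functional" row.
# (Assumes `cm` and `i` from the earlier cells.)
actual_nonfunctional = cm[i, :].sum()
actual_nonfunctional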
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
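# Recall = correct predictions of the class / actual members of the class
correct_nonfunctional / actual_nonfunctional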
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
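# Confusion matrix for the redefined binary target
# (True = non functional or needs repair).
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f');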
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
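# Total "True" predictions = the "Predicted True" column of the binary
# confusion matrix (labels are sorted, so column 1 is True).
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
cm[:, 1].sum()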
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
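# Predicted probabilities for the positive class (True),
# and how they are distributed across the validation set.
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
sns.distplot(y_pred_proba);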
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
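# Raise the threshold to shrink the number of positive predictions.
# (Assumes `y_pred_proba` from the previous cell; 0.92 is just an example.)
threshold = 0.92
y_pred_at_threshold = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
pd.Series(y_pred_at_threshold).value_counts()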
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
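# Rank validation rows by predicted probability and keep the top 2,000.
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000.head()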
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
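# A random sample of 50 of the top 2,000 recommendations.
top2000.sample(n=50)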
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
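# Relevant recommendations are the rows where y_val == True.
trips = 2000
relevant_recommendations = top2000['y_val'].sum()
print(f'Baseline: ~{int(trips * 0.46)} repairs expected from {trips} random trips')
print(f'With model: {relevant_recommendations} repairs predicted from {trips} trips')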
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
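# Precision@k with k = 2,000
precision_at_k_2000 = relevant_recommendations / trips
precision_at_k_2000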
###Output
_____no_output_____
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy = 'mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
import sklearn
sklearn.__version__
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_val, y_pred))
from sklearn.utils.multiclass import unique_labels
unique_labels(y_val)
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
return columns, index
plot_confusion_matrix(y_val, y_pred)
def confusion_matrix_table(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return table
table = confusion_matrix_table(y_val, y_pred)
table
import seaborn as sns
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns = columns, index = index)
# annot=True writes each cell's count on the heatmap; fmt='d' formats the counts as integers
return sns.heatmap(table, annot = True, fmt = 'd', cmap = 'viridis')
plot_confusion_matrix(y_val, y_pred);
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f',
xticks_rotation = 'vertical');
plot_confusion_matrix?
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct = table['Predicted functional']['Actual functional'] + table['Predicted functional needs repair']['Actual functional needs repair'] + table['Predicted non functional']['Actual non functional']
correct
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total = table['Predicted functional'].values.sum() + table['Predicted functional needs repair'].values.sum() + table['Predicted non functional'].values.sum()
total
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
print(round((correct / total) * 100, 2), '%')
###Output
81.4 %
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f',
xticks_rotation = 'vertical');
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_nonfunctional = table['Predicted non functional']['Actual non functional']
correct_predictions_nonfunctional
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
# These are the *incorrect* "non functional" predictions (false positives);
# total predictions of "non functional" = correct + incorrect.
incorrect_predictions_nonfunctional = table['Predicted non functional']['Actual functional'] + table['Predicted non functional']['Actual functional needs repair']
correct_predictions_nonfunctional + incorrect_predictions_nonfunctional
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
correct_predictions_nonfunctional / (correct_predictions_nonfunctional + incorrect_predictions_nonfunctional)
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
print(classification_report(y_val, y_pred))
actual_nonfunctional = 5517
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
correct_predictions_nonfunctional / actual_nonfunctional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize = True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize = True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f',
xticks_rotation = 'vertical');
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
5032 + 977
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
### Discrete predictions look like this...
print(pipeline.predict(X_val))
### Predicted probabilities look like this...
print(pipeline.predict_proba(X_val))
### Predicted probabilities *for the positive class* ...
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
print(y_pred_proba)
### Make predicted probabilities into discrete predictions, using a "threshold"
print(pipeline.predict_proba(X_val)[:, 1] >= 0.5)
# Amazin'
sns.distplot(y_pred_proba);
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
threshold = 0.92
# Predictions at this higher threshold (a boolean mask, not probabilities)
y_pred_at_threshold = (pipeline.predict_proba(X_val)[:, 1] >= threshold)
ax = sns.distplot(pipeline.predict_proba(X_val)[:, 1])
ax.axvline(threshold, color = 'red')
pd.Series(y_pred_at_threshold).value_counts()
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.9]
for threshold in thresholds:
y_pred_probability = (pipeline.predict_proba(X_val)[:, 1] >= threshold)
print('Threshold: ', threshold)
print(pd.Series(y_pred_probability).value_counts(), '\n')
###Output
Threshold: 0.1
True 12457
False 1901
dtype: int64
Threshold: 0.2
True 10663
False 3695
dtype: int64
Threshold: 0.3
True 8979
False 5379
dtype: int64
Threshold: 0.4
True 7431
False 6927
dtype: int64
Threshold: 0.5
False 8224
True 6134
dtype: int64
Threshold: 0.6
False 9301
True 5057
dtype: int64
Threshold: 0.7
False 10187
True 4171
dtype: int64
Threshold: 0.9
False 11887
True 2471
dtype: int64
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
# Rank by the continuous predicted probabilities (not a thresholded boolean mask)
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
results = pd.DataFrame({'y_val' : y_val,
                        'y_pred_proba' : y_pred_proba})
results.head()
top2000 = results.sort_values(by = 'y_pred_proba',
                              ascending = False)[: 2000]
top2000.head()
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000.sample(n = 50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {int(trips * (46 / 100))} waterpump repairs in {trips}')
relevant_recommendations = top2000.y_val.sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920 waterpump repairs in 2000
With model: Predict 1966 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations / trips
print('Precision @ k = 2000:', precision_at_k_2000)
###Output
Precision @ k = 2000: 0.983
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, xticks_rotation=90,
values_format='.0f');
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_predictions = 7005 + 332 + 4351
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_predictions = len(val)
total_predictions
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct_predictions / total_predictions
accuracy_score(y_val, y_pred)
sum(y_pred == y_val)
sum(y_pred == y_val) / len(y_val)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
cm;
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_nonfunctional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_predictions_nonfunctional = 4351 + 156 + 622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
correct_predictions_nonfunctional / total_predictions_nonfunctional
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_nonfunctional = 1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
correct_predictions_nonfunctional / actual_nonfunctional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f')
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
5032 + 977
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
# Discrete predictions look like this...
pipeline.predict(X_val)
# Predicted probabilities look like this...
pipeline.predict_proba(X_val)
# Predicted probabilities *for the positive class* ...
pipeline.predict_proba(X_val)[:, 1]
# Make predicted probabilities into discrete predictions,
# using a "threshold"
threshold = 0.5
pipeline.predict_proba(X_val)[:, 1] > threshold
# Distribution of these predicted probabilities
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
sns.distplot(y_pred_proba);
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
# only predict true if we are 92% sure or more
threshold = 0.92
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
pd.Series(y_pred).value_counts()
from ipywidgets import interact, fixed
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def my_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
def set_threshold(y_true, y_pred_proba, threshold=0.5):
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.show()
print(classification_report(y_true, y_pred))
my_confusion_matrix(y_true, y_pred)
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0, 1, 0.02));
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
results
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: Predict 1972 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations / trips
print('Precision @ k=2000', precision_at_k_2000)
###Output
Precision @ k=2000 0.986
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.plot([0,1], [0,1], color='red');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
/usr/local/lib/python3.7/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f', xticks_rotation = 'vertical')
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_predictions = 7005 + 332 +4351
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
# All nine cells of the confusion matrix add up to the number of validation rows
total_prediction = len(y_val)
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
classification_accuracy = correct_predictions/total_prediction
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f', xticks_rotation = 'vertical')
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
corect_pred_nonfunctional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
Total_pred_nonfunctional = 4351 + 156 + 622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
precision = corect_pred_nonfunctional/Total_pred_nonfunctional
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_nonfunctional = 1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
recall = corect_pred_nonfunctional/actual_nonfunctional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
With 2000 random inspections, we expect to repair 920.0 waterpumps
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
/usr/local/lib/python3.7/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f', xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
total_pred_True = 5032+977
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
pipeline.predict(X_val)
pipeline.predict_proba(X_val)
# predicted probabilities for the positive class
pipeline.predict_proba(X_val)[:,1]
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
# make predicted probabilities into discrete predictions using a threshold
threshold = 0.5
pipeline.predict_proba(X_val)[:,1] > threshold
import seaborn as sns
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
sns.distplot(y_pred_proba);
###Output
/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
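###Markdown
Before committing to one threshold, it helps to see how precision and recall move as the threshold changes. A minimal sketch, assuming the boolean `y_val` and the `y_pred_proba` array computed above:
###Code
from sklearn.metrics import precision_score, recall_score

for t in [0.3, 0.5, 0.7, 0.9]:
    y_pred_at_t = y_pred_proba > t
    p = precision_score(y_val, y_pred_at_t)
    r = recall_score(y_val, y_pred_at_t)
    print(f'threshold={t:.1f}  positives={int(y_pred_at_t.sum()):5d}  precision={p:.3f}  recall={r:.3f}')
###Output
_____no_output_____
###Markdown
Raising the threshold trades recall for precision, which is exactly the lever the budget scenario below relies on.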
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
threshold = 0.92
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color = 'red')
pd.Series(y_pred).value_counts()
results = pd.DataFrame({'y_val':y_val,'y_pred_proba': y_pred_proba})
results
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]  # keep only the 2,000 highest-probability waterpumps
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: Predict 6560 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations/trips
print('Precision @ k = 2000', precision_at_k_2000)
###Output
Precision @ k = 2000 3.28
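###Markdown
The same idea can be wrapped in a small reusable helper. This `precision_at_k` function is a hypothetical sketch (it is not part of scikit-learn) and assumes the boolean `y_val` and the `y_pred_proba` probabilities from above:
###Code
import numpy as np

def precision_at_k(y_true, y_scores, k):
    """Fraction of the k highest-scored items that are actually positive (hypothetical helper)."""
    y_true = np.asarray(y_true)
    order = np.argsort(np.asarray(y_scores))[::-1]  # indices sorted from highest to lowest score
    return y_true[order[:k]].mean()

print(precision_at_k(y_val, y_pred_proba, k=2000))
###Output
_____no_output_____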
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
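To make the "use predicted probabilities, not discrete predictions" point concrete, here is a minimal sketch (assuming the boolean `y_val` and the `y_pred_proba` array from above); it also checks that a no-skill baseline lands near 0.5:
###Code
from sklearn.metrics import roc_auc_score
import numpy as np

print('probabilities:       ', roc_auc_score(y_val, y_pred_proba))        # ranking quality of the model
print('hard 0/1 predictions:', roc_auc_score(y_val, y_pred_proba > 0.5))  # ranking information is lost
rng = np.random.default_rng(42)
print('shuffled (no skill): ', roc_auc_score(y_val, rng.permutation(y_pred_proba)))  # close to 0.5
###Output
_____no_output_____
###Markdown
The cell below draws the full ROC curve that this single number summarizes.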
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
#Getting Kaggle submissions
y_pred = pd.DataFrame(pipeline.predict(X_test))
submission = y_pred
submission.set_index(X_test.index, inplace=True)
submission.rename(columns={0:'status_group'}, inplace=True)
submission.head()
submission.to_csv('classroom_best.csv')
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
import sklearn
sklearn.__version__
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation=45);
correct_predictions = 7005 + 332 + 4351  # sum of the diagonal of the confusion matrix
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_predictions = y_val.size  # 14,358 predictions, one per validation row
total_predictions
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct_predictions / total_predictions  # ≈ 0.814
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation=45);
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_non_functional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_predictions_non_functional = 4351 + 156 + 622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
precision_non_functional = correct_predictions_non_functional / total_predictions_non_functional
precision_non_functional
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_non_functional = 4351+68+1098
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
recall_non_functional = correct_predictions_non_functional / actual_non_functional
recall_non_functional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
With 2000 random inspections, we expect to repair 920.0 waterpumps
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
pipeline.score(X_val, y_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f', xticks_rotation=45);
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
5032 + 977
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
pipeline.predict(X_val)
pipeline.predict_proba(X_val)[:,1]
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.BinaryEncoder(),
SimpleImputer(strategy='mean'),
LogisticRegression()
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
import seaborn as sns
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
sns.distplot(y_pred_proba)
###Output
/opt/anaconda3/envs/unit_2/lib/python3.8/site-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
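###Markdown
The warning above notes that `distplot` is deprecated. A minimal equivalent using the newer `histplot` API, assuming the `y_pred_proba` array from the previous cell:
###Code
import seaborn as sns

# histplot is the axes-level replacement that the deprecation warning recommends
sns.histplot(y_pred_proba, bins=50);
###Output
_____no_output_____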
###Markdown
Change the threshold
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
threshold = 0.92
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color = 'red')
pd.Series(y_pred).value_counts()
###Output
/opt/anaconda3/envs/unit_2/lib/python3.8/site-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
results= pd.DataFrame({'y_val': y_val, 'y_pred': y_pred_proba})
top_2000 = results.sort_values(by='y_pred', ascending=False)[:2000]
top_2000
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top_2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
relevant_recommendation = top_2000['y_val'].sum()
relevant_recommendation
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendation / top_2000['y_val'].size
precision_at_k_2000
###Output
_____no_output_____
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
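The "random positive ranked before a random negative" interpretation can be checked directly. A minimal Monte Carlo sketch, assuming the boolean `y_val` and the `y_pred_proba` array defined above (ties are ignored, so the estimate is approximate):
###Code
import numpy as np

rng = np.random.default_rng(0)
mask = np.asarray(y_val)                 # True = non functional or needs repair
pos_scores = y_pred_proba[mask]          # probabilities for actual positives
neg_scores = y_pred_proba[~mask]         # probabilities for actual negatives

n_pairs = 100_000
wins = rng.choice(pos_scores, n_pairs) > rng.choice(neg_scores, n_pairs)
print(wins.mean())                       # should be close to roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Scikit-learn's `roc_auc_score`, used below, computes the exact value.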
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
BloomTech Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22! How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context)) How many correct predictions of "non functional"? How many total predictions of "non functional"? What's the precision for "non functional"? How many actual "non functional" waterpumps? What's the recall for "non functional"? Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix: How many total predictions of "True" ("non functional" or "functional needs repair") ? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution Change the threshold Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. 
But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
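Each threshold corresponds to one (false positive rate, true positive rate) point on the curve. A minimal sketch, assuming the boolean `y_val` and the fitted `pipeline` from above, computes that point for the default 0.5 threshold:
###Code
from sklearn.metrics import confusion_matrix

proba = pipeline.predict_proba(X_val)[:, 1]               # positive-class probabilities
tn, fp, fn, tp = confusion_matrix(y_val, proba > 0.5).ravel()
tpr = tp / (tp + fn)   # true positive rate (recall)
fpr = fp / (fp + tn)   # false positive rate
print(fpr, tpr)
###Output
_____no_output_____
###Markdown
The cell below sweeps over all thresholds at once with `roc_curve` and summarizes the result with `roc_auc_score`.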
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]  # positive-class probabilities (not defined earlier in this notebook)
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
If you have matplotlib version 3.1.1 then seaborn heatmaps will be cut offBecause of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.This code checks your matplotlib version:
###Code
import matplotlib
print(matplotlib.__version__)
###Output
3.1.2
###Markdown
If you have version 3.1.1, you can downgrade if you want, but you don't have to, I just want you to be aware of the issue. Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
y_pred = pipeline.predict(X_test)
submission = sample_submission.copy()
submission[target] = y_pred
submission.to_csv('lecture_predictions.csv', index=False)
from google.colab import files
files.download('lecture_predictions.csv')
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn's [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.htmlconfusion-matrix) function just returns a matrix of numbers, which is hard to read.Scikit-learn docs have an example [plot_confusion_matrix](https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html) function. The output looks good, but the code is long and hard to understand. It's written just with numpy and matplotlib.We can write our own function using pandas and seaborn. The code will be shorter and easier to understand. Let's write the function iteratively.
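As a first iteration of that idea, here is a minimal sketch of such a function (a fuller version, `my_confusion_matrix`, is wired into an interactive widget later in this notebook); `confusion_matrix_heatmap` is a hypothetical name, and the sketch assumes the fitted `pipeline`, `X_val`, and `y_val` from the cell above:
###Code
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
import pandas as pd
import seaborn as sns

def confusion_matrix_heatmap(y_true, y_pred):
    """Confusion matrix as a labeled pandas DataFrame, drawn as a seaborn heatmap."""
    labels = unique_labels(y_true)
    table = pd.DataFrame(confusion_matrix(y_true, y_pred, labels=labels),
                         index=[f'Actual {label}' for label in labels],
                         columns=[f'Predicted {label}' for label in labels])
    return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')

confusion_matrix_heatmap(y_val, pipeline.predict(X_val));
###Output
_____no_output_____
###Markdown
Scikit-learn 0.22's built-in `plot_confusion_matrix`, used next, gives a similar picture with less code.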
###Code
# check version
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, xticks_rotation='vertical', values_format='.0f');
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_predictions = 7005 + 332 + 4351
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_predictions = 7005 + 171 + 622 + 555 + 332 + 156 + 1098 + 68 + 4351
total_predictions
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct_predictions / total_predictions
pipeline.score(X_val, y_val)
y_pred = pipeline.predict(X_val)
sum(y_pred == y_val) / len(y_pred)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
from sklearn.metrics import classification_report
cr = classification_report(y_val, y_pred)
print(cr)
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
cm;
print(cr)
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
How many correct predictions of "non functional"?
###Code
# correct predictions for class
# bottom right corner
correct_non_functional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
# total predictions for class
# sum predictions for class (3rd column)
total_non_functional = 622 + 156 + 4351
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
# correct predictions for class / total predictions for class
# bottom right corner / sum predictions for class (3rd column)
precision_non_functional = correct_non_functional / total_non_functional
print(f'{precision_non_functional:.2f}')
###Output
0.85
###Markdown
How many actual "non functional" waterpumps?
###Code
# actual occurence of class
# sum true for class (3rd row)
actual_non_functional = 1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
# correct predictions for class / actual occurence of class
# bottom right corner / sum true for class (3rd row)
recall_non_functional = correct_non_functional / actual_non_functional
print(f'{recall_non_functional:.2f}')
###Output
0.79
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
total_predictions = 5032 + 977
total_predictions
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
# discrete predictions
pipeline.predict(X_val)
# predicted probabilities
pipeline.predict_proba(X_val)
# predicted probabilities for positive class
pipeline.predict_proba(X_val)[:, 1]
# make predicted probabilities into discrete predictions
# apply a threshold
pipeline.predict_proba(X_val)[:, 1] > 0.5
# change the threshold
pipeline.predict_proba(X_val)[:, 1] > 0.7
import seaborn as sns
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
sns.distplot(y_pred_proba);
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
threshold = 0.92
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
pd.Series(y_pred).value_counts()
from ipywidgets import interact, fixed
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def my_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
def set_threshold(y_true, y_pred_proba, threshold=0.5):
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.show()
print(classification_report(y_true, y_pred))
my_confusion_matrix(y_true, y_pred)
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0, 1, 0.02));
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
results = pd.DataFrame({'y_val':y_val, 'y_pred_proba':y_pred_proba})
results
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'baseline: {trips * .46} waterpump repairs in {trips} trips.')
relevant_recommendations = top2000['y_val'].sum()
print(f'with model: predict {relevant_recommendations} waterpump repairs in {trips} trips.')
###Output
baseline: 920.0 waterpump repairs in 2000 trips.
with model: predict 1972 waterpump repairs in 2000 trips.
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
1972 / 2000
precision_at_k_2000 = relevant_recommendations / trips
print(f'precision at k={trips}: {precision_at_k_2000}')
###Output
precision at k=2000: 0.986
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
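As a quick sanity check (a minimal sketch, assuming the boolean `y_val` defined above): a "model" that assigns the same score to every waterpump ranks nothing, so its ROC AUC comes out to the naive-baseline value of 0.5.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Constant scores = no ranking ability at all
constant_proba = np.full(len(y_val), 0.5)
print(roc_auc_score(y_val, constant_proba))  # 0.5
```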
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline,X_val,y_val,values_format='.0f',xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_predictions=7005+332+4351
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_predictions=7005+555+1098+68+4351+332+156+662+17
total_predictions
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
accuracy_score(y_val,y_pred)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val,y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
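To make the arithmetic in the quoted example concrete, here it is in plain Python (numbers come from the Wikipedia dog/cat example above, not from our waterpump data):

```python
true_positives = 5    # dogs correctly identified as dogs
false_positives = 3   # cats mistakenly identified as dogs (8 predicted - 5 correct)
false_negatives = 7   # dogs the program missed (12 actual - 5 found)

precision = true_positives / (true_positives + false_positives)  # 5/8 = 0.625
recall = true_positives / (true_positives + false_negatives)     # 5/12 ≈ 0.417
print(precision, recall)
```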
###Code
cm = plot_confusion_matrix(pipeline,X_val,y_val,values_format='.0f',xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_nonfunctional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_predictions_nonfunctional = 4351+156+622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
correct_predictions_nonfunctional / total_predictions_nonfunctional
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_nonfunctional=1098+68+4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
correct_predictions_nonfunctional / actual_nonfunctional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
With 2000 random inspections, we expect to repair 920.0 waterpumps
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline,X_val,y_val,values_format='.0f',xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
1528+5032
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
# Discrete predictions
pipeline.predict(X_val)
# Predicted probabilities for both classes
pipeline.predict_proba(X_val)
# Predicted probabilities for the positive class (needs a trip: non functional, or functional but needs repair)
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
# Turn predicted probabilities into discrete predictions using a "threshold"
threshold = 0.5
y_pred = y_pred_proba > threshold
import seaborn as sns
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
pd.Series(y_pred).value_counts()
###Output
_____no_output_____
###Markdown
Change the threshold
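Before picking a single threshold, it can help to sweep a few values and watch how the number of positive predictions and the precision move together (a sketch, assuming `pipeline`, `X_val`, and the boolean `y_val` from the cells above):

```python
from sklearn.metrics import precision_score

proba = pipeline.predict_proba(X_val)[:, 1]
for t in [0.3, 0.5, 0.7, 0.9]:
    preds = proba > t
    print(f'threshold={t}: positives={preds.sum()}, precision={precision_score(y_val, preds):.3f}')
```

Raising the threshold shrinks the number of positive predictions but makes them more precise, which is exactly the trade-off we exploit with a limited budget.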
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
threshold=0.92
y_pred = y_pred_proba > threshold
import seaborn as sns
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold,color='red')
pd.Series(y_pred).value_counts()
###Output
/usr/local/lib/python3.6/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
results = pd.DataFrame({'y_val':y_val,'y_pred_proba':y_pred_proba})
results
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000 = results.sort_values(by='y_pred_proba',ascending=False)[0:2000]
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: predict 1972 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations / trips
print(f'Precision at k = 2000: {precision_at_k_2000}')
###Output
Precision at k = 2000: 0.986
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
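The Precision@K idea described above can be wrapped in a small reusable helper (a sketch, assuming `y_val` and `y_pred_proba` from the cells above):

```python
import pandas as pd

def precision_at_k(y_true, y_proba, k=2000):
    """Precision among the k examples with the highest predicted probabilities."""
    ranked = pd.DataFrame({'y_true': y_true, 'y_proba': y_proba})
    top_k = ranked.sort_values(by='y_proba', ascending=False).head(k)
    return top_k['y_true'].sum() / k

print(precision_at_k(y_val, y_pred_proba, k=2000))
```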
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
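# In newer scikit-learn versions (1.0+), the same curve can be drawn in one call
# (a sketch; not required here):
# from sklearn.metrics import RocCurveDisplay
# RocCurveDisplay.from_predictions(y_val, y_pred_proba)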
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
**Recap:** ROC AUC measures how well a classifier ranks predicted probabilities. So, when you get your classifier’s ROC AUC score, you need to use predicted probabilities, not discrete predictions. Your code may look something like this:```pythonfrom sklearn.metrics import roc_auc_scorey_pred_proba = model.predict_proba(X_test_transformed)[:, -1] Probability for last classprint('Test ROC AUC:', roc_auc_score(y_test, y_pred_proba))```ROC AUC ranges from 0 to 1. Higher is better. A naive majority class baseline will have an ROC AUC score of 0.5.
###Code
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
# import category_encoders as ce
from category_encoders import OneHotEncoder, OrdinalEncoder
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, plot_confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# drop repeat rows(observations)
X.drop_duplicates(inplace=True)
# drop repeat cols (features)
# X.drop(columns=['quantity_group', 'extraction_type_group'], inplace=True)
# drop constant features
# X.drop(columns=['recorded_by'], inplace=True)
# feature engineering
# X['pump_age'] = X['date_recorded'].dt.year - X['construction_year']
# X['date_recorded'] = X['date_recorded'].dt.year
# drop datetime col
# X.drop(columns='date_recorded', inplace=True)
# drop high-cardinality
drop_cols = [col for col in X.select_dtypes('object').columns if X[col].nunique() > 100]
X.drop(columns=drop_cols, inplace=True)
# drop features with lots of NaN values
X.dropna(axis=1, thresh=len(X)*0.8, inplace=True)
    # Create the binary target from 'status_group' (only present in the training data;
    # the test features have no 'status_group' column)
    if 'status_group' in X.columns:
        X['needs_repair'] = X['status_group'].apply(lambda x: 0 if x == 'functional' else 1)
        X.drop(columns='status_group', inplace=True)
# # Convert date_recorded to datetime
# X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# # Extract components from date_recorded, then drop the original column
# X['year_recorded'] = X['date_recorded'].dt.year
# X['month_recorded'] = X['date_recorded'].dt.month
# X['day_recorded'] = X['date_recorded'].dt.day
# X = X.drop(columns='date_recorded')
# # Engineer feature: how many years from construction_year to date_recorded
# X['years'] = X['year_recorded'] - X['construction_year']
# # Drop recorded_by (never varies) and id (always varies, random)
# unusable_variance = ['recorded_by', 'id']
# X = X.drop(columns=unusable_variance)
# # Drop duplicate columns
# duplicate_columns = ['quantity_group']
# X = X.drop(columns=duplicate_columns)
# # About 3% of the time, latitude has small values near zero,
# # outside Tanzania, so we'll treat these like null values
# X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# # When columns have zeros and shouldn't, they are like null values
# cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
# for col in cols_with_zeros:
# X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
df = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv', parse_dates=['date_recorded'], na_values=[0, -2.000000e-08]),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')).set_index('id')
df = wrangle(df)
# Read test_features.csv & sample_submission.csv
X_test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv', parse_dates=['date_recorded'], na_values=[0, -2.000000e-08], index_col='id')
X_test = wrangle(X_test)
# sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
# target = 'status_group'
# train, val = train_test_split(train, test_size=len(test),
# stratify=train[target], random_state=42)
# # Arrange data into X features matrix and y target vector
# X_train = train.drop(columns=target)
# y_train = train[target]
# X_val = val.drop(columns=target)
# y_val = val[target]
# X_test = test
# # Make pipeline!
# pipeline = make_pipeline(
# FunctionTransformer(wrangle, validate=False),
# ce.OrdinalEncoder(),
# SimpleImputer(strategy='mean'),
# RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
# )
# # Fit on train, score on val
# pipeline.fit(X_train, y_train)
# y_pred = pipeline.predict(X_val)
# print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
EDA * How can we transform our target so that this is a binary classification problem?
###Code
# print(df.info())
# print(df.shape)
X_test.head()
df.head()
# df['status_group'].value_counts()
# np.where(df['status_group'] == 'functional', 0, 1)
# df['needs_repair'] = df['status_group'].apply(lambda x: 0 if x == 'functional' else 1)
###Output
_____no_output_____
###Markdown
Split Data
###Code
# split tv / fm
target = 'needs_repair'
y = df[target]
X = df.drop(columns=target)
# train-val split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
# sanity check
assert len(X_train) + len(X_val) == len(X)
###Output
_____no_output_____
###Markdown
Baseline
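The cell below reports the majority-class accuracy on the training set. The same baseline can be checked against the validation set too (a small sketch, assuming `y_train` and `y_val` from the split above):

```python
# Accuracy of always predicting the most common training class, scored on validation data
majority_class = y_train.mode()[0]
print('Baseline Validation Accuracy:', (y_val == majority_class).mean())
```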
###Code
print('Baseline Accuracy:', y_train.value_counts(normalize=True).max())
###Output
Baseline Accuracy: 0.5437530959919019
###Markdown
Build Model* `OrdinalEncoder`* `SimpleImputer`* `RandomForestClassifier`
###Code
model = make_pipeline(
OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(
n_estimators=50,
n_jobs=-1,
random_state=42
)
)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Check Metrics**Accuracy**
###Code
print('Training Accuracy:', model.score(X_train, y_train))
print('Validation Accuracy:', model.score(X_val, y_val))
###Output
Training Accuracy: 0.9982554758674161
Validation Accuracy: 0.8095279117849759
###Markdown
**Confusion Matrix**
###Code
plot_confusion_matrix(
model,
X_val, # use validation data only
y_val,
values_format='.0f',
display_labels=['no repair needed', 'needs repair']
)
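# Note: plot_confusion_matrix is deprecated (see the FutureWarning in the output).
# The equivalent call with the newer API would be (a sketch):
# from sklearn.metrics import ConfusionMatrixDisplay
# ConfusionMatrixDisplay.from_estimator(
#     model, X_val, y_val,
#     values_format='.0f',
#     display_labels=['no repair needed', 'needs repair']
# )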
###Output
C:\Users\jeffkang\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\deprecation.py:87: FutureWarning: Function plot_confusion_matrix is deprecated; Function `plot_confusion_matrix` is deprecated in 1.0 and will be removed in 1.2. Use one of the class methods: ConfusionMatrixDisplay.from_predictions or ConfusionMatrixDisplay.from_estimator.
warnings.warn(msg, category=FutureWarning)
###Markdown
**Recall:** Of those pumps that actually needed repair, what proportion did you correctly predict as needing repair?
###Code
print('Recall Score:', 4042 / (4042 + 1262))
###Output
Recall Score: 0.7620663650075414
###Markdown
**Precision:** Of all the pumps that you predicted as needing repair, what proportion actually needed repair?
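Both of these hand calculations can be cross-checked with scikit-learn (a sketch, assuming `model`, `X_val`, and `y_val` from the cells above; class `1` = "needs repair" is the positive class):

```python
from sklearn.metrics import precision_score, recall_score

y_pred_val = model.predict(X_val)
print('Precision:', precision_score(y_val, y_pred_val))
print('Recall:   ', recall_score(y_val, y_pred_val))
```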
###Code
print('Precision Score:', 4042 / (4042 + 949))
###Output
Precision Score: 0.8098577439390904
###Markdown
**Classification Report**
###Code
print(classification_report(y_val, model.predict(X_val), target_names=['no repair needed', 'needs repair']))
###Output
precision recall f1-score support
no repair needed 0.81 0.85 0.83 6304
needs repair 0.81 0.76 0.79 5304
accuracy 0.81 11608
macro avg 0.81 0.81 0.81 11608
weighted avg 0.81 0.81 0.81 11608
###Markdown
Case StudyLet's say that it costs the Tanzanian government $100 to inspect a water pump, and there is only funding for 2,000 pump inspections
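The numbers above imply a fixed inspection budget, which is what the "Funds wasted" calculations below are measured against (a tiny sketch using the figures from the text):

```python
cost_per_inspection = 100   # dollars per inspection, from the case study
n_inspections = 2000        # inspections the budget covers
print(f'Total inspection budget: ${cost_per_inspection * n_inspections:,}')
```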
###Code
n_inspections = 2000
###Output
_____no_output_____
###Markdown
Scenario 1: Choose pumps randomly
###Code
repair_prob = y_val.value_counts(normalize=True).min()
print('Inspections conducted:', n_inspections)
print('Pumps repaired:', round(n_inspections * repair_prob))
###Output
Inspections conducted: 2000
Pumps repaired: 914
###Markdown
Scenario 2: Using our model "out of the box"
###Code
data = {'y_val': y_val, 'y_pred': model.predict(X_val)}
results = pd.DataFrame(data)
mask = results['y_pred'] == 1
sample = results[mask].sample(n_inspections)
sample.head()
print('Inspections conducted:', n_inspections)
print('Pumps repaired:', sample['y_val'].sum())
print('Funds wasted:', (n_inspections - sample.y_val.sum()) * 100)
###Output
Inspections conducted: 2000
Pumps repaired: 1642
Funds wasted: 35800
###Markdown
Scenario 3: We emphasize **precision** in our model, and only select pumps that our model is very certain (>0.9) need repair.
###Code
data = {'y_val': y_val, 'y_pred_proba': model.predict_proba(X_val)[:,-1]}
results = pd.DataFrame(data)
display(results.head())
threshold = 0.9
mask = results['y_pred_proba'] > threshold
sample = results[mask].sample(n_inspections)
print('Inspections conducted:',n_inspections)
print('Pumps repaired:',sample['y_val'].sum())
print('Funds wasted:',(n_inspections - sample.y_val.sum()) * 100)
###Output
_____no_output_____
###Markdown
Interlude: Beware of leakageIf you leave 'status_group' in your feature matrix, you'll have **leakage** Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22! How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context)) How many correct predictions of "non functional"? How many total predictions of "non functional"? What's the precision for "non functional"? How many actual "non functional" waterpumps? What's the recall for "non functional"? Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
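A quick guard against the leakage issue flagged at the start of this section (a sketch, assuming the `X` and `df` built earlier in this notebook): neither the engineered target nor its source column should survive in the feature matrix.

```python
# If either assertion fails, the model can "cheat" by reading the answer from the features
assert 'needs_repair' not in X.columns
assert 'status_group' not in X.columns
```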
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix: How many total predictions of "True" ("non functional" or "functional needs repair") ? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution Change the threshold Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. 
But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
# y_pred_proba was not assigned in this notebook's cells above, so compute it here:
# predicted probability of the positive "needs repair" class
y_pred_proba = model.predict_proba(X_val)[:, -1]
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
If you have matplotlib version 3.1.1 then seaborn heatmaps will be cut offBecause of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.This code checks your matplotlib version:
###Code
import matplotlib
print(matplotlib.__version__)
###Output
_____no_output_____
###Markdown
If you have version 3.1.1, you can downgrade if you want, but you don't have to, I just want you to be aware of the issue. Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn's [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.htmlconfusion-matrix) function just returns a matrix of numbers, which is hard to read.Scikit-learn docs have an example [plot_confusion_matrix](https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html) function. The output looks good, but the code is long and hard to understand. It's written just with numpy and matplotlib.We can write our own function using pandas and seaborn. The code will be shorter and easier to understand. Let's write the function iteratively. How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context)) How many correct predictions of "non functional"? How many total predictions of "non functional"? What's the precision for "non functional"? How many actual "non functional" waterpumps? What's the recall for "non functional"? Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix: How many total predictions of "True" ("non functional" or "functional needs repair") ? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution Change the threshold Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. 
But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
# y_pred_proba was not assigned in this notebook's cells above, so compute it here:
# predicted probability of the positive class from the refit pipeline
y_pred_proba = pipeline.predict_proba(X_val)[:, -1]
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22! How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context)) How many correct predictions of "non functional"? How many total predictions of "non functional"? What's the precision for "non functional"? How many actual "non functional" waterpumps? What's the recall for "non functional"? Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
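As a quick sketch of how the questions above can be answered in code (assuming the `y_val` and `y_pred` computed in the cells above), every count we need can be read off the confusion matrix:
###Code
# A sketch, assuming y_val and y_pred from the cells above
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.utils.multiclass import unique_labels

labels = unique_labels(y_val)              # class names, in sorted order
cm = confusion_matrix(y_val, y_pred, labels=labels)

correct = np.trace(cm)                     # correct predictions sit on the diagonal
total = cm.sum()                           # every prediction is counted exactly once
print('Correct:', correct, 'Total:', total, 'Accuracy:', correct / total)

# Precision & recall for one class, e.g. 'non functional'
i = list(labels).index('non functional')
precision = cm[i, i] / cm[:, i].sum()      # correct / everything predicted as that class
recall = cm[i, i] / cm[i, :].sum()         # correct / everything actually in that class
print("Precision ('non functional'):", precision)
print("Recall ('non functional'):", recall)

# Cross-check against scikit-learn's own report
print(classification_report(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Returning to the scenario: the test set holds the waterpumps we have features for but no labels.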
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix: How many total predictions of "True" ("non functional" or "functional needs repair")? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution Change the threshold Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = (# of recommended items @k that are relevant) / (# of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we successfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections.
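As a quick sketch of the ranking-and-threshold idea above (assuming the fitted `pipeline`, `X_val`, and the binary `y_val` from the cells above), here is one way to get predicted probabilities, move the threshold, and compute Precision@K for k=2,000:
###Code
# A sketch, assuming the fitted pipeline, X_val, and the binary y_val from above
import pandas as pd

# Predicted probability that a waterpump needs repair (the True class)
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]

# Option 1: pick a threshold and count how many positive predictions it gives
threshold = 0.5
print('Positive predictions at threshold', threshold, ':',
      (y_pred_proba > threshold).sum())

# Option 2: take exactly the k=2,000 highest-probability waterpumps
k = 2000
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top_k = results.sort_values('y_pred_proba', ascending=False).head(k)

precision_at_k = top_k['y_val'].sum() / k  # relevant recommendations / k
print('Precision@2000:', precision_at_k)
###Output
_____no_output_____
###Markdown
The ROC AUC cell below reuses this `y_pred_proba` array.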
But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import sklearn
print(sklearn.__version__)
###Output
0.22.2.post1
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
from sklearn.metrics import plot_confusion_matrix
from matplotlib import pyplot as plt
plt.rcParams['figure.dpi'] = 300
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_predictions = 7005 + 332 + 4351
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_predictions = 7005 + 171 + 622 + 555 + 332 + 156 + 1098 + 68 + 4351
total_predictions
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct_predictions / total_predictions
accuracy_score(y_val, y_pred)
y_val.shape, y_pred.shape
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview [Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
# classification_report shows precision, recall, and f1-score for each class
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
cm;
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_nonfunctional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_predictions_nonfunctional = 4351 + 156 + 622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
correct_predictions_nonfunctional / total_predictions_nonfunctional
print(classification_report(y_val, y_pred)) # for reference to check our math
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_nonfunctional = 1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
correct_predictions_nonfunctional / actual_nonfunctional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
pipeline.predict(X_val)
# What do the predicted probabilities look like?
#
pipeline.predict_proba(X_val)
pipeline.predict_proba(X_val)[:, 1]
pipeline.predict_proba(X_val)[:, 0]
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
import seaborn as sns
threshold = 0.8
# pipeline.predict_proba(X_val)[:, 1] > threshold
y_pred_prob = pipeline.predict_proba(X_val)[:, 1]
sns.distplot(y_pred_prob)
plt.axvline(threshold, color='red')
sum(pipeline.predict_proba(X_val)[:, 1] > threshold)
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_prob': y_pred_prob})
results
top2000 = results.sort_values(by='y_pred_prob', ascending=False)[:2000]
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: Predict 1972 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations / trips
print('Precision @ k=2000', precision_at_k_2000)
###Output
Precision @ k=2000 0.986
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = (# of recommended items @k that are relevant) / (# of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we successfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better.
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
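As a quick check of that baseline claim (a sketch, assuming the binary `y_val` from the cells above): a constant score for every waterpump ranks nothing, so its ROC AUC is 0.5.
###Code
# A sketch, assuming the binary y_val from the cells above:
# a constant "probability" ranks nothing, so its ROC AUC is 0.5
import numpy as np
from sklearn.metrics import roc_auc_score

baseline_proba = np.full(len(y_val), 0.5)
print('Majority-class baseline ROC AUC:', roc_auc_score(y_val, baseline_proba))
###Output
_____no_output_____
###Markdown
The next cell computes the curve and score for our model.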
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_prob)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_prob)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22! How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context)) How many correct predictions of "non functional"? How many total predictions of "non functional"? What's the precision for "non functional"? How many actual "non functional" waterpumps? What's the recall for "non functional"? Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
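As a quick sketch of how the questions above can be answered in code (assuming the `y_val` and `y_pred` computed in the cells above), every count we need can be read off the confusion matrix:
###Code
# A sketch, assuming y_val and y_pred from the cells above
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.utils.multiclass import unique_labels

labels = unique_labels(y_val)              # class names, in sorted order
cm = confusion_matrix(y_val, y_pred, labels=labels)

correct = np.trace(cm)                     # correct predictions sit on the diagonal
total = cm.sum()                           # every prediction is counted exactly once
print('Correct:', correct, 'Total:', total, 'Accuracy:', correct / total)

# Precision & recall for one class, e.g. 'non functional'
i = list(labels).index('non functional')
precision = cm[i, i] / cm[:, i].sum()      # correct / everything predicted as that class
recall = cm[i, i] / cm[i, :].sum()         # correct / everything actually in that class
print("Precision ('non functional'):", precision)
print("Recall ('non functional'):", recall)

# Cross-check against scikit-learn's own report
print(classification_report(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Returning to the scenario: the test set holds the waterpumps we have features for but no labels.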
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix: How many total predictions of "True" ("non functional" or "functional needs repair")? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution Change the threshold Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = (# of recommended items @k that are relevant) / (# of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we successfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections.
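As a quick sketch of the ranking-and-threshold idea above (assuming the fitted `pipeline`, `X_val`, and the binary `y_val` from the cells above), here is one way to get predicted probabilities, move the threshold, and compute Precision@K for k=2,000:
###Code
# A sketch, assuming the fitted pipeline, X_val, and the binary y_val from above
import pandas as pd

# Predicted probability that a waterpump needs repair (the True class)
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]

# Option 1: pick a threshold and count how many positive predictions it gives
threshold = 0.5
print('Positive predictions at threshold', threshold, ':',
      (y_pred_proba > threshold).sum())

# Option 2: take exactly the k=2,000 highest-probability waterpumps
k = 2000
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top_k = results.sort_values('y_pred_proba', ascending=False).head(k)

precision_at_k = top_k['y_val'].sum() / k  # relevant recommendations / k
print('Precision@2000:', precision_at_k)
###Output
_____no_output_____
###Markdown
The ROC AUC cell below reuses this `y_pred_proba` array.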
But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
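As a quick check of that baseline claim (a sketch, assuming the binary `y_val` from the cells above): a constant score for every waterpump ranks nothing, so its ROC AUC is 0.5.
###Code
# A sketch, assuming the binary y_val from the cells above:
# a constant "probability" ranks nothing, so its ROC AUC is 0.5
import numpy as np
from sklearn.metrics import roc_auc_score

baseline_proba = np.full(len(y_val), 0.5)
print('Majority-class baseline ROC AUC:', roc_auc_score(y_val, baseline_proba))
###Output
_____no_output_____
###Markdown
The next cell computes the curve and score for our model.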
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, xticks_rotation='vertical', values_format='.0f', cmap='Blues');
plot_confusion_matrix(pipeline, X_val, y_val, xticks_rotation='vertical', values_format='.2f', cmap='Blues', normalize='true');
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
cm
normalize_cm = cm/cm.sum(axis=1)[:, np.newaxis]
normalize_cm
cm.sum(axis=1)[:, np.newaxis].shape
import seaborn as sns
from sklearn.utils.multiclass import unique_labels
cols = unique_labels(y_val)
df_cm = pd.DataFrame(cm, columns=cols, index=cols)
plt.figure(figsize=(10,7))
sns.heatmap(df_cm, annot=True, cmap='Blues', fmt='.0f');
cols = unique_labels(y_val)
df_cm = pd.DataFrame(normalize_cm, columns=cols, index=cols)
plt.figure(figsize=(10,7))
sns.heatmap(df_cm, annot=True, cmap='Blues', fmt='.2f');
def plot_cm(y_val, y_pred, normalize=False):
cols = unique_labels(y_val)
cm = confusion_matrix(y_val, y_pred)
if normalize:
cm = cm/cm.sum(axis=1)[:, np.newaxis]
fmt = '.2f'
else:
fmt = '.0f'
    df_cm = pd.DataFrame(cm, columns=['Predicted ' + str(col) for col in cols], index=['Actual ' + str(col) for col in cols])
plt.figure(figsize=(10,7))
sns.heatmap(df_cm, annot=True, cmap='Blues', fmt=fmt);
plot_cm(y_val, y_pred)
plot_cm(y_val, y_pred, normalize=True)
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
7005 + 332 + 4351
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
len(y_val)
cm.sum()
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
11688/14358
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview [Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
y_pred_baseline = ['functional'] * len(y_pred)
print(classification_report(y_val, y_pred_baseline))
###Output
precision recall f1-score support
functional 0.54 1.00 0.70 7798
functional needs repair 0.00 0.00 0.00 1043
non functional 0.00 0.00 0.00 5517
accuracy 0.54 14358
macro avg 0.18 0.33 0.23 14358
weighted avg 0.29 0.54 0.38 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))
###Code
plot_cm(y_val, y_pred)
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
How many correct predictions of "non functional"?
###Code
correct_pred_non_func = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_pred_non_func = 622 + 156 + 4351
total_pred_non_func
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
correct_pred_non_func/total_pred_non_func
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_non_func = 1098 + 68 + 4351
actual_non_func
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
correct_pred_non_func/actual_non_func
#f1 score
2* (.848*.788)/(.848 + .788)
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_cm(y_val, y_pred)
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
y_pred.shape
y_pred.sum()
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
pipeline.predict_proba(X_val)
pipeline.predict(X_val)
pipeline.predict_proba(X_val)[:,1]
pipeline.predict_proba(X_val)[:,1] > 0.5
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
sns.distplot(y_pred_proba);
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
thres = 0.92
y_pred = y_pred_proba > thres
ax = sns.distplot(y_pred_proba)
ax.axvline(thres, color='red')
pd.Series(y_pred).value_counts()
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
def set_thres(y_val, y_pred_proba, thres=0.5):
y_pred = y_pred_proba > thres
ax = sns.distplot(y_pred_proba)
ax.axvline(thres, color='red')
plt.show()
print(classification_report(y_val, y_pred))
plot_cm(y_val, y_pred)
set_thres(y_val, y_pred_proba)  # evaluate the default 0.5 threshold against the true labels
#total predictions
y_pred.sum()
from ipywidgets import interact, fixed
interact(set_thres, y_val=fixed(y_val), y_pred_proba=fixed(y_pred_proba), thres=(0,1, 0.1))
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
results
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000
top2000.sample(50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
print(f'Baseline: {2000*.46} waterpump repairs in 2000 trips')
print(f'Model: {top2000["y_val"].sum()} waterpump repairs in 2000 trips')
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
relevant_recom = top2000["y_val"].sum()
n_trips = 2000
precision_at_K = relevant_recom/n_trips
precision_at_K
###Output
_____no_output_____
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
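# For comparison (a sketch, not part of the original cell): scoring on hard
# True/False predictions instead of probabilities gives a threshold-dependent,
# usually lower AUC. Here y_pred still reflects the 0.92 threshold chosen above.
roc_auc_score(y_val, y_pred)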
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Follow Along Scikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val,
values_format='.0f', xticks_rotation='vertical', cmap='Blues')
plot_confusion_matrix(pipeline, X_val, y_val,
normalize='true',
values_format='.2f',
xticks_rotation='vertical', cmap='Blues')
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
cm
cm.sum(axis=1)
cm.sum(axis=1)[:, np.newaxis]
normalized_cm = cm/cm.sum(axis=1)[:, np.newaxis]
normalized_cm
import seaborn as sns
from sklearn.utils.multiclass import unique_labels
def plot_cm(y_val, y_pred, normalize=False):
cols = unique_labels(y_val)
cm = confusion_matrix(y_val, y_pred)
if normalize:
cm = cm/cm.sum(axis=1)[:, np.newaxis]
fmt = '.2f'
else:
fmt = '.0f'
df_cm = pd.DataFrame(cm, columns = ['Predicted ' + str(col) for col in cols],
index = ['Actual ' + str(col) for col in cols])
plt.figure(figsize=(10,8))
sns.heatmap(df_cm, annot=True, cmap='Blues', fmt=fmt)
plot_cm(y_val, y_pred, normalize=True)
unique_labels(y_val)
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
7005 + 332 + 4351
np.diag(cm).sum()
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
len(y_val)
cm.sum()
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
(7005 + 332 + 4351)/len(y_pred)
np.diag(cm).sum()/cm.sum()
from sklearn.metrics import accuracy_score
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview [Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))
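To make the dog example concrete, here is a tiny worked sketch (not from the original notebook) that just plugs the counts into the precision and recall formulas:
###Code
# Dog-photo example from the quote above: 5 true positives, 3 false positives,
# and 7 false negatives (the dogs the program missed).
tp, fp, fn = 5, 3, 7
precision = tp / (tp + fp)   # 5/8 = 0.625
recall = tp / (tp + fn)      # 5/12 ≈ 0.417
precision, recall
###Output
_____no_output_____
###Markdown
With those definitions in mind, we can read precision and recall for each class straight off the confusion matrix below.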
###Code
plot_cm(y_val, y_pred)
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_pred_non_func = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_pred_non_func = 4351 + 156 + 622
total_pred_non_func
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
precision_non_func = correct_pred_non_func/total_pred_non_func
precision_non_func
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_non_func = 1098 + 68 + 4351
actual_non_func
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
recall_non_func = correct_pred_non_func/actual_non_func
recall_non_func
print(classification_report(y_val, y_pred))
f1_score_non_func = 2*(precision_non_func*recall_non_func)/(precision_non_func + recall_non_func)
f1_score_non_func
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model. Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target. Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_cm(y_val, y_pred)
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
y_pred
5032+977
y_pred.sum()
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
pipeline.predict_proba(X_val)
pipeline.predict(X_val)
pipeline.predict_proba(X_val)[:, 1] > 0.5
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
sns.distplot(y_pred_proba)
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
thres = 0.9
y_pred = y_pred_proba > thres
ax = sns.distplot(y_pred_proba)
ax.axvline(thres, color='red')
pd.Series(y_pred).value_counts()
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
def set_thres(y_true, y_pred_proba, thres=0.5):
y_pred = y_pred_proba > thres
ax = sns.distplot(y_pred_proba)
ax.axvline(thres, color='red')
plt.show()
print(classification_report(y_true, y_pred))
plot_cm(y_true, y_pred)
set_thres(y_val, y_pred_proba, thres=0.6)
from ipywidgets import interact, fixed
interact(set_thres,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
thres=(0, 1, 0.05))
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs. Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs. Let's look at a random sample of 50 out of these top 2,000:
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
n_trips = 2000
print(f'Baseline: {n_trips*0.46} waterpump repairs in {n_trips} trips')
print(f"With model: {top2000['y_val'].sum()} waterpump repairs in {n_trips} trips")
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: 1972 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_K = top2000['y_val'].sum()/n_trips
print(f'Precision @ K=2000: {precision_at_K}')
###Output
Precision @ K=2000: 0.986
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
y_pred
roc_auc_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22! How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context)) How many correct predictions of "non functional"? How many total predictions of "non functional"? What's the precision for "non functional"? How many actual "non functional" waterpumps? What's the recall for "non functional"? Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
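Here is a minimal sketch (assuming the pipeline above has already been fit) that answers the questions listed above: the confusion matrix, the correct and total prediction counts, the accuracy, and the per-class precision and recall:
###Code
# Sketch: answer the confusion-matrix questions above with scikit-learn.
from sklearn.metrics import confusion_matrix, classification_report
cm = confusion_matrix(y_val, y_pred)      # rows = actual, columns = predicted
print(cm)
print('Correct predictions:', np.diag(cm).sum())
print('Total predictions:', cm.sum())
print('Accuracy:', np.diag(cm).sum() / cm.sum())
print(classification_report(y_val, y_pred))   # precision & recall per class
###Output
_____no_output_____
###Markdown
Back to the scenario: the roughly 14,000 waterpumps we know something about, but have no labels for, correspond to the test set.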
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model. Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target. Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix: How many total predictions of "True" ("non functional" or "functional needs repair") ? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution Change the threshold Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. 
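Here is a minimal sketch (assuming the binary target and refit pipeline from the cells above) of the steps just described: get predicted probabilities, apply a threshold, keep the top 2,000, and compute precision@K. It also defines `y_pred_proba`, which the ROC cells below reuse.
###Code
# Sketch of the threshold / top-K workflow described above.
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]   # probability of the positive class (True = needs attention)
threshold = 0.5
print('Positive predictions at threshold 0.5:', (y_pred_proba > threshold).sum())
# Rank by predicted probability and keep the 2,000 most likely to need repair
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
n_trips = 2000
print(f'Relevant recommendations in the top {n_trips}:', top2000['y_val'].sum())
print('Precision@K:', top2000['y_val'].sum() / n_trips)
###Output
_____no_output_____
###Markdown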
But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow Along Scikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
The heatmap shows how the predictions are distributed. The important thing to focus on is the diagonal, which holds the correct predictions (7005 + 332 + 4351 = 11,688 here). Anything off the diagonal is where our model makes wrong predictions. How many correct predictions were made?
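One way to check (a sketch, not in the original notebook) is to compute the matrix with scikit-learn and sum its diagonal:
###Code
# Sketch: correct predictions = sum of the confusion-matrix diagonal
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
np.diag(cm).sum()
###Output
_____no_output_____
###Markdown
Or read the numbers off the heatmap above: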
###Code
correct_predictions = 7005 + 332 + 4351  # sum of the diagonal of the confusion matrix
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
# total predictions = the sum of every cell in the confusion matrix,
# which is just the number of rows in the validation set
len(y_val)
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
#correct/total
accuracy_score(y_val, y_pred)
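# Equivalent manual check (sketch, not in the original cell):
# the fraction of predictions that match the true labels.
sum(y_pred == y_val) / len(y_pred)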
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview [Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
y_pred
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))
###Code
cr = classification_report(y_val, y_pred)
print(cr)
#precision = true_positives/ (true_positives + false_positives)
#recall = true_positives / (true_positives + false_negatives)
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_nonfunctional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_predictions_nonfunctional = 622 + 156 + 4351
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
correct_predictions_nonfunctional / total_predictions_nonfunctional
# same as the precision reported for 'non functional'
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_nonfunctional = 1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
recall = correct_predictions_nonfunctional / actual_nonfunctional
recall
print(cr)
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections. You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model. Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
# baseline of inspections
2000 * 0.46
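# Sketch: the same baseline stated explicitly
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair about {0.46 * random_inspections:.0f} waterpumps')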
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target. Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format ='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
5032 + 977
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
pipeline.predict_proba(X_val)
# array of the predicted list
pipeline.predict(X_val)
# predicted probabilites for the positive class
pipeline.predict_proba(X_val)[:, 1]
# predicted true values
threshold = 0.5
sum(pipeline.predict_proba(X_val)[:, 1] > threshold)
# For a random forest, predict_proba averages the trees' predicted probabilities,
# roughly the share of trees voting for each class.
threshold = 0.925
sum(pipeline.predict_proba(X_val)[:, 1] > threshold)
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
import seaborn as sns
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
ax = sns.distplot(y_pred_proba)
threshold = .5
ax.axvline(threshold, color='red');
from ipywidgets import interact, fixed
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def my_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
def set_threshold(y_true, y_pred_proba, threshold=0.5):
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.show()
print(classification_report(y_true, y_pred))
my_confusion_matrix(y_true, y_pred)
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0, 1, 0.02));
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs. Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs. Let's look at a random sample of 50 out of these top 2,000:
###Code
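# Sketch (the original cell was left blank): rank validation rows by predicted
# probability, keep the 2,000 highest, and look at a random sample of 50.
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000.sample(n=50)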
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
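# Sketch (cell left blank in the original): compare the baseline to the model.
n_trips = 2000
print(f'Baseline: {n_trips * 0.46:.0f} waterpump repairs in {n_trips} trips')
print(f"With model: {top2000['y_val'].sum()} waterpump repairs in {n_trips} trips")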
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
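# Sketch (cell left blank in the original): precision for the top K = 2,000 predictions.
precision_at_k = top2000['y_val'].sum() / n_trips
precision_at_k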
###Output
_____no_output_____
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
# The ROC curve is a graphical representation of the trade-off between
# the true positive rate and the false positive rate
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
y_val.shape, y_pred_proba.shape
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
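# For comparison (a sketch, not part of the original cell): AUC computed from the
# hard True/False predictions is threshold-dependent and typically lower than
# AUC computed from the predicted probabilities.
roc_auc_score(y_val, y_pred)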
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct = 7005 + 332 + 4351
correct
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total = 7005+171+622+555+332+156+1098+68+4351
total
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct / total
accuracy_score(y_val, y_pred)
sum(y_pred == y_val) / len(y_pred)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
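# Quick arithmetic check of the Wikipedia dog/cat example quoted above (a sketch):
# 5 of the 8 predicted dogs are real dogs, out of 12 dogs in the picture.
wiki_precision = 5 / 8    # ~0.625
wiki_recall = 5 / 12      # ~0.417
print(wiki_precision, wiki_recall)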
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f',xticks_rotation='vertical')
cm
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
corr_non = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_non = 622+156+4351
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
corr_non / total_non
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_non = 1098+68+4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
corr_non / actual_non
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but needs repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
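# A minimal sketch, assuming the refit boolean y_val / y_pred from the cells above:
# the total number of positive ("True") predictions is the sum of the
# "predicted True" column of the confusion matrix, or simply the count of True predictions.
from sklearn.metrics import confusion_matrix
cm_binary = confusion_matrix(y_val, y_pred)
cm_binary[:, 1].sum(), y_pred.sum()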
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
pipeline.predict_proba(X_val)
pipeline.predict(X_val)
pipeline.predict_proba(X_val)[:,1]
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
threshold = 0.9
sum(pipeline.predict_proba(X_val)[:,1] > threshold)
import seaborn as sns
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
ax = sns.distplot(y_pred_proba)
threshold = 0.9
ax.axvline(threshold, color='red')
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
from ipywidgets import interact, fixed
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def my_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
def set_threshold(y_true, y_pred_proba, threshold=0.5):
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.show()
print(classification_report(y_true, y_pred))
my_confusion_matrix(y_true, y_pred)
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0, 1, 0.02));
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
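# A sketch, mirroring the worked version later in this document: rank the
# validation rows by predicted probability, keep the 2,000 highest, then
# inspect a random sample of 50 of those recommendations.
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000.sample(n=50)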
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
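# A sketch using `top2000` from the cell above: count how many of the 2,000
# recommendations are actually True (non functional or needs repair), and
# compare against the ~920 expected from random selection (46% of 2,000).
trips = 2000
relevant_recommendations = top2000['y_val'].sum()
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
print(f'With model: {relevant_recommendations} waterpump repairs predicted in {trips} trips')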
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
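# Precision@k with k = 2,000: relevant recommendations / recommendations made
# (a sketch using the variables defined in the cells above).
precision_at_k_2000 = relevant_recommendations / trips
print('Precision @ k=2000', precision_at_k_2000)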
###Output
_____no_output_____
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
**Recap:** ROC AUC measures how well a classifier ranks predicted probabilities. So, when you get your classifier’s ROC AUC score, you need to use predicted probabilities, not discrete predictions. Your code may look something like this:```pythonfrom sklearn.metrics import roc_auc_scorey_pred_proba = model.predict_proba(X_test_transformed)[:, -1] Probability for last classprint('Test ROC AUC:', roc_auc_score(y_test, y_pred_proba))```ROC AUC ranges from 0 to 1. Higher is better. A naive majority class baseline will have an ROC AUC score of 0.5.
###Code
from sklearn.metrics import roc_auc_score
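# Note: `model`, `X_test_transformed`, and `y_test` are placeholder names from the
# recap above; substitute your own fitted pipeline and transformed test data.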
y_pred_proba = model.predict_proba(X_test_transformed)[:, -1] # Probability for last class
print('Test ROC AUC:', roc_auc_score(y_test, y_pred_proba))
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22! How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context)) How many correct predictions of "non functional"? How many total predictions of "non functional"? What's the precision for "non functional"? How many actual "non functional" waterpumps? What's the recall for "non functional"? Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but needs repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix: How many total predictions of "True" ("non functional" or "functional needs repair") ? We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution Change the threshold Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. 
But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
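# y_pred_proba is not computed earlier in this copy of the notebook; as a sketch
# (matching the other copies), use the predicted probability of the positive class:
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]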
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
If you have matplotlib version 3.1.1 then seaborn heatmaps will be cut offBecause of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.This code checks your matplotlib version:
###Code
import matplotlib
print(matplotlib.__version__)
###Output
3.1.2
###Markdown
If you have version 3.1.1, you can downgrade if you want, but you don't have to, I just want you to be aware of the issue. Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow AlongScikit-learn's [confusion matrix](https://scikit-learn.org/stable/modules/model_evaluation.htmlconfusion-matrix) function just returns a matrix of numbers, which is hard to read.Scikit-learn docs have an example [plot_confusion_matrix](https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html) function. The output looks good, but the code is long and hard to understand. It's written just with numpy and matplotlib.We can write our own function using pandas and seaborn. The code will be shorter and easier to understand. Let's write the function iteratively.
###Code
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f', xticks_rotation = 'vertical')
%config IPCompleter.greedy=True
###Output
_____no_output_____
###Markdown
How many correct predictions were made? How many total predictions were made? What was the classification accuracy? Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report) Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f', xticks_rotation = 'vertical')
cm
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
4351 + 156 + 622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
4351/5129
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
4351 / 5517
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but needs repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format = '.0f')
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
6009
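# 6009 = 5032 true positives + 977 false positives, i.e. the sum of the
# "predicted True" column of the confusion matrix above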
###Output
_____no_output_____
###Markdown
We can get predicted probabilities from the model (a random forest here, though the same works for logistic regression), then raise the threshold from .5 to .9 to keep only the highest-priority pumps. We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
#discrete predictions vs predicted probabilities
pipeline.predict(X_val)
pipeline.predict_proba(X_val)
#predicted probabilities for positive class
pipeline.predict_proba(X_val)[:, 1]
pipeline.predict_proba(X_val)[:, 1] > .5
threshold = .9
pipeline.predict_proba(X_val)[:, 1] > threshold
import seaborn as sns
y_pred_probs = pipeline.predict_proba(X_val)[:, 1]
sns.distplot(y_pred_probs)
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
threshold = .50
import matplotlib.pyplot as plt
y_pred_probs = pipeline.predict_proba(X_val)[:, 1]
y_pred = y_pred_probs > threshold
ax = sns.distplot(y_pred_probs)
ax.axvline(threshold, color = "red")
pd.Series(y_pred).value_counts()
threshold = .92
import matplotlib.pyplot as plt
y_pred_probs = pipeline.predict_proba(X_val)[:, 1]
y_pred = y_pred_probs > threshold
ax = sns.distplot(y_pred_probs)
ax.axvline(threshold, color = "red")
pd.Series(y_pred).value_counts()
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def my_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000: So how many of our recommendations were relevant? ... What's the precision for this subset of 2,000 predictions? In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. 
[It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
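# Earlier cells in this copy named the probabilities y_pred_probs; define
# y_pred_proba here (matching the other copies) so the roc_curve call below runs:
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]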
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
# Check scikit-learn version
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_predictions = 7005 + 332 + 4351
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_predictions = 7005 + 171 + 622 + 555 + 332 + 156 + 1098 + 68 + 4351
total_predictions
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct_predictions / total_predictions
accuracy_score(y_val, y_pred)
sum(y_pred == y_val) / len(y_pred)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
cm;
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_nonfunctional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_predictions_nonfunctional = 4351 + 156 + 622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
correct_predictions_nonfunctional / total_predictions_nonfunctional
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_nonfunctional = 1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
correct_predictions_nonfunctional / actual_nonfunctional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but needs repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
5032 + 977
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
# Discrete predictions look like this...
pipeline.predict(X_val)
# Predicted probabilities look like this...
pipeline.predict_proba(X_val)
# Predicted probabilities *for the positive class* ...
pipeline.predict_proba(X_val)[:, 1]
# Make predicted probabilities into discrete predictions,
# using a "threshold"
threshold = 0.50
pipeline.predict_proba(X_val)[:, 1] > threshold
import seaborn as sns
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
sns.distplot(y_pred_proba);
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
threshold = 0.92
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
pd.Series(y_pred).value_counts()
from ipywidgets import interact, fixed
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def my_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
def set_threshold(y_true, y_pred_proba, threshold=0.5):
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.show()
print(classification_report(y_true, y_pred))
my_confusion_matrix(y_true, y_pred)
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0, 1, 0.02));
###Output
_____no_output_____
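###Markdown
Before committing to a single cutoff, it can help to see how the number of positive predictions, precision, and recall move together as the threshold changes. This is a small added sketch (not part of the original lesson code) that assumes `y_val` and `y_pred_proba` from the cells above.
###Code
import pandas as pd
from sklearn.metrics import precision_score, recall_score
# Sweep a few thresholds and tabulate the trade-off
rows = []
for t in [0.30, 0.50, 0.70, 0.90, 0.92, 0.95]:
    y_pred_t = y_pred_proba > t
    rows.append({'threshold': t,
                 'positive predictions': y_pred_t.sum(),
                 'precision': precision_score(y_val, y_pred_t),
                 'recall': recall_score(y_val, y_pred_t)})
pd.DataFrame(rows)
###Output
_____no_output_____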
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
results
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: Predict 1972 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations / trips
print('Precision @ k=2000', precision_at_k_2000)
###Output
Precision @ k=2000 0.986
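###Markdown
The same arithmetic can be wrapped in a small reusable helper. This is an added sketch (not part of the original lesson code); it assumes `y_true` is a boolean target and `y_score` holds predicted probabilities for the positive class, like `y_val` and `y_pred_proba` above.
###Code
import pandas as pd
def precision_at_k(y_true, y_score, k):
    """Precision among the k observations with the highest predicted probabilities."""
    ranked = pd.DataFrame({'y_true': y_true, 'y_score': y_score})
    top_k = ranked.sort_values(by='y_score', ascending=False)[:k]
    return top_k['y_true'].sum() / k
# Should reproduce the number computed above
precision_at_k(y_val, y_pred_proba, k=2000)
###Output
_____no_output_____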
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
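To see why the naive baseline scores 0.5, here is a tiny synthetic check (added as an illustration, not part of the original lesson code): if every observation gets the same predicted probability, the classifier does not rank positives above negatives at all, and ROC AUC comes out to exactly 0.5.
###Code
import numpy as np
from sklearn.metrics import roc_auc_score
y_true_toy = np.array([0, 0, 1, 1])
constant_scores = np.full(4, 0.5)  # same score for everyone, like a majority-class baseline
roc_auc_score(y_true_toy, constant_scores)  # 0.5
###Output
_____no_output_____
###Markdown
Now let's compute the ROC curve and ROC AUC for our actual model: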
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
# Check the scikit-learn version
import sklearn
sklearn.__version__
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, xticks_rotation='vertical', values_format='.0f')
correct_predictions = 7005 + 332 + 4351
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_predictions = 7005 + 171 + 622 + 555 + 332 + 156 + 1098 + 68 + 4351
total_predictions
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct_predictions / total_predictions
accuracy_score(y_val, y_pred)
sum(y_val == y_pred) / len(y_val)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
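The dog example translates directly into code. This tiny added cell just re-does the arithmetic from the quote above: precision divides by what the program *predicted* (8 identified as dogs), while recall divides by what is *actually there* (12 dogs in the picture).
###Code
true_positives = 5    # identified as dogs and actually dogs
false_positives = 3   # identified as dogs but actually cats
actual_dogs = 12      # all dogs in the picture
precision = true_positives / (true_positives + false_positives)
recall = true_positives / actual_dogs
print('Precision:', precision)  # 5/8 = 0.625
print('Recall:', recall)        # 5/12 ≈ 0.417
###Output
_____no_output_____
###Markdown
The same idea applies to our waterpump model's confusion matrix: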
###Code
plot_confusion_matrix(pipeline, X_val, y_val, xticks_rotation='vertical', values_format='.0f')
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
non_functional_correct_pred = 4351
non_functional_correct_pred
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_non_functional_pred = 622 + 156 + 4351
total_non_functional_pred
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
non_functional_precision = non_functional_correct_pred / total_non_functional_pred
non_functional_precision
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
print(classification_report(y_val, y_pred))
non_functional_pumps = 4351 + 68 + 1098
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
non_functional_correct_pred / non_functional_pumps
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, xticks_rotation='vertical', values_format='.0f')
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
total_true_pred = 5032 + 977
total_true_pred
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
# Discrete predictions look like this...
pipeline.predict(X_val)
# Predicted probabilities look like this...
pipeline.predict_proba(X_val)
# Predicted probabilities *for the positive class* ...
pipeline.predict_proba(X_val)[:,1]
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
# Make predicted probabilities into discrete predictions, using a "threshold"
threshold = 0.5
pipeline.predict_proba(X_val)[:,1]>threshold
import seaborn as sns
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
sns.distplot(y_pred_proba);
threshold = 0.92
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
pd.Series(y_pred).value_counts()
from ipywidgets import interact, fixed
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def my_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
def set_threshold(y_true, y_pred_proba, threshold=0.5):
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.show()
print(classification_report(y_true, y_pred))
my_confusion_matrix(y_true, y_pred)
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0, 1, 0.02));
###Output
_____no_output_____
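###Markdown
Each threshold also corresponds to one point on the ROC curve we'll meet later: a true positive rate and a false positive rate. This added sketch (assuming `y_val` and `y_pred_proba` from the cells above) computes both by hand for one threshold.
###Code
from sklearn.metrics import confusion_matrix
threshold = 0.5
y_pred_t = y_pred_proba > threshold
tn, fp, fn, tp = confusion_matrix(y_val, y_pred_t).ravel()
true_positive_rate = tp / (tp + fn)   # of the pumps that need repair, how many did we flag?
false_positive_rate = fp / (fp + tn)  # of the pumps that are fine, how many did we flag anyway?
print('TPR:', true_positive_rate)
print('FPR:', false_positive_rate)
###Output
_____no_output_____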
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities. Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
results
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips.')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips.')
print(f'Precision = {relevant_recommendations / trips}')
###Output
Precision = 0.986
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
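# As an added cross-check (assuming fpr & tpr from roc_curve above): the area under
# the plotted curve can also be approximated with the trapezoidal rule, and it
# should closely match roc_auc_score.
import numpy as np
np.trapz(tpr, fpr)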
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
from sklearn.metrics import plot_confusion_matrix, classification_report
plt.rcParams['figure.dpi'] = 150
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_pred = 7005 + 332 + 4351
correct_pred
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_pred = 7005 + 332 + 4351 + 555 + 1098 + 171 + 68 + 622 + 156
total_pred
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct_pred / total_pred
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
print(classification_report(y_val, y_pred))
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.head()
y_val.head()
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
False 0.82 0.87 0.84 7798
True 0.84 0.77 0.80 6560
accuracy 0.83 14358
macro avg 0.83 0.82 0.82 14358
weighted avg 0.83 0.83 0.82 14358
###Markdown
- Recall: the number of pumps that we correctly identified as needing repair, divided by the total number of pumps that actually need repair. - Precision: the number of pumps that we correctly identified as needing repair, divided by the total number of pumps we predicted as needing repair. Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
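At this point in the notebook the target is already binary (does the pump need repair?), so we can also get these two numbers directly from scikit-learn. This is an added sketch, assuming the refit `pipeline`, `y_val`, and `y_pred` from the cells above:
###Code
from sklearn.metrics import precision_score, recall_score
# For the positive class (True = "non functional" or "functional needs repair")
print('Precision:', precision_score(y_val, y_pred))
print('Recall:   ', recall_score(y_val, y_pred))
###Output
_____no_output_____
###Markdown
You can also work through the follow-along questions below by reading the same numbers off the confusion matrix: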
###Code
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview
###Code
y_pred = pipeline.predict(X_val)
y_pred_proba = pipeline.predict_proba(X_val)
y_pred.shape
print(y_pred[:10])
# Columns of predict_proba are [False, True]; the default tipping point (threshold) is 0.5
print(y_pred_proba[:10])
print(y_pred_proba[:10,-1])
###Output
[1. 0.74 0.1 0.05 0.55 0.11 0.24 0.45 0.92 0.21]
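###Markdown
For a binary target, the default `.predict()` is equivalent to thresholding the positive-class probability at 0.5 (the tipping point noted above). A quick added sanity check, assuming `y_pred` and `y_pred_proba` from the cell above:
###Code
import numpy as np
# True if every discrete prediction agrees with "P(True) > 0.5"
np.array_equal(y_pred, y_pred_proba[:, 1] > 0.5)
###Output
_____no_output_____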
###Markdown
Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
With 2000 random inspections, we expect to repair 920.0 waterpumps
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
/usr/local/lib/python3.6/dist-packages/pandas/core/ops/array_ops.py:253: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
res_values = method(rvalues)
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
threshold = .925
y_pred_prec = y_pred_proba[:,-1] > threshold
y_pred_proba[:10,-1]
y_pred_prec[:10]
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_prob': y_pred_proba[:,-1]})
results.head()
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000 = results.sort_values(by='y_pred_prob', ascending=False)[:2000]
top2000.head()
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
_____no_output_____
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
print('Precision @ k=2000', relevant_recommendations / trips)
###Output
_____no_output_____
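###Markdown
Precision@k tells us how many of the 2,000 trips pay off. The complementary question: of *all* the pumps in the validation set that actually need repair, what share do those 2,000 trips reach? That is recall@k. This is an added sketch, assuming `top2000`, `y_val`, and `trips` from the cells above.
###Code
relevant_recommendations = top2000['y_val'].sum()
total_relevant = y_val.sum()  # every pump in the validation set that actually needs repair
recall_at_k = relevant_recommendations / total_relevant
print(f'Recall @ k={trips}:', recall_at_k)
###Output
_____no_output_____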
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.** Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics- get and interpret the **confusion matrix** for classification models- use classification metrics: **precision, recall**- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve) SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries- category_encoders- ipywidgets- matplotlib- numpy- pandas- scikit-learn- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow AlongScikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_predictions = 7005+332+4351
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_prediction = correct_predictions + 171 + 622 + 156 + 555 + 1098 + 68
total_prediction
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
class_acc = correct_predictions / total_prediction
class_acc
accuracy_score(y_val, y_pred)
y_pred
sum(y_pred == y_val) / len(y_pred)
y_val
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall Overview[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)> Both precision and recall are based on an understanding and measure of relevance.> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results. Follow Along [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recallDefinition_(classification_context))
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_nonfunctional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_prediction_nonfunctional = correct_predictions_nonfunctional + 156 + 622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
precision_nonfunctional = correct_predictions_nonfunctional / total_prediction_nonfunctional
precision_nonfunctional
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
non_functional = 1098+ 68+ 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
correct_predictions_nonfunctional / non_functional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict, which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
With 2000 random inspections, we expect to repair 920.0 waterpumps
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
5032+977
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions- But we can get predicted probabilities, to rank the predictions. - Then change the threshold, to change the number of positive predictions, based on our budget. Get predicted probabilities and plot the distribution
###Code
# Discrete predictions look like this...
pipeline.predict(X_val)
# Predicted probabilities look like this...
pipeline.predict_proba(X_val)
# Predicted probabilities *for the positive class* ...
pipeline.predict_proba(X_val)[:,1]
# Make predicted probabilities into discrete predictions,
# using a "threshold"
threshold = 0.50
pipeline.predict_proba(X_val)[:,1] > threshold
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
import seaborn as sns
y_pred_proba = pipeline.predict_proba(X_val)[:,1]
sns.distplot(y_pred_proba)
threshold = 0.8
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color = 'red')
from ipywidgets import interact, fixed
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def my_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
def set_threshold(y_true, y_pred_proba, threshold=0.5):
y_pred = y_pred_proba > threshold
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.show()
print(classification_report(y_true, y_pred))
my_confusion_matrix(y_true, y_pred)
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0, 1, 0.01));
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
import pandas as pd
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
results
###Output
_____no_output_____
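###Markdown
Another way to hit the budget exactly is to find the probability of the 2,000th-highest prediction and use it as the threshold. This is an added sketch (assuming `y_pred_proba` from the cells above); ties at the cutoff could make the count differ slightly.
###Code
import numpy as np
k = 2000
threshold_k = np.sort(y_pred_proba)[-k]   # the k-th highest predicted probability
y_pred_k = y_pred_proba >= threshold_k
print('Threshold for the top', k, 'predictions:', threshold_k)
print('Number of positive predictions:', y_pred_k.sum())
###Output
_____no_output_____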
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000 = results.sort_values(by='y_pred_proba', ascending=False)[:2000]
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: Predict 1972 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations / trips
print('Precision @ k=2000', precision_at_k_2000)
###Output
Precision @ k=2000 0.986
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)Then, evaluate with the precision for "non functional"/"functional needs repair".This is conceptually like **Precision@K**, where k=2,000.Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)> Precision at k is the proportion of recommended items in the top-k set that are relevant> Mathematically precision@k is defined as: `Precision@k = ( of recommended items @k that are relevant) / ( of recommended items @k)`> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.We asked, can you do better than random at prioritizing inspections?If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)But using our predictive model, in the validation set, we succesfully identified over 1,900 waterpumps in need of repair!So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.We will predict which 2,000 are most likely non-functional or in need of repair.We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.So we're confident that our predictive model will help triage and prioritize waterpump inspections. But ...This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?Yes — the most common such metric is **ROC AUC.** Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**ROC AUC ranges **from 0 to 1.** Higher is better. 
A naive majority class **baseline** will have an ROC AUC score of **0.5.**
Scikit-Learn docs
- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)
- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)
- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
More links
- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)
- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
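As a quick sanity check on that baseline claim, here is a minimal, self-contained sketch (synthetic labels rather than the waterpump data; the names `toy_labels` and `baseline_scores` are ours) showing that a constant "majority class" score gets exactly 0.5:
###Code
# Hedged sketch: a majority-class baseline assigns every row the same score,
# so it cannot rank any positive above any negative, and its ROC AUC is 0.5.
# toy_labels / baseline_scores are made-up illustration data, not our model's output.
import numpy as np
from sklearn.metrics import roc_auc_score
toy_labels = np.array([0, 0, 0, 1, 1, 0, 1, 0, 0, 1])
baseline_scores = np.full(len(toy_labels), 0.5)   # same score for every row
roc_auc_score(toy_labels, baseline_scores)
###Output
_____no_output_____
###Markdown
Our model's ROC AUC, computed below from the validation predictions, can be judged against that 0.5 floor.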
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science
*Unit 2, Sprint 2, Module 4*
---
Classification Metrics
- get and interpret the **confusion matrix** for classification models
- use classification metrics: **precision, recall**
- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**
- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve)
Setup
Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.
Libraries
- category_encoders
- ipywidgets
- matplotlib
- numpy
- pandas
- scikit-learn
- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow Along
Scikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
#check the version
import sklearn
sklearn.__version__
from sklearn.metrics import plot_confusion_matrix
#values format changes how numbers are displayed. Gets rid of default scientific notation
#cmap is just a color map for the graph.
#xticks sets the orientation of the text on the x axis
plot_confusion_matrix(pipeline,
X_val, y_val,
values_format = '.0f',
cmap = 'Blues',
xticks_rotation = 'vertical');
# Bruno's preferred confusion matrix
#it is easier to customize the output and clean up for presentations
!pip install scikit-plot
import scikitplot as skplt
#The diagonal is where we have made the correct predictions
skplt.metrics.plot_confusion_matrix(y_val, y_pred,
figsize = (8, 6),
title = f'Confusion Matrix: N = {len(y_val)}',
normalize = False);
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
7005 + 332 + 4351
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
len(y_val)
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
11688 / 14358
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall
Overview
[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)
> Both precision and recall are based on an understanding and measure of relevance.
> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.
> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results.
Follow Along
[We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))
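The dog/cat example above can be checked with a couple of lines of arithmetic: a small sketch using only the counts quoted from Wikipedia (8 photos flagged as dogs, 5 of them correct, 12 dogs in total).
###Code
# Hedged sketch of the Wikipedia dog/cat example, using plain arithmetic.
# 8 photos flagged as dogs, of which 5 really are dogs; 12 dogs exist in total.
true_positives = 5
false_positives = 8 - 5    # flagged as a dog, but actually a cat
false_negatives = 12 - 5   # real dogs the program missed
precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
print(f'precision = {precision:.3f} (5/8), recall = {recall:.3f} (5/12)')
###Output
_____no_output_____
###Markdown
The same two ratios, read per class off the confusion matrix below, reproduce the numbers in the classification report above.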
###Code
skplt.metrics.plot_confusion_matrix(y_val, y_pred,
figsize = (8, 6),
title = f'Confusion Matrix: N={len(y_val)}',
normalize = False);
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
5129
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
4351/5129
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
4351/5517
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
functional 0.81 0.90 0.85 7798
functional needs repair 0.58 0.32 0.41 1043
non functional 0.85 0.79 0.82 5517
accuracy 0.81 14358
macro avg 0.75 0.67 0.69 14358
weighted avg 0.81 0.81 0.81 14358
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
###Output
_____no_output_____
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
y_pred
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
skplt.metrics.plot_confusion_matrix(y_val, y_pred,
figsize = (8, 6),
title = f'Confusion Matrix: N={len(y_val)}',
normalize = False);
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions
- But we can get predicted probabilities, to rank the predictions.
- Then change the threshold, to change the number of positive predictions, based on our budget.
Get predicted probabilities and plot the distribution
###Code
# These are the predicted probabilities for the false/true classes.
# Basically, of all the 100 estimators we used, it gives us the
# percentage of how many predicted true or false.
# For example, in the first row, 100 percent predicted true,
# whereas in the second row, only 74% predicted true.
pipeline.predict_proba(X_val)
pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
threshold = .5
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
y_pred_proba > threshold
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
import seaborn as sns
sns.distplot(y_pred_proba);
results = pd.DataFrame({'y_val': y_val, 'y_pred_proba': y_pred_proba})
results
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000 = results.sort_values(by = 'y_pred_proba', ascending = False)[:2000]
top2000
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: Predict 1972 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations / trips
precision_at_k_2000
###Output
_____no_output_____
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!
Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)
Then, evaluate with the precision for "non functional"/"functional needs repair".
This is conceptually like **Precision@K**, where k=2,000.
Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)
> Precision at k is the proportion of recommended items in the top-k set that are relevant
> Mathematically precision@k is defined as: `Precision@k = (# of recommended items @k that are relevant) / (# of recommended items @k)`
> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.
We asked, can you do better than random at prioritizing inspections?
If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)
But using our predictive model, in the validation set, we successfully identified over 1,900 waterpumps in need of repair!
So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
We will predict which 2,000 are most likely non-functional or in need of repair.
We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.
So we're confident that our predictive model will help triage and prioritize waterpump inspections.
But ...
This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.
Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?
Yes — the most common such metric is **ROC AUC.**
Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)
[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"
ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier's ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**
ROC AUC ranges **from 0 to 1.** Higher is better.
A naive majority class **baseline** will have an ROC AUC score of **0.5.**
Scikit-Learn docs
- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)
- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)
- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
More links
- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)
- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
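To make the "ranks a random positive above a random negative" interpretation concrete, here is a small self-contained sketch (purely synthetic labels and scores, not our model) that compares a brute-force pairwise estimate with `roc_auc_score`:
###Code
# Hedged sketch: ROC AUC equals the probability that a randomly chosen positive
# receives a higher score than a randomly chosen negative (ties count as 1/2).
# toy_labels / toy_scores are made up for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score
toy_labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])
toy_scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])
pos = toy_scores[toy_labels == 1]
neg = toy_scores[toy_labels == 0]
pairwise = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])
print(pairwise, roc_auc_score(toy_labels, toy_scores))   # the two values match
###Output
_____no_output_____
###Markdown
With that intuition in hand, the cell below traces the full ROC curve for our validation predictions and computes the same score for our model.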
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Lambda School Data Science
*Unit 2, Sprint 2, Module 4*
---
Classification Metrics
- get and interpret the **confusion matrix** for classification models
- use classification metrics: **precision, recall**
- understand the relationships between precision, recall, **thresholds, and predicted probabilities**, to help **make decisions and allocate budgets**
- Get **ROC AUC** (Receiver Operating Characteristic, Area Under the Curve)
Setup
Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.
Libraries
- category_encoders
- ipywidgets
- matplotlib
- numpy
- pandas
- scikit-learn
- seaborn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import sklearn
print(sklearn.__version__)
###Output
0.22.2.post1
###Markdown
Get and interpret the confusion matrix for classification models Overview First, load the Tanzania Waterpumps data and fit a model. (This code isn't new, we've seen it all before.)
###Code
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
FunctionTransformer(wrangle, validate=False),
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.8140409527789386
###Markdown
Follow Along
Scikit-learn added a [**`plot_confusion_matrix`**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html) function in version 0.22!
###Code
from sklearn.metrics import plot_confusion_matrix
plt.rcParams['figure.dpi'] = 300
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many correct predictions were made?
###Code
correct_predictions = 7005 + 332 + 4351
correct_predictions
###Output
_____no_output_____
###Markdown
How many total predictions were made?
###Code
total_predictions = 7005 + 171 + 622 + 555 + 332 + 156 + 1098 + 68 + 4351
total_predictions
###Output
_____no_output_____
###Markdown
What was the classification accuracy?
###Code
correct_predictions / total_predictions
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Use classification metrics: precision, recall
Overview
[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
###Code
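# This cell was left blank; the classification report for the fitted pipeline
# (as in the lecture version of this notebook) would presumably be produced like this:
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))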
###Output
_____no_output_____
###Markdown
Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)
> Both precision and recall are based on an understanding and measure of relevance.
> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.
> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results.
Follow Along
[We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))
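As a quick check of those two fractions, here is a minimal sketch that encodes the photo example as label arrays and lets scikit-learn do the arithmetic (the arrays are made up to match the quoted counts; the number of cats, 4 here, is arbitrary and does not affect precision or recall):
###Code
# Hedged sketch: encode the Wikipedia dog/cat example as arrays and verify 5/8 and 5/12.
# 12 real dogs (1) plus 4 cats (0); the program flags 8 photos as dogs,
# 5 of them correctly and 3 incorrectly. Variable names are ours.
import numpy as np
from sklearn.metrics import precision_score, recall_score
dog_true = np.array([1] * 12 + [0] * 4)
dog_pred = np.array([1] * 5 + [0] * 7 + [1] * 3 + [0] * 1)
print('precision:', precision_score(dog_true, dog_pred))   # 5/8 = 0.625
print('recall:   ', recall_score(dog_true, dog_pred))      # 5/12 ~ 0.417
###Output
_____no_output_____
###Markdown
The confusion matrix below applies the same definitions, per class, to our validation predictions.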
###Code
cm = plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical')
cm;
###Output
_____no_output_____
###Markdown
How many correct predictions of "non functional"?
###Code
correct_predictions_nonfunctional = 4351
###Output
_____no_output_____
###Markdown
How many total predictions of "non functional"?
###Code
total_predictions_nonfunctional = 4351 + 156 + 622
###Output
_____no_output_____
###Markdown
What's the precision for "non functional"?
###Code
correct_predictions_nonfunctional / total_predictions_nonfunctional
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
_____no_output_____
###Markdown
How many actual "non functional" waterpumps?
###Code
actual_nonfunctional = 1098 + 68 + 4351
###Output
_____no_output_____
###Markdown
What's the recall for "non functional"?
###Code
correct_predictions_nonfunctional / actual_nonfunctional
###Output
_____no_output_____
###Markdown
Understand the relationships between precision, recall, thresholds, and predicted probabilities, to help make decisions and allocate budgets Overview Imagine this scenario...Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
###Code
len(test)
###Output
_____no_output_____
###Markdown
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
###Code
len(train) + len(val)
###Output
_____no_output_____
###Markdown
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
###Code
y_train.value_counts(normalize=True)
2000 * 0.46
random_inspections = 2000
print(f'With {random_inspections} random inspections, we expect to repair {0.46*random_inspections} waterpumps')
###Output
With 2000 random inspections, we expect to repair 920.0 waterpumps
###Markdown
**Can you do better than random at prioritizing inspections?** In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ are functional but need repair:
###Code
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We already made our validation set the same size as our test set.
###Code
len(val) == len(test)
###Output
_____no_output_____
###Markdown
We can refit our model, using the redefined target.Then make predictions for the validation set.
###Code
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
###Output
_____no_output_____
###Markdown
Follow Along Look at the confusion matrix:
###Code
plot_confusion_matrix(pipeline, X_val, y_val, values_format='.0f', xticks_rotation='vertical');
###Output
_____no_output_____
###Markdown
How many total predictions of "True" ("non functional" or "functional needs repair") ?
###Code
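# Cell left blank; one hedged way to get the count programmatically, assuming
# y_val / y_pred are the boolean target and predictions defined above:
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
cm[:, 1].sum()   # column 1 holds everything predicted True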
###Output
_____no_output_____
###Markdown
We don't have "budget" to take action on all these predictions
- But we can get predicted probabilities, to rank the predictions.
- Then change the threshold, to change the number of positive predictions, based on our budget.
Get predicted probabilities and plot the distribution
###Code
pipeline.predict(X_val)
# What do the predicted probabilities look like?
pipeline.predict_proba(X_val)
pipeline.predict_proba(X_val)[:,1]
pipeline.predict_proba(X_val)[:,0]
###Output
_____no_output_____
###Markdown
Change the threshold
###Code
import seaborn as sns
threshold = 0.925
# pipeline.predict_proba(X_val)[:,1] > threshold
y_pred_prob = pipeline.predict_proba(X_val)[:,1]
sns.distplot(y_pred_prob)
plt.axvline(threshold, color='red');
sum(pipeline.predict_proba(X_val)[:,1] > threshold)
###Output
_____no_output_____
###Markdown
Or, get exactly 2,000 positive predictions Identify the 2,000 waterpumps in the validation set with highest predicted probabilities.
###Code
results = pd.DataFrame({'y_val': y_val, 'y_pred_prob': y_pred_prob})
results
top2000 = results.sort_values(by='y_pred_prob', ascending=False)[:2000]
###Output
_____no_output_____
###Markdown
Most of these top 2,000 waterpumps will be relevant recommendations, meaning `y_val==True`, meaning the waterpump is non-functional or needs repairs.Some of these top 2,000 waterpumps will be irrelevant recommendations, meaning `y_val==False`, meaning the waterpump is functional and does not need repairs.Let's look at a random sample of 50 out of these top 2,000:
###Code
top2000.sample(n=50)
###Output
_____no_output_____
###Markdown
So how many of our recommendations were relevant? ...
###Code
trips = 2000
print(f'Baseline: {trips * 0.46} waterpump repairs in {trips} trips')
relevant_recommendations = top2000['y_val'].sum()
print(f'With model: Predict {relevant_recommendations} waterpump repairs in {trips} trips')
###Output
Baseline: 920.0 waterpump repairs in 2000 trips
With model: Predict 1972 waterpump repairs in 2000 trips
###Markdown
What's the precision for this subset of 2,000 predictions?
###Code
precision_at_k_2000 = relevant_recommendations / trips
print('Precision @ k=2000', precision_at_k_2000)
###Output
Precision @ k=2000 0.986
###Markdown
In this scenario ... Accuracy _isn't_ the best metric!
Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)
Then, evaluate with the precision for "non functional"/"functional needs repair".
This is conceptually like **Precision@K**, where k=2,000.
Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)
> Precision at k is the proportion of recommended items in the top-k set that are relevant
> Mathematically precision@k is defined as: `Precision@k = (# of recommended items @k that are relevant) / (# of recommended items @k)`
> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.
We asked, can you do better than random at prioritizing inspections?
If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)
But using our predictive model, in the validation set, we successfully identified over 1,900 waterpumps in need of repair!
So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
We will predict which 2,000 are most likely non-functional or in need of repair.
We estimate that approximately 1,900 waterpumps will be repaired after these 2,000 maintenance visits.
So we're confident that our predictive model will help triage and prioritize waterpump inspections.
But ...
This metric (~1,900 waterpumps repaired after 2,000 maintenance visits) is specific for _one_ classification problem and _one_ possible trade-off.
Can we get an evaluation metric that is generic for _all_ classification problems and _all_ possible trade-offs?
Yes — the most common such metric is **ROC AUC.**
Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)
[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"
ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier's ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**
ROC AUC ranges **from 0 to 1.** Higher is better.
A naive majority class **baseline** will have an ROC AUC score of **0.5.**
Scikit-Learn docs
- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)
- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)
- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
More links
- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)
- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
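One way to see that "area under the curve" is meant literally: the sketch below (synthetic labels and scores, independent of our model) integrates the curve returned by `roc_curve` and compares the result with `roc_auc_score`.
###Code
# Hedged sketch: the ROC AUC really is the area under the (FPR, TPR) curve.
# toy_labels / toy_scores are made up; sklearn.metrics.auc applies the trapezoid rule.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, auc
toy_labels = np.array([0, 1, 0, 1, 1, 0, 0, 1])
toy_scores = np.array([0.2, 0.6, 0.3, 0.8, 0.5, 0.4, 0.7, 0.9])
fpr_toy, tpr_toy, _ = roc_curve(toy_labels, toy_scores)
print(auc(fpr_toy, tpr_toy), roc_auc_score(toy_labels, toy_scores))   # identical
###Output
_____no_output_____
###Markdown
The next cell repeats the same steps with our model's predicted probabilities for the validation set.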
###Code
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_val, y_pred_prob)
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# See the results on a plot.
# This is the "Receiver Operating Characteristic" curve
plt.scatter(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# Use scikit-learn to calculate the area under the curve.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_prob)
###Output
_____no_output_____ |
CustomCoverNet_model.ipynb | ###Markdown
Package installations
Uncomment these if you are using a rented GPU service.
###Code
# !pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio===0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
# !pip install nuscenes-devkit
# # Fix error : libGL.so.1: cannot open shared object file: No such file or directory
# !apt install -y libgl1-mesa-glx
import torch
print(torch.cuda.is_available())
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Running on ', device)
print(f"Device count: {torch.cuda.device_count()}")
import os
import zipfile
CURRENT_PATH = f'{os.getcwd()}/'
###Output
_____no_output_____
###Markdown
Unzip nuScenes dataset
###Code
# DatasetName = 'Dataset.zip'
#
# print(f'{CURRENT_PATH}{DatasetName}')
#
# with zipfile.ZipFile(DatasetName, 'r') as zip_ref:
# zip_ref.extractall(f'{CURRENT_PATH}{DatasetName}')
###Output
_____no_output_____
###Markdown
NuScenes initialization
###Code
from nuscenes import NuScenes
from nuscenes.prediction import PredictHelper
from nuscenes.eval.prediction.splits import get_prediction_challenge_split
import matplotlib.pyplot as plt
# This is the path where you stored your copy of the nuScenes dataset.
DATAROOT = 'Dataset/'
history_length = 2
prediction_length = 6
# Use v1.0-trainval or v1.0-mini
nusc = NuScenes('v1.0-trainval', dataroot=DATAROOT, verbose=False)
helper = PredictHelper(nusc)
train = get_prediction_challenge_split("train", dataroot=DATAROOT)
validation = get_prediction_challenge_split("train_val", dataroot=DATAROOT)
test = get_prediction_challenge_split("val", dataroot=DATAROOT)
print(f"Train len: {len(train)}\nVal len: {len(validation)}\nTest len: {len(test)}")
train = train[:12000]
validation = validation[:5000]
test = test[:5000]
import pickle
PATH_TO_EPSILON_8_SET = f"{DATAROOT}prediction_trajectory_sets/epsilon_8.pkl"
trajectories_set_8 = pickle.load(open(PATH_TO_EPSILON_8_SET, 'rb'))
trajectories_set_8 = torch.Tensor(trajectories_set_8)
###Output
_____no_output_____
###Markdown
Init datasets and dataloaders
###Code
from torch.utils.data import DataLoader, Dataset
from nuscenes.prediction.input_representation.static_layers import StaticLayerRasterizer
import numpy as np
from typing import List
class NuscenesDataset(Dataset):
def __init__(self, tokens: List[str], helper: PredictHelper):
self.tokens = tokens
self.static_layer_representation = StaticLayerRasterizer(helper)
def __len__(self):
return len(self.tokens)
def __getitem__(self, index: int):
token = self.tokens[index]
instance_token, sample_token = token.split("_")
image = self.static_layer_representation.make_representation(instance_token, sample_token)
image = torch.Tensor(image).permute(2, 0, 1)
# NaN Values processing
def agent_param_processing(value):
if np.isnan(value):
return -1
return value
vel = helper.get_velocity_for_agent(instance_token, sample_token)
vel = agent_param_processing(vel)
accel = helper.get_acceleration_for_agent(instance_token, sample_token)
accel = agent_param_processing(accel)
heading_cr = helper.get_heading_change_rate_for_agent(instance_token, sample_token)
heading_cr = agent_param_processing(heading_cr)
agent_state_vector = torch.Tensor([vel, accel, heading_cr])
ground_truth = helper.get_future_for_agent(instance_token, sample_token, prediction_length, in_agent_frame=True)
# Convert to [batch_size, 1, 12, 2]
# because the loss function expects that format
ground_truth = np.expand_dims(ground_truth, 0)
return image, agent_state_vector, ground_truth
batch_size = 32
train_ds = NuscenesDataset(train, helper)
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
train_val_ds = NuscenesDataset(validation, helper)
train_val_dl = DataLoader(train_ds, batch_size=batch_size * 2)
image, state, ground_truth = next(iter(train_dl))
print(image.size())
print(state.size())
print(ground_truth.size())
print("Preprocessing states:")
print(state)
###Output
torch.Size([32, 3, 500, 500])
torch.Size([32, 3])
torch.Size([32, 1, 12, 2])
Preprocessing states:
tensor([[ 3.8829e+00, 4.2858e+00, 3.4914e-02],
[ 7.4828e+00, 9.7190e-01, 0.0000e+00],
[ 2.6254e+00, 1.1437e-01, 9.6029e-02],
[ 5.5119e+00, 7.8166e-01, 3.1396e-01],
[ 1.7754e+00, 2.1683e-01, -1.2660e-01],
[ 4.9390e+00, 2.2711e-03, -3.4954e-02],
[ 6.4788e-01, -2.9224e-04, 0.0000e+00],
[ 1.3208e+00, 1.7265e+00, -4.1853e-02],
[ 1.0706e+01, -2.5083e-03, 1.7458e-02],
[ 1.2010e+01, 1.1310e+00, 8.8835e-16],
[ 5.2919e+00, -1.8864e+00, 3.3168e-01],
[ 9.6307e+00, -4.3436e-01, 0.0000e+00],
[ 1.0469e+01, -1.0000e+00, 8.7288e-03],
[ 5.1837e+00, -3.1336e-01, -5.0745e-01],
[ 9.7165e-01, -2.1897e+00, 0.0000e+00],
[ 8.9414e+00, -1.0000e+00, 0.0000e+00],
[ 9.0079e+00, -5.9894e-03, 0.0000e+00],
[ 1.9260e+00, 1.9186e+00, -1.0463e-02],
[ 1.6071e+00, 1.9847e-04, 0.0000e+00],
[ 1.1468e+01, -3.7591e-02, 0.0000e+00],
[ 1.0496e+01, 5.1408e-04, 0.0000e+00],
[ 3.2098e+00, -7.6509e-01, -4.8983e-01],
[ 8.8263e-01, -1.9217e-01, -6.9830e-02],
[ 6.5537e+00, -4.9473e+00, -2.3317e-02],
[ 5.6744e+00, -6.5802e-03, -3.4913e-02],
[ 9.4840e+00, -3.5710e-03, 3.3251e-02],
[ 6.3178e+00, 1.1400e-02, 5.2372e-02],
[ 8.0353e+00, -1.7596e-03, 0.0000e+00],
[ 1.2223e+00, 1.8329e-01, 3.8908e-02],
[ 6.7980e+00, 8.2123e-04, -2.9085e-02],
[ 1.1345e+01, 2.3408e-03, 0.0000e+00],
[ 8.0727e+00, 1.0155e+00, -1.0438e-02]])
###Markdown
Init ML prediction model
###Code
from nuscenes.prediction.models.backbone import ResNetBackbone
import torchvision.models as models
# Torchvision backbone
backbone = models.resnext50_32x4d(pretrained=True)
# Built-in backbone
#backbone = ResNetBackbone('resnet50')
# Set backbone to non-trainable
def set_parameter_requires_grad(model):
for param in model.parameters():
param.requires_grad = False
set_parameter_requires_grad(backbone)
from torch.optim import SGD
from nuscenes.prediction.models.covernet import CoverNet, ConstantLatticeLoss
NUM_MODES = 64
model = CoverNet(backbone, num_modes=NUM_MODES)
model = model.to(device)
loss_function = ConstantLatticeLoss(trajectories_set_8)
# Pass to optimizer only params with requires_grad
params_to_update = []
for name,param in model.named_parameters():
if param.requires_grad == True:
params_to_update.append(param)
print("\t",name)
optimizer = SGD(params_to_update, lr=5e-4, momentum=0.9, weight_decay=5e-4)
from tqdm import tqdm
import copy
import time
def loss_batch(model, loss_func, img, state_vec, ground_truth, opt=None):
img = img.to(device)
state_vec = state_vec.to(device)
ground_truth = ground_truth.to(device)
predicted_logits = model(img, state_vec)
loss = loss_func(predicted_logits, ground_truth)
# For validation the optimizer is None, so we don't perform backprop
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
# Return the loss and the number of items
# print(f"{loss.item()}; {len(img)}")
return loss.item(), len(img)
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
best_loss = 999.0
best_model_wts = copy.deepcopy(model.state_dict())
for epoch in range(epochs):
start_epoch_time = time.time()
print(f'Epoch: {epoch + 1}/{epochs}')
print('-' * 10)
model.train()
for img, state_vec, gt in tqdm(train_dl):
loss_batch(model, loss_func, img, state_vec, gt, opt)
model.eval()
print("Validation step")
with torch.no_grad():
# TODO: Using tqdm
losses, nums = zip(
*[loss_batch(model, loss_func, img, state_vec, gt) for img, state_vec, gt in valid_dl]
)
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
# deep copy the model
if val_loss < best_loss:
best_loss = val_loss
best_model_wts = copy.deepcopy(model.state_dict())
print(f"Epoch {epoch + 1}; Loss: {val_loss:0.2f}; Best: {best_loss:0.2f} Time: {(time.time() - start_epoch_time):0.2f} sec;")
# epochs = 3 # how many epochs to train for
# fit(epochs, model, loss_function, optimizer, train_dl, train_val_dl)
# torch.save(model.state_dict(), '/root/model.pth')
model.load_state_dict(torch.load('./Models/model_data_20k_e25_loss_1-50.pth'))
model.eval()
###Output
_____no_output_____
###Markdown
Metrics
###Code
NPY_DATA_PATH = './NpyDataset/'
test_features = np.load(f'{NPY_DATA_PATH}data_test_features_5k.npy')
test_states = np.load(f'{NPY_DATA_PATH}data_test_states_5k.npy')
test_labels = np.load(f'{NPY_DATA_PATH}data_test_labels_5k.npy')
def PlotPrediction(future, predict):
plt.figure(figsize=(6, 6))
plt.scatter(future[:, 1], -future[:, 0], c='orange', s=10)
plt.scatter(predict[:, 1], -predict[:, 0], c='g', s=10)
# Keep aspect ratio of axis
plt.axis('equal')
plt.show()
import nuscenes.eval.prediction.metrics as metrics
trajectories_set_8_np = trajectories_set_8.numpy()
metric_functions = [metrics.MinFDEK([1], aggregators=[metrics.RowMean()]),
metrics.MinADEK([5, 10], aggregators=[metrics.RowMean()]),
metrics.MissRateTopK([5, 10], tolerance=2, aggregators=[metrics.RowMean()])]
num_predictions = len(test_labels)  # Number of prediction rows
metrics_container = {metric.name: np.zeros((num_predictions, metric.shape)) for metric in metric_functions}
metrics_container
from tqdm import tqdm
for idx, x in enumerate(tqdm(test_labels)):
#for idx in range(len(test_labels)):
# Make prediction
img = torch.Tensor(test_features[idx].reshape((500, 500, 3))).permute(2, 0, 1).unsqueeze(0)
img = img.to(device)
state = torch.Tensor(np.array([test_states[idx]])).to(device)
state = state.to(device)
logits = model(img, state)
mode_probabilities = np.array([logits.cpu().detach().numpy()[0]])[0]
# Create prediction object
instance_tkn, sample_tkn = test[idx].split("_")
prediction = metrics.Prediction(instance_tkn, sample_tkn, trajectories_set_8_np, mode_probabilities)
# Get ground_truth
gt = test_labels[idx].reshape((12, 2))
# Calculate metrics
for metric in metric_functions:
metrics_container[metric.name][idx] = metric(gt, prediction)
from collections import defaultdict
from typing import List, Dict, Any
aggregations: Dict[str, Dict[str, List[float]]] = defaultdict(dict)
for metric in metric_functions:
for agg in metric.aggregators:
aggregations[metric.name][agg.name] = agg(metrics_container[metric.name])
aggregations
categories = []
for sample in train:
instance_tkn, sample_tkn = sample.split("_")
instance_category_token = nusc.get('instance', instance_tkn)['category_token']
category_name = nusc.get('category', instance_category_token)['name']
if category_name not in categories:
categories.append(category_name)
categories
###Output
_____no_output_____ |
1-mta-turnstiles/code/01-Benson-Final.ipynb | ###Markdown
Project 01-Benson — Final
Kayleigh Li
David Luther
0. List of Imports
###Code
from __future__ import print_function, division
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import googlemaps
import re
import json
import pickle
import copy
import os
from geopy.distance import vincenty
from IPython.display import Image
# sets Google Maps API key to a variable
dl_google_key = os.environ['gmAPI']
%matplotlib inline
# customize settings
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 30)
pd.set_option('display.precision', 3)
###Output
_____no_output_____
###Markdown
1. Loading the MTA Subway Turnstile Dataset
###Code
def load_it_up(datelist):
"""Takes a list of dates to load data for a range of dates and concatenate
it all into one dataframe.
---
IN: List of date strings in yymmdd format corresponding to weeks of available
turnstile data (list)
"""
df = pd.DataFrame()
base_url = 'http://web.mta.info/developers/data/nyct/turnstile/turnstile_'
for date in datelist:
csv_url = f"{base_url}{date}.txt"
print("Loading:", csv_url)
new_df = pd.read_csv(csv_url)
df = pd.concat([df, new_df], ignore_index = True)
print("Load complete!")
return df
datelist = ['170506', '170513', '170520', '170527', '170603']
may_df_raw = load_it_up(datelist)
###Output
Loading: http://web.mta.info/developers/data/nyct/turnstile/turnstile_170506.txt
Loading: http://web.mta.info/developers/data/nyct/turnstile/turnstile_170513.txt
Loading: http://web.mta.info/developers/data/nyct/turnstile/turnstile_170520.txt
Loading: http://web.mta.info/developers/data/nyct/turnstile/turnstile_170527.txt
Loading: http://web.mta.info/developers/data/nyct/turnstile/turnstile_170603.txt
Load complete!
###Markdown
2. Exploratory Analysis and Preliminary Cleaning of the Data A look at 10 random rows:
###Code
may_df_raw.sample(10)
###Output
_____no_output_____
###Markdown
Some column names have unexpected spaces:
###Code
may_df_raw.columns
###Output
_____no_output_____
###Markdown
...so we strip them.
###Code
for col in may_df_raw.columns:
may_df_raw.rename(columns = {col: col.strip()}, inplace=True)
may_df_raw.columns
###Output
_____no_output_____
###Markdown
Values for 'ENTRIES' and 'EXITS' are impossibly large — it turns out they are cumulative values.
###Code
may_df_raw.describe()
###Output
_____no_output_____
###Markdown
Determining Unique Station Names
###Code
may_df_raw.STATION.value_counts()
###Output
_____no_output_____
###Markdown
The number of unique values in STATION is 376, while according to the official record, there should be 472 stations. Thus we know that STATION does not provide unique station names, though these can be provided (with some argument regarding the bigger stations) by combining STATION and LINENAME. Marked for further investigation (+ explore relationships amongst all the attributes).
###Code
may_df_raw["UNIQUE_STATION"] = may_df_raw["STATION"] + ' - ' + may_df_raw["LINENAME"]
may_df_raw["UNIQUE_STATION"].value_counts()
###Output
_____no_output_____
###Markdown
Still, all names are not necessarily unique, as shown by the fact that there are still three instances of 34 ST-Penn Station (below). However, it is in reality two different stations, one serving the ACE along 8th Avenue, and the other serving the 123 along 7th Avenue. For these instances where a large station is separated into two or three, we choose to keep it that way.
###Code
may_df_raw[may_df_raw['UNIQUE_STATION'].str.contains('34')].groupby('UNIQUE_STATION').sum()
###Output
_____no_output_____
###Markdown
We rename some stations to remove duplicates (and help Google later).
###Code
# this station appeared with a duplicate name, henceforth united
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == '4AV-9 ST - DFGMNR', 'UNIQUE_STATION'] = '4 AV-9 ST - DFGMNR'
# Google thinks this one is in Manhattan, so we clarify to Brooklyn
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == 'FULTON ST - G', 'UNIQUE_STATION'] = 'FULTON ST (BKLYN) - G'
# these get clarified because the Google Maps API won't recognize them otherwise
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == 'CONEY IS-STILLW - DFNQ', 'UNIQUE_STATION'] = 'CONEY ISLAND-STILLWELL AV - DFNQ'
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == 'HOWARD BCH JFK - A', 'UNIQUE_STATION'] = 'HOWARD BEACH JFK - A'
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == '21 ST-QNSBRIDGE - F', 'UNIQUE_STATION'] = '21 ST-QUEENSBRIDGE - F'
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == 'HOYT-SCHER - ACG', 'UNIQUE_STATION'] = 'HOYT-SCHERMERHORN - ACG'
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == 'COURT SQ - EMG', 'UNIQUE_STATION'] = 'COURT SQUARE - EMG'
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == 'GRD CNTRL-42 ST - 4567S', 'UNIQUE_STATION'] = 'GRAND CENTRAL-42 ST - 4567S'
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == 'AQUEDUCT N.COND - A', 'UNIQUE_STATION'] = 'AQUEDUCT N.CONDUIT - A'
may_df_raw.loc[may_df_raw['UNIQUE_STATION'] == 'FULTON ST - 2345ACJZ', 'UNIQUE_STATION'] = 'FULTON STREET - 2345ACJZ'
# confirm that duplicate station names are now made unique when merged with line name
# for station in sorted(may_df_raw["UNIQUE_STATION"].unique()):
# print(station)
###Output
_____no_output_____
###Markdown
Sort for Sequential Turnstile Data
Because we loaded multiple weeks of data in series, the sequence of data for each station and turnstile is broken up. Because SCP numbers are not unique within the system — or within a station — we have to sort in the following order:
1. unique station
+ remote unit (UNIT)
+ control area (C/A)
+ turnstile (SCP)
+ date
+ time
###Code
may_df_sorted = may_df_raw.sort_values(['UNIQUE_STATION', 'UNIT', 'C/A', 'SCP', 'DATE', 'TIME']).reset_index()
may_df_sorted.head(5)
###Output
_____no_output_____
###Markdown
What are all the divisions we have present?
###Code
may_df_sorted.DIVISION.unique()
###Output
_____no_output_____
###Markdown
* BMT, IRT, and IND are the three companies that were absorbed into the MTA* PTH is the PATH train into NJ* SRT is perhaps the Staten Island Railway? Filter Unneccsary Columns and Reorder Columns We no longer need STATION and LINE, as those have been merged to provide reasonably unique station identification. An exploration of DESC reveals that it has no significant impact on the actual count of entries/exits, so that goes too.
###Code
may_df = may_df_sorted.filter(['UNIQUE_STATION', 'UNIT', 'C/A', 'SCP', 'DIVISION', 'DATE', 'TIME',
'ENTRIES', 'EXITS'])
# look at 10 random rows
may_df.sample(10, random_state=23)
print("Unique Values")
for col in may_df.columns:
print(f" *\t{col}:", len(may_df[col].unique()))
# 475 unique subway stations in the MTA system - very close to official record
###Output
Unique Values
* UNIQUE_STATION: 475
* UNIT: 467
* C/A: 734
* SCP: 231
* DIVISION: 6
* DATE: 35
* TIME: 32851
* ENTRIES: 788004
* EXITS: 767013
###Markdown
3. Determining Total Entries and Exits per Station Calculating the difference between one cell in ENTRIES or EXITS and the previous theoretically yields the total number of people through the gate in the time elapsed between readings. With reliable figures, one could simply sum the column to find out how many people walked through the gates over the duration of the entire dataframe.
###Code
may_df['ENT DIFF'] = may_df['ENTRIES'].diff()
may_df['EX DIFF'] = may_df['EXITS'].diff()
may_df.head(5)
###Output
_____no_output_____
###Markdown
Negative Numbers and Outliers
Through exploring the differential between current and previous cells in ENTRIES or EXITS, over 10,000 negative values were found. Three major scenarios explain the negative counts:
1. Switch of SCPs or C/As
2. Resets of a turnstile
3. Backward-counting of some turnstiles
When we reach the end of one turnstile's count and continue with another (switch from one SCP or C/A to another), the differential does not represent an actual count, and must be reset to 0.
Occasionally, a turnstile is reset or replaced. In the first case the differential will be a negative number, and in the second case the differential could be negative or positive. In either case, the differential will almost always be several orders of magnitude larger than the largest feasible entry or exit count, and will *not* represent actual people passing through the gate.
Some turnstiles count backwards, so the differential between current and previous cells in ENTRIES or EXITS will be a negative number in those cases, but the absolute value of this number represents a genuine count.
Another case in which negative numbers appear is that in which several readings from a different turnstile are mislabeled with the current turnstile's SCP number and mixed into the series. This generates alternating negative and positive differentials.
The following makes some corrections and attempts to figure out which numbers might be set as thresholds for negatives and positives for the purpose of filtering out irrelevant data.
Correcting for Turnstile Transition
To account for the transition from the readings of one turnstile to another, we set ENT_DIFF and EX_DIFF values to 0 whenever a difference in SCP or C/A numbers is found between current row and previous.
###Code
may_df.loc[(may_df['SCP'] != may_df['SCP'].shift(+1)) |
(may_df['C/A'] != may_df['C/A'].shift(+1)),
['ENT DIFF', 'EX DIFF']] = 0.0
###Output
_____no_output_____
###Markdown
Some Examples of Anomalies
Turnstile reset or swap-out.
###Code
may_df[656480:656485]
###Output
_____no_output_____
###Markdown
A backwards-counting turnstile.
###Code
may_df[268082:268087]
###Output
_____no_output_____
###Markdown
Data from one turnstile interspersed with another.
###Code
may_df[725203:725215]
###Output
_____no_output_____
###Markdown
Finding Thresholds for Filtering In order to inspect the right "neighborhood" of entry and exit differentials and determine the best filter value(s), we estimated a realistic maximum number of people that could flow through a single turnstile during the observational period of four hours. One swipe every four seconds seemed to be a realistic estimate, which yields 900 per hour, or 3600 per four hours.
Theoretically, one filter value should be able to be used on the absolute value of the whole series in each ENT_DIFF and EX_DIFF column. However, there are some cases only observed with positive counts that make an argument for setting different threshold values for negative and positive values.
###Code
# to tidy things up, if we have time
# def histogram_maker(series, min_val, max_val, xlabel, ylabel, title):
# pass
###Output
_____no_output_____
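###Markdown
One possible fleshing-out of the commented-out helper above (hypothetical: the stub was never implemented, so the body and the extra `bins` argument are our own guesses at the intent):
###Code
# A possible implementation of the histogram_maker stub sketched above.
# The signature follows the stub; everything past that is a guess at the intent.
import matplotlib.pyplot as plt
def histogram_maker(series, min_val, max_val, xlabel, ylabel, title, bins=100):
    """Plot a log-scaled histogram of the values of `series` between min_val and max_val."""
    sliced = series[(series > min_val) & (series < max_val)]
    fig, ax = plt.subplots(1, 1, figsize=(16, 8))
    ax.hist(sliced, bins=bins)
    ax.set_yscale('log')
    ax.grid()
    ax.set_xlabel(xlabel, fontsize=15)
    ax.set_ylabel(ylabel, fontsize=15)
    ax.set_title(title, fontsize=20)
    return ax
# Example use (commented out, mirroring the plots below):
# histogram_maker(may_df['ENT DIFF'], -10e10, 0, 'Differential value', 'Count', 'Negative ENT DIFF values')
###Output
_____no_output_____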
###Markdown
Histogram of all Exits/Entrances (Abs Value) This takes the absolute value of all unfiltered entrances and exits and plots them as a histogram.
###Code
ents = may_df['ENT DIFF'].abs()
ents_and_exes = ents.append(may_df['EX DIFF'].abs(), ignore_index=True).fillna(0)
fig, ax = plt.subplots(1, 1, figsize=(16,8))
ax.hist(ents_and_exes, bins=200)
ax.set_yscale('log')
ax.grid()
plt.xlabel('Differential Values of Consecutive Entry and Exit Readings', fontsize=15)
plt.ylabel('Number of Values in Bin Range', fontsize=15)
plt.title('Frequency of Differential Values', fontsize=20)
plt.grid()
# plt.savefig('../img/differential_hist_grid.png', dpi=200, bbox_inches = 'tight')
###Output
_____no_output_____
###Markdown
Negative Differentials To find a threshold to use for our filter on negative numbers, we look at summary stats and histograms of all negative values in the ENT_DIFF and EX_DIFF columns. Setting different numbers for min_val and max_val allows for a targeted inspection of a specific range of values.
###Code
# specify range to inspect
min_val = -10e10
max_val = 0
ent_neg_slice = may_df[(may_df['ENT DIFF'] > min_val) & (may_df['ENT DIFF'] < max_val)]['ENT DIFF']
ex_neg_slice = may_df[(may_df['EX DIFF'] > min_val) & (may_df['EX DIFF'] < max_val)]['EX DIFF']
fig, ax = plt.subplots(2, 1, figsize=(16,10))
plots_to_make = [ent_neg_slice, ex_neg_slice]
for series, ax_obj in zip(plots_to_make, ax):
ax_obj.hist(series, bins=100)
ax_obj.set_yscale('log')
# if x is log, need to take absolute value series
# ax_obj.set_xscale('log')
ax_obj.grid()
entries_neg = may_df[may_df['ENT DIFF'] < 0]['ENT DIFF']
exits_neg = may_df[may_df['EX DIFF'] < 0]['EX DIFF']
print("ENTRIES (NEG):")
print(entries_neg.describe())
print("Median:", entries_neg.median(), "\n")
print("EXITS (NEG):")
print(exits_neg.describe())
print("Median:", exits_neg.median())
###Output
ENTRIES (NEG):
count 7.668e+03
mean -2.268e+05
std 1.205e+07
min -8.566e+08
25% -5.560e+02
50% -2.480e+02
75% -6.700e+01
max -1.000e+00
Name: ENT DIFF, dtype: float64
Median: -248.0
EXITS (NEG):
count 6.110e+03
mean -3.362e+05
std 1.742e+07
min -1.313e+09
25% -3.570e+02
50% -1.310e+02
75% -3.400e+01
max -1.000e+00
Name: EX DIFF, dtype: float64
Median: -131.0
###Markdown
This allows inspection of actual values along with their indices so we can find and inspect to see what's going on around them.
###Code
(entries_neg
# .sample(20)
.sort_values()
# .head(10)
)[0:20]
###Output
_____no_output_____
###Markdown
Upon inspecting values near the 3600-per-four-hours threshold, we found that nearly all values with -3000 < n < 0 can be attributed to a reverse-counting turnstile, and values with n < -3000 can be attributed to a reset or another anomaly. Positive Differentials Again, we look at histograms and summary stats of positive entrance and exit differentials to hone in on a filter threshold.
###Code
# specify range to inspect
min_val = 0
max_val = 10e10
ent_pos_slice = may_df[(may_df['ENT DIFF'] > min_val) & (may_df['ENT DIFF'] < max_val)]['ENT DIFF']
ex_pos_slice = may_df[(may_df['EX DIFF'] > min_val) & (may_df['EX DIFF'] < max_val)]['EX DIFF']
plots_to_make = [ent_pos_slice, ex_pos_slice]
fig, ax = plt.subplots(2, 1, figsize=(16,10))
for series, ax_obj in zip(plots_to_make, ax):
ax_obj.hist(series, bins=100)
ax_obj.set_yscale('log')
# ax_obj.set_xscale('log')
ax_obj.grid()
entries_pos = may_df[may_df['ENT DIFF'] > 0]['ENT DIFF']
exits_pos = may_df[may_df['EX DIFF'] > 0]['EX DIFF']
print("ENTRIES (POS):")
print(entries_pos.describe())
print("Median:", entries_pos.median(), "\n")
print("EXITS (POS):")
print(exits_pos.describe())
print("Median:", exits_pos.median())
(entries_pos
# .sample(20)
.sort_values(ascending=False)
.head(10)
)
(entries_pos[(entries_pos > 3600) & (entries_pos < 8000)]
.sort_values(ascending=False)
.head(20))
###Output
_____no_output_____
###Markdown
Completely legitimate count of 4310 in a four hour period at Grand Central Station — one of the few places where this might be believable.
###Code
may_df[650136:650141]
###Output
_____no_output_____
###Markdown
A couple of cases where turnstiles went offline for anywhere from one reading cycle to several days. During this period they kept on counting, then dumped the accumulated count for the offline period once back online.
###Code
may_df[650197:650202]
may_df[970479:970484]
###Output
_____no_output_____
###Markdown
Unlike the negative numbers, there are a number of counts in the 4000s that appear to be legitimate. As these are quite significant values, we would rather not filter them out of our count. Several values between 5000 and 8000 seem to represent situations in which one turnstile goes offline for several days but keeps on counting, then returns a value representing this whole offline period. This would be problematic if we were looking at daily or hourly counts, but not for a view of the entire month. And We Have Thresholds Inspections of cases revealed that any negative value between -3000 and 0 in the differentials column represents a trusted reading, as does any positive value between 0 and 8000. We honed in on this range by figuring that the max number of people through a turnstile would be one every four seconds, or 15 per minute, which comes out to 900 per hour. Since readings on turnstiles were taken every four hours, we would expect values up to 3600 to be legitimate. A small number of special cases revealed turnstiles that went offline for several days but kept on counting. When they came back online, they dumped the count for the last three days, which ended up being between 4000 and 8000. Because we are looking at the total traffic over the duration of the data, these are valid counts. Filtering Anomalies If the value in either ENT_DIFF or EX_DIFF falls outside of the range, we set both values to 0.
###Code
may_df.loc[
(may_df['ENT DIFF'] < -3000) | (may_df['EX DIFF'] < -3000),
['ENT DIFF', 'EX DIFF']] = 0.0
may_df.loc[
(may_df['ENT DIFF'] > 8000) | (may_df['EX DIFF'] > 8000),
['ENT DIFF', 'EX DIFF']] = 0.0
###Output
_____no_output_____
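###Markdown
A quick sanity check (a sketch, reusing the column names above): re-applying the same conditions after the assignment should find no remaining out-of-range rows.
###Code
# Re-apply the threshold conditions; after the filtering above this should be zero
out_of_range = ((may_df['ENT DIFF'] < -3000) | (may_df['EX DIFF'] < -3000) |
                (may_df['ENT DIFF'] > 8000) | (may_df['EX DIFF'] > 8000))
print('Rows still outside the accepted range:', out_of_range.sum())
###Output
_____no_output_____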
###Markdown
We take the absolute value of all differentials and create two new columns for them.
###Code
may_df['ENTRY DIFFS (ABS)'] = may_df['ENT DIFF'].abs()
may_df['EXIT DIFFS (ABS)'] = may_df['EX DIFF'].abs()
###Output
_____no_output_____
###Markdown
The following cell sets the threshold on the absolute values of ENT DIFFS and EX DIFFS at 3600, instead of the -3000 to 8000 range. To run it, uncomment the code and comment out the entire cell right under "Filtering Anomalies," then run from the top.**Not surprisingly, the final rankings of the busiest stations are no different using this pure estimation approach.** Whatever advantage is lost due to slightly less accurate counts is more than gained in expedience. Note to self for next time.
###Code
# may_df.loc[
# (may_df['ENTRY DIFFS (ABS)'] > 3600) | (may_df['EXIT DIFFS (ABS)'] > 3600),
# ['ENTRY DIFFS (ABS)', 'EXIT DIFFS (ABS)']] = 0.0
###Output
_____no_output_____
###Markdown
Confirmed:
###Code
may_df[656480:656485].filter(['UNIQUE_STATION',
'UNIT',
'C/A',
'SCP',
'DIVISION',
'DATE',
'TIME',
'ENTRIES',
'EXITS',
'ENTRY DIFFS (ABS)',
'EXIT DIFFS (ABS)'
])
may_df[725203:725215]
###Output
_____no_output_____
###Markdown
4. Calculating the Busiest Stations ...is as easy as adding the absolute value of the entry and exit differentials together, then sorting on that value.
###Code
may_df['TOTAL TRAFFIC'] = may_df['ENTRY DIFFS (ABS)'] + may_df['EXIT DIFFS (ABS)']
may_traffic_df = (may_df
                          .groupby('UNIQUE_STATION')[['TOTAL TRAFFIC', 'ENTRY DIFFS (ABS)', 'EXIT DIFFS (ABS)']]  # select columns with a list (tuple selection is deprecated)
.sum()
# .sort_values(by='ENTRY DIFFS (ABS)', ascending=False)
# .head(10)
)
may_traffic_df.sort_values(by='TOTAL TRAFFIC', ascending=False).head(10)
top_30_stations = may_traffic_df.sort_values(by='TOTAL TRAFFIC', ascending=False).head(30)
top_30_stations
(top_30_stations['TOTAL TRAFFIC']
.plot(kind='barh',
color = 'green',
linewidth=10,
width=0.8,
align='center',
figsize =(16,10)
)
)
# alpha=0.5,color='y',rwidth=0.8)
plt.xlabel('Total Traffic per Station', fontsize=15)
plt.ylabel('Station Name', fontsize=15)
plt.title('Top 30 Busiest MTA Subway Stations - May 2017', fontsize=20)
plt.grid(True, axis='x')
plt.gca().invert_yaxis()
# plt.savefig('../img/top30.png', dpi=200, bbox_inches = 'tight')
###Output
_____no_output_____
###Markdown
5. Location, Location, Location: Enter the Goog Because we intend to compare station locations to locations of tech companies and universities, we need to grab the latitude and longitude of our stations, companies, and universities to compute that distance. We'll use the Google Maps API to query lat/lng for each one.
###Code
def find_location(search_array, search_type, gkey, lcn_dict=None):
"""
This test function will take in an array of searchable venues, look them up on the Google API, and return their
location as latitude and longitude.
Type key:
0. Subway Station
1. Company
2. College
---
IN: ndarray, list, etc. of locations; type (int); Google API key (str)
* specify dictionary for lcn_dict parameter if you'd like to add to it
OUT: dictionary of venues as keys and latitude/longitude as items; venues for which no data was found
"""
gmaps = googlemaps.Client(key=gkey)
no_value = []
search_types = ['subway station', 'company headquarters', 'college']
if lcn_dict == None:
lcn_dict = {}
for venue in search_array:
print(f"Searching for: {venue} \r", end='')
search_me = venue + search_types[search_type] + ' NEW YORK'
result = gmaps.geocode(search_me)
if result:
latitude = result[0]['geometry']['location']['lat']
longitude = result[0]['geometry']['location']['lng']
else:
latitude = np.nan
longitude = np.nan
print(f"*** {venue}: No value found!")
no_value.append(venue)
lcn_dict[venue] = (latitude, longitude)
print("Complete! ")
return lcn_dict, no_value
# comment code to load from JSON file later
# station_names = may_df['UNIQUE_STATION'].unique() # numpy array of unique names
# station_lcns, stations_nolcndata = find_location(station_names, 0, dl_google_key)
# for station in stations_nolcndata:
# print(station)
# cycles through stations with no location data and offers opportunity to rename
# both in list and df
# this is tricky because it increases the chance of mismatch between dataframes and location dictionaries
# which is why much of the renaming has been done at the beginning of the cleaning section
# for index, station in enumerate(stations_nolcndata[:]):
# print(f"Enter new name for {station} (or hit enter to leave as-is)")
# newname = input("> ").strip()
# if newname != '':
# may_df.loc[may_df['UNIQUE_STATION'] == station, 'UNIQUE_STATION'] = newname
# stations_nolcndata[index] = newname
# del station_lcns[station]
# for station in stations_nolcndata:
# print(station)
# try one more time with stations that didn't return lat/lng info
# station_lcns_updated, stations_nolcndata = find_location(stations_nolcndata, 0, dl_google_key, lcn_dict=station_lcns)
# station_lcns = station_lcns_updated
###Output
_____no_output_____
###Markdown
Google routinely returns some incorrect values for relevant stations — e.g. it returns the same lat/lng for every 23rd St station, which is the correct value for the 23rd St - NRW. Regretfully, we opt for the manual approach to correct these. (Source: http://web.mta.info/developers/data/nyct/subway/Stations.csv)
###Code
# station_lcns['23 ST - 1'] = [40.884667, -73.900870]
# station_lcns['23 ST - 6'] = [40.739864, -73.986599]
# station_lcns['23 ST - CE'] = [40.745906, -73.998041]
# station_lcns['23 ST - FM'] = [40.742878, -73.992821]
# station_lcns['TWENTY THIRD ST - 1'] = [40.742878, -73.992821]
# len(station_lcns)
# 50 notable tech companies in NYC
companies = ['Amazon','AOL','Apple','AppNexus','Bloomberg','Blue Apron 40 w 23rd St','BuzzFeed','ETrade','Etsy','Facebook',
'Fresh Direct','Google','Information Builders','LinkedIn','MediaRadar','Microsoft','Oscar',
'Salesforce','Shutterstock','Spotify','Tumblr','Twitter','VICE Media','WeWork','Yelp','Yext',
'Zocdoc','Betterment','Bonobos','Compass','Grubhub','Fareportal','FanDuel','MediaMath',
             'Integral Ad Science','MongoDB','Sprinklr','Uber','Vimeo','Intersection','Snapchat','Flatiron Health',
'Gilt','LearnVest','OnDeck','Squarespace','Thrillist','Warby Parker','TMP Worldwide','Refinery29']
# company_lcns, companies_nolcndata = find_location(companies, 1, dl_google_key)
# 10 notable universities in NYC
colleges = ['City University of New York Harlem','Yeshiva University', 'Columbia University',
'New York Institute of Technology','New York University','Pace University', 'Fordham University',
'The New School','Hunter College','Baruch College']
# company_and_college_lcns, colleges_nolcndata = find_location(colleges, 2, dl_google_key, lcn_dict=company_lcns)
# throw these into JSON files for later
# with open('../json/station_lcns.txt', 'w') as fs:
# json.dump(station_lcns, fs)
# with open('../json/candc_lcns.txt', 'w') as fc:
# json.dump(company_and_college_lcns, fc)
# and then read them
with open('../json/station_lcns.txt', 'r') as fs:
station_lcns_load = json.load(fs)
with open('../json/candc_lcns.txt', 'r') as fc:
company_and_college_lcns_load = json.load(fc)
###Output
_____no_output_____
###Markdown
A quick check to make sure the counts are correct (stations should be 475, companies/colleges should be 60).
###Code
print(len(station_lcns_load))
print(len(company_and_college_lcns_load))
###Output
475
60
###Markdown
**FYI, latitude and longitude of NYC:** * Latitude = 40.7141667 * Longitude = -74.0063889 6. Intermission (and Pickling) As we've changed some names along the way and taken the time to pull a bunch of location data down from Google, we'll save the DF so we can pick back up here at any time while working on the rest.
###Code
# with open('../pickles/may_traffic_pre_merge.pkl', 'wb') as picklefile:
# pickle.dump(may_traffic_df, picklefile)
# with open('../pickles/may_traffic_pre_merge.pkl', 'rb') as picklefile:
# may_traffic_df = pickle.load(picklefile)
###Output
_____no_output_____
###Markdown
7. Merging Station Location with Traffic Turning Location Dicts into DFs I'm not sure why we didn't make these things into DFs or Series in the function, but now we take dictionaries of lat/lng and merge with the traffic info df using the station names as indices.
###Code
# if these don't load from JSON you'll need to change variables
station_locations = pd.DataFrame.from_dict(station_lcns_load, orient='index')
station_locations.columns = ['Latitude', 'Longitude']
company_college_locations = pd.DataFrame.from_dict(company_and_college_lcns_load, orient='index')
company_college_locations.columns = ['Latitude', 'Longitude']
station_locations = station_locations.reindex(station_locations.index.rename('UNIQUE_STATION'))
station_locations.head(10)
company_college_locations.head(10)
###Output
_____no_output_____
###Markdown
Joining Station Traffic and Location
###Code
may_traffic_df = may_traffic_df.join(station_locations)
may_traffic_df.sort_values(by='TOTAL TRAFFIC', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
8. Calculating Weight of Proximity of Station to Companies/Colleges Building Necessary Formulas Busy stations don't matter if their patrons wouldn't be interested in the event in question. The following illustrates two examples of how one might weight a station-company relationship based on distance. We elected to use the former, given its simplicity and the fact that we are relying on assumptions here rather than concrete data about how likely employees are to use one station or another.
###Code
# a couple different ways of weighting a station based on proximity to companies/colleges
def find_weight1(dist):
"""Returns a number from 0-1 based on distance between station and company/college,
favoring closer distances and discounting anything farther than 1 mile away.
---
IN: Distance in miles (int or float)
OUT: Weight between 0 and 1 (int or float)
"""
if 0 < dist <= 0.25:
weight = 1
elif 0.25 < dist <= 0.5:
weight = 0.75
elif 0.5 < dist <= 0.75:
weight = 0.5
elif 0.75 < dist <= 1:
weight = 0.25
else:
weight = 0
return weight
def find_weight2(dist):
"""Returns a number from 0-1 based on a curve we thought might represent the
likelihood that someone at a particular company/college would use the
corresponding station. It is completely subjective and has no correlation with
any collected or observed data whatsoever, so justifying its use would be
difficult. Still, we were curious to see how it might work, so we coded it up
as an option.
---
IN: Distance in miles (int or float)
OUT: Weight between 0 and 1 (float)
"""
return 1 - (1 - 1/50**dist)**4
def calculate_distance(lat1, lng1, lat2, lng2):
"""Takes the latitude and longitude of two points, calculates distance, and returns
it in miles.
---
IN: Two pairs of lat/lng (int or float)
OUT: Distance between in miles (float)
"""
ll_1 = (lat1, lng1)
ll_2 = (lat2, lng2)
return vincenty(ll_1,ll_2).miles
def station_weight(s_lat, s_lng):
"""To be applied to the stations DF to find cumulative weighting for each
station based on proximity to companies/colleges of interest.
---
IN: DF row (series obj)
OUT: Cumulative weight (float)"""
# is there a better way to catch a nan?
if not s_lat > 0:
return np.nan
weight_list = []
for row in company_college_locations.iterrows():
c_lat = row[1][0]
c_lng = row[1][1]
dist = calculate_distance(s_lat, s_lng, c_lat, c_lng)
# change formula name to use
weight = find_weight1(dist)
weight_list.append(weight)
cumulative_weight = np.sum(weight_list)
return cumulative_weight
###Output
_____no_output_____
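###Markdown
Note that `vincenty` was removed from geopy in version 2.0. If this notebook is run against a newer geopy, a drop-in replacement for `calculate_distance` (same signature as above) is sketched below using `geodesic`.
###Code
from geopy.distance import geodesic

def calculate_distance(lat1, lng1, lat2, lng2):
    """Distance in miles between two lat/lng points, for geopy >= 2.0."""
    return geodesic((lat1, lng1), (lat2, lng2)).miles
###Output
_____no_output_____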
###Markdown
Generating Weights for Each Station
###Code
may_traffic_df['Weight'] = may_traffic_df.apply(lambda row: station_weight(row['Latitude'], row['Longitude']), axis=1)
may_traffic_df.sort_values(by="Weight", ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Calculate "Relevance" of Each Station In order to find out which stations should be priorities for flyer distribution, we multiply the calculated weight by the total traffic of the station, and then divide by 10,000,000 to scale it to a readable number.
###Code
may_traffic_df['Relevance'] = (may_traffic_df['TOTAL TRAFFIC'] * may_traffic_df['Weight']) / 1e7
may_traffic_df.sort_values(by="Relevance", ascending=False).head(20)
dist1 = may_traffic_df['TOTAL TRAFFIC']
dist2 = may_traffic_df.Weight
s = may_traffic_df['Relevance']*60
plt.figure(figsize=(20,12))
plt.scatter(dist1, dist2, s=s)
# plt.xscale('log')
plt.grid(True, which='both')
plt.xticks([i*1e6 for i in range(10)], [str(i)+'M' for i in range(10)])
plt.xlabel('Total Station Traffic', fontsize = 15)
plt.ylabel('Weight of Proximity to Tech Companies & Universities',fontsize = 15)
plt.title('Scatterplot of Station Traffic & Proximity Weight',fontsize = 20);
# plt.savefig('../img/scatter1.png', dpi=200, bbox_inches='tight')
top_30_important_stations = may_traffic_df.sort_values(by="Relevance", ascending=False).head(30)
top_30_important_stations
(top_30_important_stations['Relevance']
.plot(kind='barh',
color = 'red',
linewidth=10,
width=0.8,
align='center',
figsize =(16,10)
)
)
plt.xlabel('Station Relevance Score', fontsize = 15)
plt.ylabel('Station Name', fontsize = 15)
plt.title('30 Most Relevant Subway Stations', fontsize = 20)
plt.xticks([i for i in range(14)])
plt.grid(True, axis='x')
plt.gca().invert_yaxis();
plt.savefig('../img/top30relevant.png', dpi=200, bbox_inches = 'tight')
top_10_important_station = may_traffic_df.sort_values(by="Relevance", ascending=False).head(10)
top_10_important_station
dist1 = top_10_important_station['TOTAL TRAFFIC']
dist2 = top_10_important_station.Weight
n = top_10_important_station.index
labels = [i for i in n]
s = top_10_important_station['Relevance']**3.5
plt.figure(figsize=(24, 16))
plt.scatter(dist1, dist2, s=s, color = 'blue')
for label, x, y in zip(labels, dist1, dist2):
plt.annotate(
label,
xy=(x, y), xytext=(-10, 30),
textcoords='offset points', ha='right', va='bottom',
bbox=dict(boxstyle='round,pad=0.6', fc='pink', alpha=0.6),
arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0'))
plt.xlabel('Total Traffic Per Station - May 2017',fontsize = 20)
plt.ylabel('Weight of Proximity to Tech Companies & Universities', fontsize = 20)
plt.title('Top 10 Most Relevant Subway Stations', fontsize = 30);
# plt.savefig('scatter2.png', dpi=200, bbox_inches = 'tight')
###Output
_____no_output_____ |
9_30PST_Copy_of_latest_sun_mar_01_unit_2_sprint_3_Retake_1.ipynb | ###Markdown
_Lambda School Data Science, Unit 2_ Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍕 For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." Your challenge: Predict whether inspections failedThe target is the `Fail` column.- When the food establishment failed the inspection, the target is `1`.- When the establishment passed, the target is `0`. Run this cell to install packages in Colab:
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
###Output
_____no_output_____
###Markdown
Run this cell to load the data:
###Code
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
###Output
_____no_output_____
###Markdown
Part 1: PreprocessingYou may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding._To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ Part 2: Modeling**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ Part 3: VisualizationMake visualizations for model interpretation. (You may use any libraries.) Choose two of these types:- Confusion Matrix- Permutation Importances- Partial Dependence Plot, 1 feature isolation- Partial Dependence Plot, 2 features interaction- Shapley Values_To earn a score of 3 for this part, make four of these visualization types._ Part 1: Preprocessing> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
###Code
# Exploratory Data Analyses
train.describe()
test.describe()
train.head()
test.head()
# check Nulls
train.isnull().sum()
test.isnull().sum()
# check 'Fail' class imbalance via a Plot: Pass Vs Fail
(train['Fail'].map({0: 'Passed', 1: 'Failed'}).value_counts(normalize=True) * 100)\
.plot.bar(title='Percentages of Inspection Results', figsize=(10, 5))
# Drop 'AKA Name' column in train set since I will use the "DBA Name" column and the former has nulls while the latter does not,
# and both serve similar enough functions as a business identifier for my purposes
train = train.drop(columns=['AKA Name'])
y = train['Fail']
y.unique()
'''
Which evaluation measure is appropriate to use for a classification model with imbalanced classes?
Precision tells us how many of the predicted positive samples are actually relevant, i.e. it penalizes classifying a sample as positive when it is not. This metric is a good choice for the imbalanced classification scenario.
(Paraphrased from "Metrics for Imbalanced Classification", Towards Data Science, May 9, 2019.)
'''
# May use PRECISION METRIC? (instead of Accuracy in ntbk) for validation because our 2 class ratio is about 3:1; ~significant imbalance
# TEST INSTRUCTION: estimate your ROC AUC validation score
# find how many of Pass and Failed in our train['Fail']
y.value_counts(normalize=True)
import pandas as pd
# from LS_DSPT4_231.ipynb (Mod 1)
'''
Next, do a time-based split:
Brief Description: This dataset contains information from inspections of restaurants and other
food establishments in Chicago from January 1, 2010 to the present.
'''
train['Inspection Date'] = pd.to_datetime(train['Inspection Date'])
# TRIED to split val from train, but got AttributeError: Can only use .dt accessor with datetimelike values..
# may have to feature engineer Inspection Date to parse out only date!
# Attempt 2: Parsing out only YEAR from train['Inspection Date']- works!
train['Inspection Date'] = pd.to_datetime(train['Inspection Date'])
train['Inspection Year'] = train['Inspection Date'].dt.year
# Parsing out only YEAR from test['Inspection Date']- works!
test['Inspection Date'] = pd.to_datetime(test['Inspection Date'])
test['Inspection Year'] = test['Inspection Date'].dt.year
# split_train = train[train['Inspection Date'].dt.year <= 2016]
# val = train[train['Inspection Date'].dt.year > 2017]
# Check if ~80 % train; 20% val split was chosen
#split_train.shape, val.shape
# May fine tune split using months additionally
'''
val.value_counts(normalize=True) (gives err df obj has no val_cnts attrib..)
# check 'Fail' class imbalance via a Plot: Pass Vs Fail
# ?!?!
# (train_split['Inspection Year'].map({ ('Inspection Year'<= 2016): 'train_split', 1: 'Failed'}).value_counts(normalize=True) * 100)\
# .plot.bar(title='Percentages of Inspection Results', figsize=(10, 5))
'''
#Feature Engineer 'Any Failed' column to get ROC score >=.6
train['Any Failed'] = train.groupby('Address')['Fail'].transform(lambda x: int((x == 1).any()))
test['Any Failed'] = test.groupby('Address')['Fail'].transform(lambda x: int((x == 1).any()))
###Output
_____no_output_____
###Markdown
Part 2: Modeling> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.>> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
###Code
#ATTEMPT 2: getting invalid type promotion err
# Try a shallow decision tree as a fast, first model
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
target = 'Fail'
# features = ['Inspection Type', 'Any Failed', 'Facility Type', 'Latitude', 'Longitude']
features = ['Inspection Type', 'Zip', 'Any Failed', 'License #', 'Facility Type', 'Latitude', 'Longitude']
X_train, X_test, y_train, y_test = train_test_split(train[features], train[target])
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier()
)
pipeline.fit(X_train, y_train)
acc_score = pipeline.score(test[features], test[target])
ra_score = roc_auc_score(test[target], pipeline.predict(test[features]))
print(f'Test Accuracy: {pipeline.score(X_test, y_test)}')
print(f'Test ROC AUC: {roc_auc_score(y_test, pipeline.predict(X_test))}\n')
print(f'Val Accuracy: {acc_score}')
print(f'Val ROC AUC: {ra_score}')
###Output
Test Accuracy: 0.7464365513521843
Test ROC AUC: 0.6447748355312658
Val Accuracy: 0.8168843175777187
Val ROC AUC: 0.7098083334312979
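###Markdown
The instructions also allow estimating the validation ROC AUC with cross-validation instead of a single split; a sketch using the pipeline and features defined above:
###Code
from sklearn.model_selection import cross_val_score

# 5-fold cross-validated ROC AUC on the full training data (a sketch)
cv_scores = cross_val_score(pipeline, train[features], train[target],
                            cv=5, scoring='roc_auc')
print('Cross-validated ROC AUC:', cv_scores.mean())
###Output
_____no_output_____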
###Markdown
Part 3: Visualization> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:>> - Permutation Importances> - Partial Dependence Plot, 1 feature isolation> - Partial Dependence Plot, 2 features interaction> - Shapley Values
###Code
#Perm Impt: https://colab.research.google.com/drive/1z1R0m3XsaZMjukynx2Ub-531Sh32xPln#scrollTo=QxhmJFxvKDbM (u2s3m3)
# 1) PERMUTATION IMPORTANCES
# a) just to peek at which features are important to our model, get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
# Plot feature importances
%matplotlib inline
import matplotlib.pyplot as plt
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
# BEFORE PERMUTING: Sequence of the feature to be permuted: from Features Importance above, chose Latitude adn Inspection type columns/features to Permute
import numpy as np
for feature in ['Latitude', 'Inspection Type']:
# PERMUTE LATITUDE AND INSPECTION TYPE FEATURES
X_train_permuted = X_train.copy() #copy whole df to submit all at once
X_train_permuted[feature] = np.random.permutation(X_train[feature])
X_test_permuted = X_test.copy()
X_test_permuted[feature] = np.random.permutation(X_test[feature])
score = pipeline.score(X_test, y_test)
score_permuted = pipeline.score(X_test_permuted, y_test) #Calc. accuracy on the permuted val dataset
print(f'Validation accuracy with {feature}: {score}')
print(f'Validation accuracy with {feature} permuted: {score_permuted}')
  print(f'Permutation importance: {score - score_permuted}\n')  # compare the unpermuted and permuted scores on the same data
#2) Shapley Values: SHAP Values (an acronym from SHapley Additive exPlanations) break down a prediction to show the impact of each feature.
# from https://colab.research.google.com/drive/1r2VFMtBAt3sLVIQfsMWyQXt8hB9gziRA#scrollTo=Ep1aBVpVcrDj (FINAL VERSION 234 u2s3m4.ipynb)
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier
processor = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median')
)
val = train[train['Inspection Date'].dt.year > 2017]
X_val = val[features]
y_val = val[target]
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model = XGBClassifier(n_estimators=1000, n_jobs=-1)
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc',
early_stopping_rounds=10)
from sklearn.metrics import roc_auc_score
X_test_processed = processor.transform(X_test)
class_index = 1
y_pred_proba = model.predict_proba(X_test_processed)[:, class_index] #get predicted probabilities instead of predicted class; if call on model directly, it predicts the class, not the probability
print(f'Test ROC AUC for class {class_index}:')
print(roc_auc_score(y_test, y_pred_proba)) # Ranges from 0-1, higher is better
import shap
# Get an individual observation to explain.
# For example, the 0th row from the test set.
row = X_test.iloc[[0]]
row
explainer = shap.TreeExplainer(model) # class instance of Shap; tree-based explainer; gradient boosting
row_processed = processor.transform(row) # pre-process this row's raw data
shap_values = explainer.shap_values(row_processed) # calculate that row's shap values
# Use force plot to visualize Shapley values
shap.initjs() #initialize javascript
shap.force_plot(
base_value=explainer.expected_value, # minimum expectation our model has of any given prediction
shap_values=shap_values, # individual values for our row; what exactly are we predicting?
features=row, # what are the values of the features we are predicting from?
link='logit' # For classification, this shows predicted probabilities (may only work for binary classification)
)
explainer.expected_value, y_train.mean()
###Output
_____no_output_____ |
notebooks/07_model_tree_rf.ipynb | ###Markdown
set model parameters and capture data
###Code
inputs = os.path.join('..', 'data', '03_processed')
models_reports = os.path.join('..', 'data', '04_models')
model_outputs = os.path.join('..', 'data', '05_model_output')
reports = os.path.join('..', 'data', '06_reporting')
X_train = pd.read_csv(os.path.join(inputs, 'X_train.csv'), index_col='id')
X_train_onehot = pd.read_csv(os.path.join(inputs, 'X_train_onehot.csv'), index_col='id')
y_train = pd.read_csv(os.path.join(inputs, 'y_train.csv'), index_col='id')
data_list = [X_train, X_train_onehot, y_train]
for df in data_list:
print(df.shape)
###Output
(354, 14)
(354, 14)
(354, 1)
###Markdown
Machine Learning
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
model_type = 'tree_rf'
ml_dict = {}
columns = X_train.columns.to_list()
scoring = 'neg_mean_squared_error'
# Specify the hyperparameter space
parameters = {'model__max_features':[1, 2, "auto", "log2", None],
              # 'model__n_estimators':[100, 200, 300],  # full grid (superseded by the line below)
              'model__n_estimators':[10], # small value that allows faster tests on the pipeline
'model__random_state':[42]}
ml_model = RandomForestRegressor()
do_transform_label = None
ml_dict[model_type] = {}
clf, ml_dict[model_type]['train_time'], ml_dict[model_type]['prediction_time'] = apply_ml_model(
X_train, y_train, columns, ml_model, parameters, scoring,
do_build_polynomals=False, do_transform_label=do_transform_label,
do_treat_skewness=False,
imputation=Imputer(strategy='median'), scaler=None, smote=False,
testing=True)
ml_dict[model_type]['best_params'], ml_dict[model_type]['best_score'] = get_model_params(clf, scoring)
ml_dict[model_type]['columns'] = columns
print('RESULTS FOR TREE MODEL')
pprint(ml_dict)
###Output
RESULTS FOR TREE MODEL
{'tree_rf': {'best_params': {'model__max_features': 'auto',
'model__n_estimators': 10,
'model__random_state': 42},
'best_score': 10.936919215291752,
'columns': ['crim',
'zn',
'indus',
'chas',
'nox',
'rm',
'age',
'dis',
'rad',
'tax',
'ptratio',
'b',
'lstat',
'if_anomaly'],
'prediction_time': 0.0003,
'train_time': 0.819009}}
###Markdown
save model parameters and metrics
###Code
save_model_parameters(models_reports, model_type, clf)
save_model_metrics(model_outputs, model_type, ml_dict)
###Output
_____no_output_____
###Markdown
set model parameters and capture data
###Code
inputs = os.path.join('..', 'data', '03_processed')
models_reports = os.path.join('..', 'data', '04_models')
model_outputs = os.path.join('..', 'data', '05_model_output')
reports = os.path.join('..', 'data', '06_reporting')
X_train = pd.read_csv(os.path.join(inputs, 'X_train.csv'), index_col='id')
X_train_onehot = pd.read_csv(os.path.join(inputs, 'X_train_onehot.csv'), index_col='id')
y_train = pd.read_csv(os.path.join(inputs, 'y_train.csv'), index_col='id')
data_list = [X_train, X_train_onehot, y_train]
for df in data_list:
print(df.shape)
###Output
(4930, 20)
(4930, 26)
(4930, 1)
###Markdown
Machine Learning
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
model_type = 'tree_rf'
ml_dict = {}
###Output
_____no_output_____
###Markdown
    def get_model_params(classifier):
        if target_type == 'regression':
            classifier.best_score_ = -classifier.best_score_
        return classifier.best_params_, classifier.best_score_
###Code
columns = X_train.columns.to_list()
scoring = 'f1'
# Specify the hyperparameter space
parameters = {'model__max_features':[1, 2, "auto", "log2", None],
              # 'model__n_estimators':[100, 200, 300],  # full grid (superseded by the line below)
              'model__n_estimators':[10], # small value that allows faster tests on the pipeline
'model__random_state':[42]}
# ml_model = RandomForestRegressor()
ml_model = RandomForestClassifier()
do_transform_label = None
# df_x = dfs_dict['X_train']
# df_y = dfs_dict['y_train']
# key = 'standard'
ml_dict[model_type] = {}
clf, ml_dict[model_type]['train_time'], ml_dict[model_type]['prediction_time'] = apply_ml_model(
X_train, y_train, columns, ml_model, parameters, scoring,
do_build_polynomals=False, do_transform_label=do_transform_label,
do_treat_skewness=False,
imputation=Imputer(strategy='median'), scaler=None, smote=False,
testing=True)
ml_dict[model_type]['best_params'], ml_dict[model_type]['best_score'] = get_model_params(clf, scoring)
ml_dict[model_type]['columns'] = columns
print('RESULTS FOR TREE MODEL')
pprint(ml_dict)
###Output
RESULTS FOR TREE MODEL
{'tree_rf': {'best_params': {'model__max_features': 'auto',
'model__n_estimators': 10,
'model__random_state': 42},
'best_score': 0.5091648828589008,
'columns': ['gender_male',
'seniorcitizen',
'partner',
'dependents',
'tenure',
'phoneservice',
'multiplelines',
'internetservice',
'onlinesecurity',
'onlinebackup',
'deviceprotection',
'techsupport',
'streamingtv',
'streamingmovies',
'contract',
'paperlessbilling',
'paymentmethod',
'monthlycharges',
'totalcharges',
'if_anomaly'],
'prediction_time': 0.0003001,
'train_time': 1.859938}}
###Markdown
save model parameters and metrics
###Code
save_model_parameters(models_reports, model_type, clf)
save_model_metrics(model_outputs, model_type, ml_dict)
###Output
_____no_output_____
###Markdown
set model parameters and capture data
###Code
inputs = os.path.join('..', 'data', '03_processed')
models_reports = os.path.join('..', 'data', '04_models')
model_outputs = os.path.join('..', 'data', '05_model_output')
reports = os.path.join('..', 'data', '06_reporting')
X_train = pd.read_csv(os.path.join(inputs, 'X_train.csv'), index_col='id')
X_train_onehot = pd.read_csv(os.path.join(inputs, 'X_train_onehot.csv'), index_col='id')
y_train = pd.read_csv(os.path.join(inputs, 'y_train.csv'), index_col='id')
data_list = [X_train, X_train_onehot, y_train]
for df in data_list:
print(df.shape)
###Output
(7000, 46)
(7000, 150)
(7000, 1)
###Markdown
Machine Learning
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
model_type = 'tree_rf'
ml_dict = {}
###Output
_____no_output_____
###Markdown
    def get_model_params(classifier):
        if target_type == 'regression':
            classifier.best_score_ = -classifier.best_score_
        return classifier.best_params_, classifier.best_score_
###Code
columns = X_train.columns.to_list()
scoring = 'f1'
# Specify the hyperparameter space
parameters = {'model__max_features':[1, 2, "auto", "log2", None],
              # 'model__n_estimators':[100, 200, 300],  # full grid (superseded by the line below)
              'model__n_estimators':[10], # small value that allows faster tests on the pipeline
'model__random_state':[42]}
# ml_model = RandomForestRegressor()
ml_model = RandomForestClassifier()
do_transform_label = None
# df_x = dfs_dict['X_train']
# df_y = dfs_dict['y_train']
# key = 'standard'
ml_dict[model_type] = {}
clf, ml_dict[model_type]['train_time'], ml_dict[model_type]['prediction_time'] = apply_ml_model(
X_train, y_train, columns, ml_model, parameters, scoring,
do_build_polynomals=False, do_transform_label=do_transform_label,
do_treat_skewness=False,
imputation=Imputer(strategy='median'), scaler=None, smote=False,
testing=True)
ml_dict[model_type]['best_params'], ml_dict[model_type]['best_score'] = get_model_params(clf, scoring)
ml_dict[model_type]['columns'] = columns
print('RESULTS FOR TREE MODEL')
pprint(ml_dict)
###Output
RESULTS FOR TREE MODEL
{'tree_rf': {'best_params': {'model__max_features': None,
'model__n_estimators': 10,
'model__random_state': 42},
'best_score': 0.29783632833450374,
'columns': ['transactiondt',
'transactionamt',
'productcd',
'card1',
'card2',
'card3',
'card4',
'card5',
'card6',
'addr1',
'addr2',
'dist1',
'p_emaildomain',
'r_emaildomain',
'c1',
'c2',
'c3',
'c4',
'c5',
'c6',
'c7',
'c8',
'c9',
'c10',
'c11',
'c12',
'c13',
'c14',
'd1',
'd2',
'd3',
'd4',
'd5',
'd10',
'd11',
'd15',
'm1',
'm2',
'm3',
'm4',
'm5',
'm6',
'm7',
'm8',
'm9',
'if_anomaly'],
'prediction_time': 0.0009012,
'train_time': 16.721983}}
###Markdown
save model parameters and metrics
###Code
save_model_parameters(models_reports, model_type, clf)
save_model_metrics(model_outputs, model_type, ml_dict)
###Output
_____no_output_____ |
01 Machine Learning/scikit_examples_jupyter/cluster/plot_agglomerative_clustering.ipynb | ###Markdown
Agglomerative clustering with and without structure===================================================This example shows the effect of imposing a connectivity graph to capturelocal structure in the data. The graph is simply the graph of 20 nearestneighbors.Two consequences of imposing a connectivity can be seen. First clusteringwith a connectivity matrix is much faster.Second, when using a connectivity matrix, single, average and completelinkage are unstable and tend to create a few clusters that grow veryquickly. Indeed, average and complete linkage fight this percolation behaviorby considering all the distances between two clusters when merging them (while single linkage exaggerates the behaviour by considering only theshortest distance between clusters). The connectivity graph breaks thismechanism for average and complete linkage, making them resemble the morebrittle single linkage. This effect is more pronounced for very sparse graphs(try decreasing the number of neighbors in kneighbors_graph) and withcomplete linkage. In particular, having a very small number of neighbors inthe graph, imposes a geometry that is close to that of single linkage,which is well known to have this percolation instability.
###Code
# Authors: Gael Varoquaux, Nelle Varoquaux
# License: BSD 3 clause
import time
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph
# Generate sample data
n_samples = 1500
np.random.seed(0)
t = 1.5 * np.pi * (1 + 3 * np.random.rand(1, n_samples))
x = t * np.cos(t)
y = t * np.sin(t)
X = np.concatenate((x, y))
X += .7 * np.random.randn(2, n_samples)
X = X.T
# Create a graph capturing local connectivity. Larger number of neighbors
# will give more homogeneous clusters to the cost of computation
# time. A very large number of neighbors gives more evenly distributed
# cluster sizes, but may not impose the local manifold structure of
# the data
knn_graph = kneighbors_graph(X, 30, include_self=False)
for connectivity in (None, knn_graph):
for n_clusters in (30, 3):
plt.figure(figsize=(10, 4))
for index, linkage in enumerate(('average',
'complete',
'ward',
'single')):
plt.subplot(1, 4, index + 1)
model = AgglomerativeClustering(linkage=linkage,
connectivity=connectivity,
n_clusters=n_clusters)
t0 = time.time()
model.fit(X)
elapsed_time = time.time() - t0
plt.scatter(X[:, 0], X[:, 1], c=model.labels_,
cmap=plt.cm.nipy_spectral)
plt.title('linkage=%s\n(time %.2fs)' % (linkage, elapsed_time),
fontdict=dict(verticalalignment='top'))
plt.axis('equal')
plt.axis('off')
plt.subplots_adjust(bottom=0, top=.89, wspace=0,
left=0, right=1)
plt.suptitle('n_cluster=%i, connectivity=%r' %
(n_clusters, connectivity is not None), size=17)
plt.show()
###Output
_____no_output_____ |
DataCamp_Sharpe_Ratio.ipynb | ###Markdown
**Project Description**---An investment may make sense if we expect it to return more money than it costs. But returns are only part of the story because they are risky - there may be a range of possible outcomes. How does one compare different investments that may deliver similar results on average, but exhibit different levels of risks?Enter William Sharpe. He introduced the reward-to-variability ratio in 1966 that soon came to be called the Sharpe Ratio. It compares the expected returns for two investment opportunities and calculates the additional return per unit of risk an investor could obtain by choosing one over the other. In particular, it looks at the difference in returns for two investments and compares the average difference to the standard deviation (as a measure of risk) of this difference. A higher Sharpe ratio means that the reward will be higher for a given amount of risk. It is common to compare a specific opportunity against a benchmark that represents an entire category of investments.The Sharpe ratio has been one of the most popular risk/return measures in finance, not least because it's so simple to use. It also helped that Professor Sharpe won a Nobel Memorial Prize in Economics in 1990 for his work on the capital asset pricing model (CAPM).Let's learn about the Sharpe ratio by calculating it for the stocks of the two tech giants Facebook and Amazon. As a benchmark, we'll use the S&P 500 that measures the performance of the 500 largest stocks in the US.
###Code
# Importing required modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Settings to produce nice plots in a Jupyter notebook
plt.style.use('fivethirtyeight')
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
# Reading in the data
stock_data = pd.read_csv('drive/MyDrive/DataCamp_RiskReturn/stock_data.csv', parse_dates = True,
index_col = ['Date']).dropna()
benchmark_data = pd.read_csv('drive/MyDrive/DataCamp_RiskReturn/benchmark_data.csv', parse_dates =
True, index_col = ['Date']).dropna()
stock_data
stock_data.info()
benchmark_data
benchmark_data.info()
# Display summary for stock_data
print('Stocks\n')
# ... YOUR CODE FOR TASK 2 HERE ...
stock_data.info()
print(stock_data.head())
# Display summary for benchmark_data
print('\nBenchmarks\n')
# ... YOUR CODE FOR TASK 2 HERE ...
benchmark_data.info()
print(benchmark_data.head())
# visualize the stock_data
stock_data.plot(subplots = True, title = 'Stock Data');
benchmark_data.plot(title = 'S&P 500')
###Output
_____no_output_____
###Markdown
**The inputs for the Sharpe Ratio: Starting with Daily Stock Returns**The Sharpe Ratio uses the difference in returns between the two investment opportunities under consideration.However, our data show the historical value of each investment, not the return. To calculate the return, we need to calculate the percentage change in value from one day to the next. We'll also take a look at the summary statistics because these will become our inputs as we calculate the Sharpe Ratio.
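In formula form, the daily return on day $t$ is $r_t = \frac{P_t - P_{t-1}}{P_{t-1}}$, which is exactly what `pct_change()` computes.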
###Code
# calculate daily stock_data returns
stock_returns = stock_data.pct_change()
# plot the daily returns
# ... YOUR CODE FOR TASK 5 HERE ...
stock_returns.plot()
# summarize the daily returns
# ... YOUR CODE FOR TASK 5 HERE ...
stock_returns.describe()
# calculate daily benchmark_data returns
sp_returns = benchmark_data['S&P 500'].pct_change()
# plot the daily returns
sp_returns.plot()
# summarize the daily returns
sp_returns.describe()
# calculate the difference in daily returns
excess_returns = stock_returns.sub(sp_returns, axis = 0)
# plot the excess_returns
excess_returns.plot();
# summarize the excess_returns
excess_returns.describe()
###Output
_____no_output_____
###Markdown
**The Sharpe Ratio, Step 1: The Average Difference in Daily Returns Stocks vs S&P 500**Now we can finally start computing the Sharpe Ratio. First we need to calculate the average of the excess_returns. This tells us how much more or less the investment yields per day compared to the benchmark.
###Code
# calculate the mean of excess_returns
avg_excess_return = excess_returns.mean()
# plot avg_excess_returns
avg_excess_return.plot.bar(title = 'Mean of the Return Difference');
###Output
_____no_output_____
###Markdown
**The Sharpe Ratio, Step 2: Standard Deviation of the Return Difference**It looks like there was quite a bit of a difference between average daily returns for Amazon and Facebook.Next, we calculate the standard deviation of the excess_returns. This shows us the amount of risk an investment in the stocks implies as compared to an investment in the S&P 500.
###Code
# calculate the standard deviations
sd_excess_return = excess_returns.std()
# plot the standard deviations
sd_excess_return.plot.bar(title =
'Standard Deviation of the Return Difference');
# calculate the daily sharpe ratio
daily_sharpe_ratio = avg_excess_return.div(sd_excess_return)
# annualize the sharpe ratio
annual_factor = np.sqrt(252)
annual_sharpe_ratio = daily_sharpe_ratio.mul(annual_factor)
# plot the annualized sharpe ratio
annual_sharpe_ratio.plot.bar(title = 'Annualized Sharpe Ratio: Stocks vs S&P 500');
###Output
_____no_output_____
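###Markdown
For reference, the code above implements the Sharpe ratio as $S = \frac{\text{mean}(R_a - R_b)}{\text{std}(R_a - R_b)}$, where $R_a$ are the stock's daily returns and $R_b$ the benchmark's. Multiplying the daily ratio by $\sqrt{252}$ annualizes it, since there are roughly 252 trading days in a year and the mean scales linearly with the number of periods while the standard deviation scales with its square root.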
###Markdown
**Conclusion**---Given the two Sharpe ratios, which investment should we go for? In 2016, Amazon had a Sharpe ratio twice as high as Facebook. This means that an investment in Amazon returned twice as much compared to the S&P 500 for each unit of risk an investor would have assumed. In other words, in risk-adjusted terms, the investment in Amazon would have been more attractive. This difference was mostly driven by differences in return rather than risk between Amazon and Facebook. The risk of choosing Amazon over FB (as measured by the standard deviation) was only slightly higher, so the Sharpe ratio for Amazon ends up higher mainly due to the higher average daily returns for Amazon. When faced with investment alternatives that offer both different returns and risks, the Sharpe Ratio helps to make a decision by adjusting the returns by the differences in risk and allows an investor to compare investment opportunities on equal terms, that is, on an 'apples-to-apples' basis.
###Code
###Output
_____no_output_____ |
questions/6_Logic_ControlFlow_1.ipynb | ###Markdown
Logic & Control Flow Challenge Problem Password Strength Here you will write a function that determines whether a password is safe or not. The password is considered strong if it is at least 10 characters long and contains at least one digit, one uppercase letter, and one lowercase letter. The password contains only ASCII Latin letters or digits. Input: A password as a string. Output: Boolean value indicating whether the password is safe or not.
###Code
## Fill in this cell with your function
def password_strength(password):
# To run checks on each character in the password we have to use a for loop
# We will learn more about loops next week
for character in password:
        # perform checks on the variable password
        pass  # placeholder so the stub is valid Python; replace with your checks
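
# For reference, one possible solution, shown here only as a sketch. It is kept
# under a different name so the checks below still exercise your own password_strength.
def password_strength_reference(password):
    # Strong: at least 10 characters, with at least one digit,
    # one uppercase letter, and one lowercase letter.
    if len(password) < 10:
        return False
    has_digit = any(ch.isdigit() for ch in password)
    has_upper = any(ch.isupper() for ch in password)
    has_lower = any(ch.islower() for ch in password)
    return has_digit and has_upper and has_lower
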
## Run this cell to check your function
## If your function passes all tests there will be no output
## If there is a problem, python will raise an AssertionError
## Check the error message to see which example is throwing the error
assert password_strength("YaaasQueen18") == True, "Failed Check 1"
assert password_strength("password") == False, "Failed Check 2"
assert password_strength('123456123456') == False, "Failed Check 3"
assert password_strength('QwErTy911poqqqq') == True, "Failed Check 4"
assert password_strength('A1213pokl') == False, "Failed Check 5"
###Output
_____no_output_____ |
day 9 ass 2.ipynb | ###Markdown
Generator program that returns the Armstrong numbers between 1 and 1000 as a generator object.
###Code
def armstrong(num):
for x in range(1,num):
if x>10:
order = len(str(x))
sum = 0
temp = x
while temp > 0:
digit = temp % 10
sum += digit ** order
temp //= 10
if x == sum:
                yield print("Armstrong number found :", x)
lst = list(armstrong(1000))
###Output
Armstrong number found : 153
Armstrong number found : 370
Armstrong number found : 371
Armstrong number found : 407
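###Markdown
A slightly different design (a sketch): yield the numbers themselves rather than printing inside the generator, which makes the results easier to reuse downstream.
###Code
def armstrong_numbers(limit):
    """Yield the Armstrong numbers below `limit`."""
    for x in range(1, limit):
        order = len(str(x))
        if x == sum(int(digit) ** order for digit in str(x)):
            yield x

# Single-digit numbers are trivially Armstrong numbers, so filter them out
# to match the output above.
print([n for n in armstrong_numbers(1000) if n >= 10])  # expected: [153, 370, 371, 407]
###Output
_____no_output_____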
|
module4-logistic-regression/Lecture_4_notes.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 4*--- Logistic Regression- do train/validate/test split- begin with baselines for classification- express and explain the intuition and interpretation of Logistic Regression- use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression modelsLogistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models). Wrangle
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import pandas as pd
train = pd.read_csv(DATA_PATH+'titanic/train.csv', index_col='PassengerId')
test = pd.read_csv(DATA_PATH+'titanic/test.csv', index_col='PassengerId')
###Output
_____no_output_____
###Markdown
EDA
###Code
print(train.shape) # <-- This has the target vector
print(test.shape) # <-- This does not
train.select_dtypes('object').nunique()
def wrangle(df):
df = df.copy()
df.drop(columns=['Name', 'Ticket', 'Cabin'], inplace=True)
return df
train = wrangle(train)
test = wrangle(test)
train.head()
train['Survived'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Split Data Split our TV from our FM
###Code
target = 'Survived'
y = train[target]
X = train.drop(columns=target)
###Output
_____no_output_____
###Markdown
Split our training set into training and **validation** sets We already have a test set. However, since we can only use the test set to assess model performance _after_ we've completed the model, we need data that we can use to _estimate_ test error, data that will allow us to tune our model. This is a **cross-validation** strategy. We'll discuss other strategies later on.
###Code
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Establish Baseline- **Q:** Is this a **regression** or a **classification** problem?- **A:** Classification, so our baseline metric is going to be `accuracy_score`.
###Code
# What is your majority class?
print('Baseline accuracy:', y_train.value_counts(normalize=True).max())
###Output
Baseline accuracy: 0.6235955056179775
###Markdown
Build Our `LinearRegression` Model Two issues:- Categorical features that need to be encoded- Missing values that need to be **imputed**.
###Code
from category_encoders import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
model_lin = make_pipeline(
OneHotEncoder(use_cat_names=True),
SimpleImputer(),
StandardScaler(),
LinearRegression()
)
model_lin.fit(X_train, y_train);
###Output
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
###Markdown
**Question:** How can we access parts of our pipeline? Like if we want to see what our transformed data looks like?
###Code
example = make_pipeline(
OneHotEncoder(use_cat_names=True),
SimpleImputer(),
StandardScaler()
)
XT_train = example.fit_transform(X_train)
# Use `named_steps`
col_names = example.named_steps['onehotencoder'].get_feature_names()
pd.DataFrame(XT_train, columns=col_names).head()
###Output
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
###Markdown
Check `LinearRegression` Metrics
###Code
# from sklearn.metrics import accuracy_score
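# Note: for LinearRegression, .score() returns R^2, not classification accuracy,
# which is part of why these numbers look so poor.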
print('Training Accuracy Score:', model_lin.score(X_train, y_train))
print('Validation Accuracy Score:', model_lin.score(X_val, y_val))
###Output
Training Accuracy Score: 0.38351734071668986
Validation Accuracy Score: 0.44433715476781926
###Markdown
Why is our `LinearRegression` model so bad?
###Code
import matplotlib.pyplot as plt
plt.scatter(X_train['Age'], y_train)
plt.xlabel('Age')
plt.ylabel('Survived');
###Output
_____no_output_____
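###Markdown
One way to see why logistic regression is the right tool here: a linear model can output values below 0 or above 1, which make no sense as probabilities. Logistic regression passes the linear combination of features through the sigmoid function $\sigma(z) = \frac{1}{1 + e^{-z}}$, which maps any real number into $(0, 1)$. A quick sketch:
###Code
import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-6, 6, 200)
sigmoid = 1 / (1 + np.exp(-z))

plt.plot(z, sigmoid)
plt.xlabel('z (linear combination of features)')
plt.ylabel('sigmoid(z)')
plt.title('The sigmoid maps any real number into (0, 1)');
###Output
_____no_output_____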
###Markdown
Build Out `LogisticRegression` Model
###Code
model_log = make_pipeline(
OneHotEncoder(use_cat_names=True),
SimpleImputer(),
StandardScaler(),
LogisticRegression()
)
model_log.fit(X_train, y_train);
###Output
/usr/local/lib/python3.6/dist-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
###Markdown
Check `LogisiticRegression` Metrics
###Code
print('Training Accuracy Score:', model_log.score(X_train, y_train))
print('Validation Accuracy Score:', model_log.score(X_val, y_val))
###Output
Training Accuracy Score: 0.8019662921348315
Validation Accuracy Score: 0.8100558659217877
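###Markdown
The lecture objectives also mention predicting probabilities and interpreting the model. A short sketch using the fitted pipeline above (step names follow `make_pipeline`'s lower-cased defaults):
###Code
# Predicted probabilities (columns: P(died), P(survived)) for the first few validation rows
print(model_log.predict_proba(X_val)[:5])

# Coefficients of the logistic regression step, paired with the encoded feature names
encoder = model_log.named_steps['onehotencoder']
log_reg = model_log.named_steps['logisticregression']
coefficients = pd.Series(log_reg.coef_[0], index=encoder.get_feature_names())
print(coefficients.sort_values())
###Output
_____no_output_____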
###Markdown
Make Predictions
###Code
# Make predictions
y_pred = model_log.predict(test)
# Put predictions into a DataFrame
submission = pd.DataFrame(y_pred, columns=['Survived'], index=test.index)
# Make a CSV file
submission.to_csv('2020-10-01_submission.csv')
###Output
_____no_output_____ |
notebooks/Super_Resolution/remove_text/Upsampling/ellwlb/CBSD68/CBSD68_Pre-Upsampling_Convolutional.ipynb | ###Markdown
Settings
###Code
%env TF_KERAS = 1
import os
sep_local = os.path.sep
print(sep_local)
print(os.getcwd())
import sys
os.chdir('..' + sep_local +'..' + sep_local +'..' + sep_local + '..' + sep_local + '..' + sep_local + '..') # For Linux import
print(os.getcwd())
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Dataset loading
###Code
dataset_name='CBSD68'
images_dir = '.' + sep_local + 'data' + sep_local + '.CBSD68'
validation_percentage = 10
valid_format = 'png'
images_dir
from training.generators.file_image_generator import create_image_lists, get_generators
imgs_list = create_image_lists(
image_dir=images_dir,
validation_pct=validation_percentage,
valid_imgae_formats=valid_format
)
from PIL import Image
trace_image = Image.open(images_dir+sep_local+'original'+sep_local+'{:04d}.png'.format(66))
trace_image
#image = np.asarray(trace_image)/255.0
#timage = put_random_text(image)
#Image.fromarray((timage * 255.0).astype(np.uint8), mode='RGB')
image_size_original=(481, 321, 3)
scale = 2
image_size = list(map(lambda x: x//scale , image_size_original[:-1])) + [image_size_original[-1]]
image_size = (*image_size,)
batch_size = 16
latents_dim = 150
intermediate_dim = 50
image_size
training_generator, testing_generator = get_generators(
images_list=imgs_list,
image_dir=images_dir,
image_size=image_size,
batch_size=batch_size,
class_mode=None
)
###Output
_____no_output_____
###Markdown
input is half of the output
###Code
scale=1
inputs_shape = list(map(lambda x: x//scale , image_size[:-1])) + [image_size[-1]]
inputs_shape = (*inputs_shape, )
image_size, inputs_shape
shrink_fn = lambda image: tf.image.resize(image, inputs_shape[:-1])
enlarge_fn = lambda image: tf.image.resize(image, image_size[:-1])
import numpy as np
import cv2
import random
import string
characters = string.ascii_letters+string.ascii_letters+string.ascii_letters+string.digits+string.punctuation
word_len = range(10)
def random_sentance_generator():
sentance = ''
rn = random.choice(range(1, 20))
for i in range(rn):
sentance = sentance + ' ' + ''.join(random.choice(characters) for i in word_len)
return sentance
random_sentance_generator()
fonts = list(set([eval('cv2.{}'.format(i)) for i in dir(cv2) if i.startswith('FONT_')]))
def put_random_text(image):
image_dim = image.shape[:-1]
#scale=1
image_cv = cv2.cvtColor((image * 255).astype(np.uint8), cv2.IMREAD_COLOR)
#image_cv = cv2.resize( image_cv, (image_dim[0]*scale, image_dim[1]*scale))
for _ in range(5):
x_loc = random.choice(range(image_cv.shape[0]))
y_loc = random.choice(range(image_cv.shape[1]))
loc = (x_loc, y_loc)
cv2.putText(img=image_cv,
text=random_sentance_generator(),
org=loc,
fontFace=random.choice(fonts),
fontScale=random.choice(range(1, 3)),
color=(random.choice(range(256)), random.choice(range(256)), random.choice(range(256))),
thickness=random.choice(range(3))
)
image_cv = np.asarray(Image.fromarray(np.asarray(cv2.resize(image_cv, image_dim)).astype(np.uint8), mode='RGB')\
.rotate(90, expand=True))/255.0
return image_cv
def put_text_fn(images):
if len(images.shape)<4:
images = [images]
return np.array([put_random_text(image) for image in images])
def generator_text_putter(generator):
while True:
batch = next(generator)
yield put_text_fn(batch), batch
train_ds = tf.data.Dataset.from_generator(
lambda: generator_text_putter(training_generator),
output_types= (tf.float32, tf.float32),
output_shapes=(tf.TensorShape((batch_size, ) + inputs_shape), tf.TensorShape((batch_size, ) + image_size)),
)
test_ds = tf.data.Dataset.from_generator(
lambda: generator_text_putter(testing_generator),
output_types= (tf.float32, tf.float32),
output_shapes=(tf.TensorShape((batch_size, ) + inputs_shape), tf.TensorShape((batch_size, ) + image_size)),
)
_instance_scale=1.0
for data in train_ds:
_instance_scale = float(data[0].numpy().max())
break
_instance_scale
import numpy as np
from collections.abc import Iterable
if isinstance(image_size, Iterable):
_outputs_shape = np.prod(image_size)
_outputs_shape
###Output
_____no_output_____
###Markdown
Model's Layers definition
###Code
kernel_size=3
stride = 2
c = list(map(lambda x: x// (stride*stride), image_size[:-1]))
c = (*c, intermediate_dim)
c
enc_lays = [
tf.keras.layers.UpSampling2D(size=(2, 2)),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latents_dim)
]
dec_lays = [
tf.keras.layers.Dense(units=np.product(c), activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=c),
tf.keras.layers.Conv2DTranspose(filters=intermediate_dim, kernel_size=kernel_size, strides=(stride, stride), padding="SAME", activation='relu'),
tf.keras.layers.Conv2DTranspose(filters=intermediate_dim, kernel_size=kernel_size, strides=(stride, stride), padding="SAME", activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=kernel_size, strides=(1, 1), padding="SAME")
]
###Output
_____no_output_____
###Markdown
Model definition
###Code
model_name = dataset_name+'_Conv_Pre_Upsampling_coloring_Grayscale'
#windows
#experiments_dir='..' + sep_local + '..' + sep_local +'..' + sep_local + '..' + sep_local + '..'+sep_local+'experiments'+sep_local + model_name
#linux
experiments_dir=os.getcwd()+ sep_local +'experiments'+sep_local + model_name
variables_params = \
[
{
'name': 'inference', #'upsampler',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': enc_lays
}
,
{
'name': 'generative', #'constructive',
'inputs_shape':latents_dim,
'outputs_shape':image_size,
'layers':dec_lays
}
]
from os.path import abspath
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
absolute = abspath(_restore)
print("Restore_dir",absolute)
absolute = abspath(experiments_dir)
print("Recording_dir",absolute)
print("Current working dir",os.getcwd())
from training.autoencoding_basic.transformative.AE import autoencoder as AE
ae = AE(
name=model_name,
latents_dim=latents_dim,
batch_size=batch_size,
variables_params=variables_params,
filepath=None,#to restore trained model, set filepath=_restore
episode_len=1
)
image_size, inputs_shape
#ae.compile(metrics=None)
#ae.compile(metrics=create_metrics())
ae.compile()
###Output
_____no_output_____
###Markdown
Callbacks
###Code
# added for linux warning suppression
import logging
mpl_logger = logging.getLogger('matplotlib')
mpl_logger.setLevel(logging.WARNING)
from training.callbacks.trace_image_reconstruction import trace_reconstruction
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=1e-12,
patience=12,
verbose=1,
restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, model_name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
absolute = abspath(csv_dir)
print("Csv_dir",absolute)
image_reconstuction_dir = os.path.join(experiments_dir, 'image_reconstuction_dir')
create_if_not_exist(image_reconstuction_dir)
absolute = abspath(image_reconstuction_dir)
print("image_reconstuction_dir",absolute)
image = put_text_fn(shrink_fn(np.asarray(trace_image)).numpy()/255.0)[0]
img_reconst = trace_reconstruction(filepath=image_reconstuction_dir, image=image, gen_freq=5)
###Output
_____no_output_____
###Markdown
Model Training
###Code
ae.fit(
x=train_ds,
input_kw=None,
steps_per_epoch=int(1e4),
epochs=int(1e6),
verbose=2,
callbacks=[ es, ms, csv_log, img_reconst],
workers=-1,
use_multiprocessing=True,
validation_data=test_ds,
validation_steps=int(1e4)
)
###Output
_____no_output_____ |
graph_nets/demos_tf2/deep_sets.ipynb | ###Markdown
**Copyright 2019 The Sonnet Authors. All Rights Reserved.**Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.--- IntroductionThis tutorial assumes you have already completed (and understood!) the Sonnet 2 "Hello, world!" example (MLP on MNIST).In this tutorial, we're going to scale things up with a bigger model and bigger dataset, and we're going to distribute the computation across multiple devices. Preamble
###Code
import sys
assert sys.version_info >= (3, 6), "Sonnet 2 requires Python >=3.6"
!pip install dm-sonnet tqdm
import sonnet as snt
import tensorflow as tf
import tensorflow_datasets as tfds
print("TensorFlow version: {}".format(tf.__version__))
print(" Sonnet version: {}".format(snt.__version__))
###Output
TensorFlow version: 2.2.0
Sonnet version: 2.0.0
###Markdown
Finally lets take a quick look at the GPUs we have available:
###Code
!grep Model: /proc/driver/nvidia/gpus/*/information | awk '{$1="";print$0}'
###Output
Tesla P100-PCIE-16GB
###Markdown
Distribution strategyWe need a strategy to distribute our computation across several devices. Since Google Colab only provides a single GPU we'll split it into four virtual GPUs:
###Code
physical_gpus = tf.config.experimental.list_physical_devices("GPU")
physical_gpus
tf.config.experimental.set_virtual_device_configuration(
physical_gpus[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2000)] * 4
)
gpus = tf.config.experimental.list_logical_devices("GPU")
gpus
###Output
_____no_output_____
###Markdown
When using Sonnet optimizers, we must use either `Replicator` or `TpuReplicator` from `snt.distribute`, or we can use `tf.distribute.OneDeviceStrategy`. `Replicator` is equivalent to `MirroredStrategy` and `TpuReplicator` is equivalent to `TPUStrategy`.
###Code
strategy = snt.distribute.Replicator(
["/device:GPU:{}".format(i) for i in range(4)],
tf.distribute.ReductionToOneDevice("GPU:0"))
###Output
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
###Markdown
DatasetBasically the same as the MNIST example, but this time we're using CIFAR-10. CIFAR-10 contains 32x32 pixel color images in 10 different classes (airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks).
###Code
# NOTE: This is the batch size across all GPUs.
batch_size = 100 * 4
def process_batch(images, labels):
images = tf.cast(images, dtype=tf.float32)
images = ((images / 255.) - .5) * 2.
return images, labels
def cifar10(split):
dataset = tfds.load("cifar10", split=split, as_supervised=True)
dataset = dataset.map(process_batch)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
dataset = dataset.cache()
return dataset
cifar10_train = cifar10("train").shuffle(10)
cifar10_test = cifar10("test")
###Output
[1mDownloading and preparing dataset cifar10/3.0.0 (download: 162.17 MiB, generated: Unknown size, total: 162.17 MiB) to /root/tensorflow_datasets/cifar10/3.0.0...[0m
###Markdown
Model & OptimizerConveniently, there is a pre-built model in `snt.nets` designed specifically for this dataset.We must build our model and optimizer within the strategy scope, to ensure that any variables created are distributed correctly. Alternatively, we could enter the scope for the entire program using `tf.distribute.experimental_set_strategy`.
###Code
learning_rate = 0.1
with strategy.scope():
model = snt.nets.Cifar10ConvNet()
optimizer = snt.optimizers.Momentum(learning_rate, 0.9)
###Output
_____no_output_____
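###Markdown
As noted above, an alternative to building everything inside `strategy.scope()` is to set the strategy for the whole program. A minimal sketch (left commented out, since this notebook already uses the scope):
###Code
# Alternative (sketch): set the strategy globally instead of entering the scope.
# tf.distribute.experimental_set_strategy(strategy)
# model = snt.nets.Cifar10ConvNet()
# optimizer = snt.optimizers.Momentum(learning_rate, 0.9)
###Output
_____no_output_____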
###Markdown
Training the modelThe Sonnet optimizers are designed to be as clean and simple as possible. They do not contain any code to deal with distributed execution. It therefore requires a few additional lines of code.We must aggregate the gradients calculated on the different devices. This can be done using `ReplicaContext.all_reduce`.Note that when using `Replicator` / `TpuReplicator` it is the user's responsibility to ensure that the values remain identical in all replicas.
###Code
def step(images, labels):
"""Performs a single training step, returning the cross-entropy loss."""
with tf.GradientTape() as tape:
logits = model(images, is_training=True)["logits"]
loss = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
logits=logits))
grads = tape.gradient(loss, model.trainable_variables)
# Aggregate the gradients from the full batch.
replica_ctx = tf.distribute.get_replica_context()
grads = replica_ctx.all_reduce("mean", grads)
optimizer.apply(grads, model.trainable_variables)
return loss
@tf.function
def train_step(images, labels):
per_replica_loss = strategy.run(step, args=(images, labels))
return strategy.reduce("sum", per_replica_loss, axis=None)
def train_epoch(dataset):
"""Performs one epoch of training, returning the mean cross-entropy loss."""
total_loss = 0.0
num_batches = 0
# Loop over the entire training set.
for images, labels in dataset:
total_loss += train_step(images, labels).numpy()
num_batches += 1
return total_loss / num_batches
cifar10_train_dist = strategy.experimental_distribute_dataset(cifar10_train)
for epoch in range(20):
print("Training epoch", epoch, "...", end=" ")
print("loss :=", train_epoch(cifar10_train_dist))
###Output
Training epoch 0 ... WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
###Markdown
Evaluating the modelNote the use of the `axis` parameter with `strategy.reduce` to reduce across the batch dimension.
###Code
num_cifar10_test_examples = 10000
def is_predicted(images, labels):
logits = model(images, is_training=False)["logits"]
# The reduction over the batch happens in `strategy.reduce`, below.
return tf.cast(tf.equal(labels, tf.argmax(logits, axis=1)), dtype=tf.int32)
cifar10_test_dist = strategy.experimental_distribute_dataset(cifar10_test)
@tf.function
def evaluate():
"""Returns the top-1 accuracy over the entire test set."""
total_correct = 0
for images, labels in cifar10_test_dist:
per_replica_correct = strategy.run(is_predicted, args=(images, labels))
total_correct += strategy.reduce("sum", per_replica_correct, axis=0)
return tf.cast(total_correct, tf.float32) / num_cifar10_test_examples
print("Testing...", end=" ")
print("top-1 accuracy =", evaluate().numpy())
###Output
Testing... INFO:tensorflow:Reduce to GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
|
TransformerlessPowerSupply.ipynb | ###Markdown
TRANSFORMERLESS POWER SUPPLY
###Code
from IPython.display import display_latex
import math
from decimal import *
# Usage: display_full_latex('u_x')
def display_full_latex(idx):
if(isinstance(idx, str)):
eqn = '\\[' + idx + '\\]'
display_latex(eqn, raw=True)
else:
eqn = '\\[' + latex(idx) + '\\]'
display_latex(eqn, raw=True)
return
display_full_latex('Z = \\frac{ V_{rms} - V_{zener} }{I}')
display_full_latex('X_c = \\sqrt(Z^2 - R^2)')
display_full_latex('C = \\frac{1}{2\\pi fX_c}')
display_full_latex('V_{avg} = 0.637 Vpk')
display_full_latex('I = \\frac{V_{avg} - V_{zener}}{Z}')
###Output
_____no_output_____
###Markdown
Where Vref is the required input voltage at the input node of the rectifier, and Iref is the desired current. Note that about 1.4 V (two diode drops) should be accounted for if the diodes are non-ideal.
###Code
V = 230
f = 50
R = 100
Vzener = 5.1
I = 0.1
Vpk = V * math.sqrt(2)
Vavg = 0.637 * Vpk
Z = (V - Vzener) / I
Xc = math.sqrt(Z*Z - R*R)
C = 1 / (2 * math.pi * f * Xc)
P = I * Vzener
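# Sketch of the non-ideal case noted above: subtract ~1.4 V for the two diode drops.
# Z_nonideal = (V - Vzener - 1.4) / I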
print('Vpk = ', Vpk)
print('Z = ', Z)
print('Xc = ', Xc)
print('C = ', C)
print('P = ', P)
#print('C = ', Decimal(C).to_eng_string())
print(0.1 * 100000)
###Output
10000.0
|
LA04_Lineare_Algebra_Lineare_Gleichungssysteme.ipynb | ###Markdown
LA04 Linear Algebra - Systems of Linear Equations
###Code
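# The notebook body is empty; a minimal sketch on the topic named in the title
# (systems of linear equations), assuming NumPy is acceptable here:
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)  # solves A @ x = b
print(x)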
###Output
_____no_output_____ |
NLP/NLP - IMDB.ipynb | ###Markdown
IMBD Dataset
###Code
from keras.callbacks import TensorBoard
from datetime import datetime
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")  # forward slashes keep the string literal valid and match the %tensorboard logdir below
tensorboard_callback = TensorBoard(log_dir=logdir)
%load_ext tensorboard
%tensorboard --logdir logs/scalars
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 15000
maxlen = 500
batch_size = 32
print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(
num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')
print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)
from keras import Sequential
from keras.layers import LSTM, Embedding, Dense
from keras.optimizers import RMSprop
# import tensorflow as tf
# strategy = tf.distribute.MirroredStrategy()
# with strategy.scope():
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32, dropout=0.0, recurrent_dropout=0.7, return_sequences=True))
model.add(LSTM(32, dropout=0.0, recurrent_dropout=0.3, return_sequences=True))
model.add(LSTM(32, dropout=0.0, recurrent_dropout=0.0))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['acc'])
model.summary()
history = model.fit(
input_train,
y_train,
epochs=10,
batch_size=1024,
validation_split=0.2,
callbacks=[tensorboard_callback]
)
# from keras import Sequential
# from keras.layers import LSTM, Embedding, Dense
# from keras.utils import multi_gpu_model
# model = Sequential()
# parallel_model = multi_gpu_model(model, gpus=2)
# parallel_model.add(Embedding(max_features, 32))
# parallel_model.add(LSTM(32))
# parallel_model.add(Dense(1, activation='sigmoid'))
# parallel_model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
# history = parallel_model.fit(input_train, y_train, epochs=10, batch_size=128, validation_split=0.2)
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____ |
notebooks/3. Check best of two classifier with test set [utterance level][180t].ipynb | ###Markdown
Test "best of two" classifier This notebook test a classifier that operates in two layers:- First we use a SVM classifier to label utterances with high degree of certainty.- Afterwards we use heuristics to complete the labeling
###Code
import os
import sys
import pandas as pd
import numpy as np
import random
import pickle
import matplotlib.pyplot as plt
root_path = os.path.dirname(os.path.abspath(os.getcwd()))
sys.path.append(root_path)
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from src import phase_classification as pc
data_path = os.path.join(root_path,'data')
tables_path = os.path.join(data_path,'tables')
results_path = os.path.join(root_path,'results')
output_path =os.path.join(results_path,'tables')
import importlib
importlib.reload(pc)
WITH_STEMMING = True
#REMOVE_STOPWORDS = True
SEED = 10
NUM_TOPICS = 60
random.seed(SEED)
CLASS_W = False
test_i = '[test1]'
file_name = test_i+'IBL_topic_distribution_by_utterance_before_after_{}_{}.xlsx'.format(WITH_STEMMING,NUM_TOPICS)
df_data = pd.read_excel(os.path.join(tables_path,'test',file_name))
the_keys = list(set(df_data['phase']))
total_samples = 0
class_samples = {}
for key in the_keys:
n = list(df_data.phase.values).count(key)
#print("key {}, total {}".format(key,n))
total_samples += n
class_samples[key] = n
print(total_samples)
for key in the_keys:
print("key {}, samples: {}, prop: {}".format(key,class_samples[key],round(class_samples[key]*1.0/total_samples,2)))
filter_rows = list(range(180))+[187,188]
row_label = 180
dfs_all,_ = pc.split_df_discussions(df_data,.0,SEED)
X_all,y_all_1 = pc.get_joined_data_from_df(dfs_all,filter_rows,row_label)
len(y_all_1)
t = 0.55
name_classifier = 'classifier_svm_before_after_best_of_two_cw_{}.pickle'.format(CLASS_W)
output_first_layer_1 = pc.first_layer_classifier(X_all,t,name_classifier)
comparison = list(zip(output_first_layer_1,y_all_1))
df_data['first_layer'] = output_first_layer_1
second_layer_1 = pc.second_layer_classifier_max_border(X_all,df_data,name_classifier)
df_data['second_layer'] = second_layer_1
df_data.to_excel(os.path.join(output_path,'[second_layer]'+file_name))
second_layer_1.count(-1)
len(second_layer_1)
third_layer_1 = [v if v>0 else 5 for v in second_layer_1]
labels = ["Phase {}".format(i) for i in range(1,6)]
df = pd.DataFrame(confusion_matrix(y_all_1, third_layer_1),columns=["Predicted {}".format(i) for i in labels])
df.index = labels
print(classification_report(y_all_1, third_layer_1))
df
###Output
precision recall f1-score support
1 0.33 0.25 0.29 55
2 0.19 0.78 0.31 27
3 0.00 0.00 0.00 46
4 0.04 0.14 0.07 7
5 0.50 0.07 0.12 46
micro avg 0.22 0.22 0.22 181
macro avg 0.21 0.25 0.16 181
weighted avg 0.26 0.22 0.17 181
###Markdown
Test 2
###Code
test_i = '[test2]'
file_name = test_i+'IBL_topic_distribution_by_utterance_before_after_{}_{}.xlsx'.format(WITH_STEMMING,NUM_TOPICS)
df_data = pd.read_excel(os.path.join(tables_path,'test',file_name))
the_keys = list(set(df_data['phase']))
total_samples = 0
class_samples = {}
for key in the_keys:
n = list(df_data.phase.values).count(key)
#print("key {}, total {}".format(key,n))
total_samples += n
class_samples[key] = n
print(total_samples)
for key in the_keys:
print("key {}, samples: {}, prop: {}".format(key,class_samples[key],round(class_samples[key]*1.0/total_samples,2)))
dfs_all,_ = pc.split_df_discussions(df_data,.0,SEED)
X_all,y_all_2 = pc.get_joined_data_from_df(dfs_all,filter_rows,row_label)
output_first_layer_2 = pc.first_layer_classifier(X_all,t,name_classifier)
comparison = list(zip(output_first_layer_2,y_all_2))
df_data['first_layer'] = output_first_layer_2
second_layer_2 = pc.second_layer_classifier_max_border(X_all,df_data,name_classifier)
df_data['second_layer'] = second_layer_2
df_data.to_excel(os.path.join(output_path,'[second_layer]'+file_name))
third_layer_2 = [v if v>0 else 5 for v in second_layer_2]
second_layer_2.count(-1)
labels = ["Phase {}".format(i) for i in range(1,6)]
df = pd.DataFrame(confusion_matrix(y_all_2, third_layer_2),columns=["Predicted {}".format(i) for i in labels])
df.index = labels
print(classification_report(y_all_2, third_layer_2))
df
y_all = y_all_1+y_all_2
pred = third_layer_1 + third_layer_2
df = pd.DataFrame(confusion_matrix(y_all, pred),columns=["Predicted {}".format(i) for i in labels])
df.index = labels
print(classification_report(y_all, pred))
df
print("Accuracy {0:.3f}".format(np.sum(confusion_matrix(y_all, pred).diagonal())/len(y_all)))
bs = [pc.unit_vector(x) for x in y_all]
y_pred = [pc.unit_vector(x) for x in pred]
np.sqrt(np.sum([np.square(y_pred[i]-bs[i]) for i in range(len(y_all))])/(len(y_all)*2))
###Output
Accuracy 0.214
|
module4-make-features/LS_DS_124_Make_features.ipynb | ###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data[Source](https://www.lendingclub.com/info/download-data.action)
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!ls
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
!tail LoanStats_2018Q4.csv
###Output
"","","5600","5600","5600"," 36 months"," 13.56%","190.21","C","C1","","n/a","RENT","15600","Not Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","836xx","ID","15.31","0","Aug-2012","0","","97","9","1","5996","34.5%","11","w","5083.61","5083.61","750.29","750.29","516.39","233.90","0.0","0.0","0.0","Feb-2019","190.21","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","5996","0","0","0","1","20","0","","0","2","3017","35","17400","1","0","0","3","750","4689","45.5","0","0","20","73","13","13","0","13","","20","","0","3","5","4","4","1","9","10","5","9","0","0","0","0","100","25","1","0","17400","5996","8600","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","23000","23000","23000"," 36 months"," 15.02%","797.53","C","C3","Tax Consultant","10+ years","MORTGAGE","75000","Source Verified","Oct-2018","Charged Off","n","","","debt_consolidation","Debt consolidation","352xx","AL","20.95","1","Aug-1985","2","22","","12","0","22465","43.6%","28","w","0.00","0.00","1547.08","1547.08","1025.67","521.41","0.0","0.0","0.0","Dec-2018","797.53","","Nov-2018","0","","1","Individual","","","","0","0","259658","4","2","3","3","6","18149","86","4","6","12843","56","51500","2","2","5","11","21638","26321","44.1","0","0","12","397","4","4","6","5","22","4","22","0","4","5","7","14","3","9","19","5","12","0","0","0","7","96.4","14.3","0","0","296500","40614","47100","21000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 15.02%","346.76","C","C3","security guard","5 years","MORTGAGE","38000","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","443xx","OH","13.16","3","Jul-1982","0","6","","11","0","5634","37.1%","16","w","9096.85","9096.85","1378.7","1378.70","903.15","475.55","0.0","0.0","0.0","Feb-2019","346.76","Mar-2019","Feb-2019","0","","1","Individual","","","","0","155","77424","0","1","0","0","34","200","10","1","1","1866","42","15200","2","0","0","2","7039","4537","50.1","0","0","34","434","11","11","3","11","6","17","6","0","3","5","5","6","1","8","11","5","11","0","0","0","1","73.3","40","0","0","91403","9323","9100","2000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 13.56%","169.83","C","C1","Payoff Clerk","10+ years","MORTGAGE","35360","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","381xx","TN","11.3","1","Jun-2006","0","21","","9","0","2597","27.3%","15","f","4538.94","4538.94","675.55","675.55","461.06","214.49","0.0","0.0","0.0","Feb-2019","169.83","Mar-2019","Feb-2019","0","","1","Individual","","","","0","1413","69785","0","2","0","1","16","2379","40","3","4","1826","32","9500","0","0","1","5","8723","1174","60.9","0","0","147","85","9","9","2","10","21","9","21","0","1","3","2","2","6","6","7","3","9","0","0","0","3","92.9","50","0","0","93908","4976","3000","6028","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","9750"," 36 months"," 11.06%","327.68","B","B3","","n/a","RENT","44400","Source Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","980xx","WA","11.78","0","Oct-2008","2","40","","15","0","6269","13.1%","25","f","9044.84","8818.72","1295.36","1262.98","955.16","340.20","0.0","0.0","0.0","Feb-2019","327.68","Mar-2019","Feb-2019","0","53","1","Individual","","","","0","520","16440","3","1","1","1","2","10171","100","2","5","404","28","47700","0","3","5","6","1265","20037","2.3","0","0","61","119","1","1","0","1","","1","40","1","2","4","6","8","3","14","22","4","15","0","0","0","3","92","0","0","0","57871","16440","20500","10171","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 16.91%","356.08","C","C5","Key Accounts Manager","2 years","RENT","80000","Not Verified","Oct-2018","Current","n","","","other","Other","021xx","MA","17.72","1","Sep-2006","0","14","","17","0","1942","30.8%","31","w","9120.98","9120.98","1414.93","1414.93","879.02","535.91","0.0","0.0","0.0","Feb-2019","356.08","Mar-2019","Feb-2019","0","25","1","Individual","","","","0","0","59194","0","15","1","1","12","57252","85","0","0","1942","80","6300","0","5","0","1","3482","2058","48.5","0","0","144","142","40","12","0","131","30","","30","3","1","1","1","5","22","2","9","1","17","0","0","0","1","74.2","0","0","0","73669","59194","4000","67369","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
Total amount funded in policy code 1: 2050909275
Total amount funded in policy code 2: 820109297
###Markdown
Load LendingClub datapandas documentation- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.htmlavailable-options)
###Code
import pandas as pd
pd.options.display.max_columns = 200
pd.options.display.max_rows = 200
loans = pd.read_csv('LoanStats_2018Q4.csv', skiprows=1, skipfooter=2, engine='python')  # skipfooter needs the python engine (avoids a ParserWarning)
loans.head()
#lets drop the columns containing 100% null values, to shrink the df
loans = loans.dropna(axis='columns', how='all')
###Output
_____no_output_____
###Markdown
Work with strings For machine learning, we usually want to replace strings with numbers.We can get info about which columns have a datatype of "object" (strings)
###Code
loans.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Convert `int_rate`Define a function to remove percent signs from strings and convert to floats
###Code
float('13.56%'.strip('%'))
#manual way:
#loans['int_rate'] = loans['int_rate'].str.strip('%').astype(float)
#loans['int_rate'] = loans['int_rate'] / 100
###Output
_____no_output_____
###Markdown
Apply the function to the `int_rate` column
###Code
#function way:
def remove_percent(string):
return float(string.strip('%'))
loans['int_rate'] = loans['int_rate'].apply(remove_percent)
loans.head()
###Output
_____no_output_____
###Markdown
Clean `emp_title`Look at top 20 titles
###Code
loans.emp_title.value_counts().head(20)
###Output
_____no_output_____
###Markdown
How often is `emp_title` null?
###Code
loans.emp_title.isnull().sum()
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values
###Code
#Capitalize
#strip spaces
#Replace NaN w/ 'unknown'
import numpy as np
def clean_title(x):
if isinstance(x, str):
return x.strip().title()
else:
return 'Unknown'
loans['emp_title'] = loans['emp_title'].apply(clean_title)
loans['emp_title'].head(10)
loans['emp_title'].value_counts()
###Output
_____no_output_____
###Markdown
Create `emp_title_manager`pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
loans['emp_title'].str.contains('Manager').head(10)
loans['emp_title_manager'] = loans['emp_title'].str.contains('Manager')
###Output
_____no_output_____
###Markdown
Work with dates pandas documentation- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.htmltime-date-components) "You can access these properties via the `.dt` accessor"
###Code
loans['issue_d'].head()
loans['issue_d'].describe()
loans['issue_d'] = pd.to_datetime(loans['issue_d'], infer_datetime_format=True)
loans['issue_d'].describe()
loans['issue_year'] = loans.issue_d.dt.year
loans['issue_month'] = loans.issue_d.dt.month
loans['earliest_cr_line'] = pd.to_datetime(loans['earliest_cr_line'], infer_datetime_format=True)
loans['days_from_earliest_credit_to_issue'] = (loans['issue_d'] - loans['earliest_cr_line']).dt.days
loans['days_from_earliest_credit_to_issue'].describe()
[col for col in loans if col.endswith('_d')]
for col in ['last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
loans[col] = pd.to_datetime(loans[col], infer_datetime_format=True)
loans.describe(include='datetime')
###Output
_____no_output_____
###Markdown
ASSIGNMENT- Replicate the lesson code.- Convert the `term` column from string to integer.- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
'''
1) Convert the term column from string to integer.
'''
def remove_months(string):
return int(string.strip('months'))
loans['term'] = loans['term'].apply(remove_months)
loans['term'].value_counts()
'''
2)Make a column named loan_status_is_great. It should contain the integer 1
if loan_status is "Current" or "Fully Paid." Else it should contain the
integer 0
'''
def great_loan_status(string):
if string == 'Current' or string == 'Fully Paid':
return int(1)
else:
return int(0)
loans['loan_status_is_great'] = loans['loan_status'].apply(great_loan_status)
loans['loan_status_is_great'].value_counts()
'''
3) Make last_pymnt_d_month and last_pymnt_d_year columns.
'''
#column w/ source of the data we need: loans['last_pymnt_d']
loans['last_pymnt_d_month'] = loans['last_pymnt_d'].dt.month
loans['last_pymnt_d_year'] = loans['last_pymnt_d'].dt.year
#quick inspection to make sure the columns were created and contain correct values:
loans[['last_pymnt_d_month', 'last_pymnt_d_year', 'last_pymnt_d']].head()
###Output
_____no_output_____
###Markdown
STRETCH OPTIONSYou can do more with the LendingClub or Instacart datasets.LendingClub options:- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20. - Take initiatve and work on your own ideas!Instacart options:- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)- Take initiative and work on your own ideas!
###Code
'''LendingClub options:
1) There's one other column in the dataframe with percent signs. Remove them and
convert to floats. You'll need to handle missing values.
'''
#not quite sure how to search the whole df w/ str.contains for the %, so i just
#looked at head and found the column, meh
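#a sketch of one way to search all object columns for a percent sign:
percent_cols = [col for col in loans.select_dtypes('object').columns
                if loans[col].astype(str).str.contains('%', na=False).any()]
print(percent_cols)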
#handle the Na values:
'''After digging in to the dataframe and what this column represents, it should
equal the value in the ['revol_balance'] column divided by the [total_rev_hi_lim]
column..'''
loans['total_rev_hi_lim'].isnull().sum()
loans['revol_bal'].isnull().sum()
'''Since there are no null values for these columns, we can skip reformattting the
string values in revol_util and just recast the entire columns w vectorized
divsion'''
loans['revol_util'] = round((loans['revol_bal'] / loans['total_rev_hi_lim'])*100, 1)
loans['revol_util'].describe()
#check to verify no NaNs:
loans['revol_util'].isnull().sum()
#hmmmm, lets inspect the relevant columns to figure out why a few Nans remain:
loans[loans['revol_util'].isnull()== True][['revol_util', 'revol_bal', 'total_rev_hi_lim']].head()
loans[loans['revol_util'].isnull()== True][['revol_util', 'revol_bal', 'total_rev_hi_lim']].describe()
#Since all the values are 0, we can just fillna with zero:
loans['revol_util'] = loans['revol_util'].fillna(0)
#verify:
loans['revol_util'].isnull().sum()
'''2) Modify the emp_title column to replace titles with 'Other' if the title is
not in the top 20'''
#quick peek at the top 20 job titles:
loans['emp_title'].value_counts().head(20)
#Lets ignore the unknown job titles, and lets grab next top 20 value counts and
#put them in a list
top20 = list(loans['emp_title'].value_counts().head(21).index)
top20.pop(0)
top20
#function to edit the emp_title column:
def clean_emp(job):
if job in top20:
return job
else:
return 'Other'
#Apply the function to emp_title column:
loans['emp_title'] = loans['emp_title'].apply(clean_emp)
#Verify:
loans['emp_title'].value_counts()
###Output
_____no_output_____
###Markdown
You can uncomment and run the cells below to re-download and extract the Instacart data
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q3.csv.zip
!unzip LoanStats_2018Q3.csv.zip
!head LoanStats_2018Q3.csv
###Output
_____no_output_____
###Markdown
Load LendingClub data
###Code
###Output
_____no_output_____
###Markdown
Work with strings
###Code
###Output
_____no_output_____
###Markdown
Work with dates
###Code
###Output
_____no_output_____
###Markdown
Load Instacart dataLet's return to the dataset of [3 Million Instacart Orders](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) If necessary, uncomment and run the cells below to re-download and extract the data
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
###Output
_____no_output_____
###Markdown
Here's a list of the six CSV filenames
###Code
%cd instacart_2017_05_01
!ls -lh
###Output
_____no_output_____
###Markdown
Load the CSV files with pandas
###Code
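# A sketch, assuming the six standard Instacart CSV filenames in the extracted
# folder (these names are an assumption, not taken from this notebook's output):
import pandas as pd

aisles = pd.read_csv('aisles.csv')
departments = pd.read_csv('departments.csv')
order_products_prior = pd.read_csv('order_products__prior.csv')
order_products_train = pd.read_csv('order_products__train.csv')
orders = pd.read_csv('orders.csv')
products = pd.read_csv('products.csv')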
###Output
_____no_output_____
###Markdown
Do feature engineering
###Code
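# A sketch of one possible engineered feature, assuming the frames loaded in the
# cell above: each user's overall reorder rate across their prior orders.
prior = order_products_prior.merge(orders[['order_id', 'user_id']], on='order_id')
user_reorder_rate = (prior.groupby('user_id')['reordered']
                          .mean()
                          .rename('user_reorder_rate')
                          .reset_index())
user_reorder_rate.head()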
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data[Source](https://www.lendingclub.com/info/download-data.action)
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
!tail LoanStats_2018Q4.csv
###Output
"","","5600","5600","5600"," 36 months"," 13.56%","190.21","C","C1","","n/a","RENT","15600","Not Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","836xx","ID","15.31","0","Aug-2012","0","","97","9","1","5996","34.5%","11","w","5083.61","5083.61","750.29","750.29","516.39","233.90","0.0","0.0","0.0","Feb-2019","190.21","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","5996","0","0","0","1","20","0","","0","2","3017","35","17400","1","0","0","3","750","4689","45.5","0","0","20","73","13","13","0","13","","20","","0","3","5","4","4","1","9","10","5","9","0","0","0","0","100","25","1","0","17400","5996","8600","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","23000","23000","23000"," 36 months"," 15.02%","797.53","C","C3","Tax Consultant","10+ years","MORTGAGE","75000","Source Verified","Oct-2018","Charged Off","n","","","debt_consolidation","Debt consolidation","352xx","AL","20.95","1","Aug-1985","2","22","","12","0","22465","43.6%","28","w","0.00","0.00","1547.08","1547.08","1025.67","521.41","0.0","0.0","0.0","Dec-2018","797.53","","Nov-2018","0","","1","Individual","","","","0","0","259658","4","2","3","3","6","18149","86","4","6","12843","56","51500","2","2","5","11","21638","26321","44.1","0","0","12","397","4","4","6","5","22","4","22","0","4","5","7","14","3","9","19","5","12","0","0","0","7","96.4","14.3","0","0","296500","40614","47100","21000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 15.02%","346.76","C","C3","security guard","5 years","MORTGAGE","38000","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","443xx","OH","13.16","3","Jul-1982","0","6","","11","0","5634","37.1%","16","w","9096.85","9096.85","1378.7","1378.70","903.15","475.55","0.0","0.0","0.0","Feb-2019","346.76","Mar-2019","Feb-2019","0","","1","Individual","","","","0","155","77424","0","1","0","0","34","200","10","1","1","1866","42","15200","2","0","0","2","7039","4537","50.1","0","0","34","434","11","11","3","11","6","17","6","0","3","5","5","6","1","8","11","5","11","0","0","0","1","73.3","40","0","0","91403","9323","9100","2000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 13.56%","169.83","C","C1","Payoff Clerk","10+ years","MORTGAGE","35360","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","381xx","TN","11.3","1","Jun-2006","0","21","","9","0","2597","27.3%","15","f","4538.94","4538.94","675.55","675.55","461.06","214.49","0.0","0.0","0.0","Feb-2019","169.83","Mar-2019","Feb-2019","0","","1","Individual","","","","0","1413","69785","0","2","0","1","16","2379","40","3","4","1826","32","9500","0","0","1","5","8723","1174","60.9","0","0","147","85","9","9","2","10","21","9","21","0","1","3","2","2","6","6","7","3","9","0","0","0","3","92.9","50","0","0","93908","4976","3000","6028","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","9750"," 36 months"," 11.06%","327.68","B","B3","","n/a","RENT","44400","Source Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","980xx","WA","11.78","0","Oct-2008","2","40","","15","0","6269","13.1%","25","f","9044.84","8818.72","1295.36","1262.98","955.16","340.20","0.0","0.0","0.0","Feb-2019","327.68","Mar-2019","Feb-2019","0","53","1","Individual","","","","0","520","16440","3","1","1","1","2","10171","100","2","5","404","28","47700","0","3","5","6","1265","20037","2.3","0","0","61","119","1","1","0","1","","1","40","1","2","4","6","8","3","14","22","4","15","0","0","0","3","92","0","0","0","57871","16440","20500","10171","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 16.91%","356.08","C","C5","Key Accounts Manager","2 years","RENT","80000","Not Verified","Oct-2018","Current","n","","","other","Other","021xx","MA","17.72","1","Sep-2006","0","14","","17","0","1942","30.8%","31","w","9120.98","9120.98","1414.93","1414.93","879.02","535.91","0.0","0.0","0.0","Feb-2019","356.08","Mar-2019","Feb-2019","0","25","1","Individual","","","","0","0","59194","0","15","1","1","12","57252","85","0","0","1942","80","6300","0","5","0","1","3482","2058","48.5","0","0","144","142","40","12","0","131","30","","30","3","1","1","1","5","22","2","9","1","17","0","0","0","1","74.2","0","0","0","73669","59194","4000","67369","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
Total amount funded in policy code 1: 2050909275
Total amount funded in policy code 2: 820109297
###Markdown
Load LendingClub datapandas documentation- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.htmlavailable-options)
###Code
import pandas as pd
import numpy as np
df = pd.read_csv('LoanStats_2018Q4.csv',skiprows=1, skipfooter=2, engine='python')
## engine='python' due to warning message when we did it. Because of skipfooter
df.shape
df.info()
# want to skip the first row in the beginning.. as you can see from initially looking at the shape and the info..
# the first line is treated as the column and the rest is it's rows
# this is what the 'skiprows' parameter does
# because we looked at the tail.. we know there are data points in the end we have to deal with as well
df.isnull().sum().sort_values()
## there are at least 2 nulls for everything.. that is weird.
## id is always null.. weird.
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
## options just makes sure to show data instead of seeing '...' in large datasets
df.head()
###Output
_____no_output_____
###Markdown
Work with strings For machine learning, we usually want to replace strings with numbers.We can get info about which columns have a datatype of "object" (strings)
###Code
df.describe(exclude='number') ## show which features are strings
## one can see that some columns that you would think are integers, are still treated as strings
## for certain reasons
###Output
_____no_output_____
###Markdown
Convert `int_rate` Directly manipulate the 'int_rate' column
###Code
type(float('13.56%'.strip('%')))
## strip percentage signs and change type to float
df['int_rate'] = df['int_rate'].str.strip('%').astype(float)
df['int_rate'].head()
###Output
_____no_output_____
###Markdown
Define a function to remove percent signs from strings and convert to floats
###Code
string = '13.56%'
def remove_percent(string):
    # handle values already converted to float by the cell above
    if isinstance(string, str):
        return float(string.strip('%'))
    return float(string)
remove_percent(string)
df['int_rate'] = df['int_rate'].apply(remove_percent)
df['int_rate'].head()
###Output
_____no_output_____
###Markdown
Apply the function to the `int_rate` column ^^ Clean `emp_title`Look at top 20 titles
###Code
df['emp_title'].value_counts().head(20)
## can see some trailing spaces.. some NaNs, Mispellings, casing differences, aggregate types of roles?
## need some cleanings
df['emp_title'].isnull().sum()
## What to do
## consistent casing, spacing, and replace NaN's with unkwown or missing
examples = ['owner', 'Supervisor ', ' Project Manager', np.nan]
def clean_title(x):
if isinstance(x, str):
return x.strip().title()
else:
return 'Unknown'
[clean_title(x) for x in examples]
## function that cleans data.. checked on small dataset first
df['emp_title'] = df['emp_title'].apply(clean_title)
df['emp_title'].head(10)
###Output
_____no_output_____
###Markdown
How often is `emp_title` null?
###Code
df['emp_title'].value_counts().head(20)
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values Create `emp_title_manager`pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
df['emp_title_manager'].value_counts(normalize=True)
df.groupby('emp_title_manager')['int_rate'].mean()
###Output
_____no_output_____
###Markdown
Work with dates pandas documentation- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.htmltime-date-components) "You can access these properties via the `.dt` accessor"
###Code
df['issue_d'].head().values
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].head().values
df['issue_d'].describe()
## .dt can help you get more specific outcomes from the dates! cool
## can add columns with this.. and analyze those
df['issue_d'].dt.month.head()
df['issue_month'] = df['issue_d'].dt.month
df['issue_month'].head()
## repeat example. .addition with dates
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'], infer_datetime_format=True)
df['earliest_cr_line'].head()
## time between credit line opened and issue date of first loan
## dt.days will make the date time objects into integers.. so you can deal with them
df['days_from_earliest_credit_to_issue'] = (df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].describe()
## lots of cool things can be dones with data.. this is incredible
## dealing with columns you want to do things to.. as a group
[col for col in df if col.endswith('_d')]
for col in ['issue_d', 'last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col], infer_datetime_format=True)
df.describe(include='datetime')
###Output
_____no_output_____
###Markdown
ASSIGNMENT- Replicate the lesson code.- Convert the `term` column from string to integer.- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns. ***convert 'term' column from string to integer***
###Code
# check the column to see current state
df['term'].head()
df['term'].isnull().sum()
## strip the ' months' from the data and change type to integer
df['term'] = df['term'].str.strip(' months').astype(int)
df['term'].head()
###Output
_____no_output_____
###Markdown
***Make column named 'loan_status_is_great'. It should contain the integer 1 if loan_status is "Current" or "Fully Paid." Else it should contain the integer 0.***
###Code
# check data in column
df['loan_status'].isnull().sum()
# check to see if data contains 'Current' OR 'Fully Paid'.
# contains returns Boolean.. so can change type to 'int' and it will
# return 1's and 0's
df['loan_status_is_great'] = df['loan_status'].str.contains('Current|Fully Paid').astype(int)
df.columns
df['loan_status_is_great'].sample(10)
###Output
_____no_output_____
###Markdown
***Make last_pymnt_d_month and last_pymnt_d_year columns.***
###Code
## 'last_pymnt_d_month' column
df['last_pymnt_d_month'] = df['last_pymnt_d'].dt.strftime('%b')
# 'last_pymnt_d_year' column
df['last_pymnt_d_year'] = df['last_pymnt_d'].dt.strftime('%Y')
#check to see if columns were added to the dataframe
df[['last_pymnt_d_month', 'last_pymnt_d_year', 'last_pymnt_d']].sample(20)
###Output
_____no_output_____
###Markdown
STRETCH OPTIONSYou can do more with the LendingClub or Instacart datasets.LendingClub options:- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20. - Take initiatve and work on your own ideas!Instacart options:- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)- Take initiative and work on your own ideas! You can uncomment and run the cells below to re-download and extract the Instacart data
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series - Replicate the lesson code.- Convert the `term` column from string to integer.- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns. Get LendingClub data
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q3.csv.zip
!unzip LoanStats_2018Q3.csv.zip
!head LoanStats_2018Q3.csv
###Output
_____no_output_____
###Markdown
Load LendingClub data
###Code
import pandas as pd
df = pd.read_csv('LoanStats_2018Q3.csv', skiprows=1, skipfooter=2)
df.shape
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.head(5)
###Output
_____no_output_____
###Markdown
Work with strings Assignment
###Code
df.term.head(5)
df.term = df.term.str.replace(' months', '')
df.term = df.term.astype(int)
df.term.head(5)
# 1 if loan_status is "Current" or "Fully Paid", else 0
df['loan_status_is_great'] = df['loan_status'].isin(['Current', 'Fully Paid']).astype(int)
df['last_pymnt_d'] = pd.to_datetime(df['last_pymnt_d'], infer_datetime_format=True)
df['last_pymnt_d_month'] = df['last_pymnt_d'].dt.month
df['last_pymnt_d_year'] = df['last_pymnt_d'].dt.year
df.last_pymnt_d_month.head(10)
df.last_pymnt_d_year.head(10)
###Output
_____no_output_____
###Markdown
Work with dates
###Code
###Output
_____no_output_____
###Markdown
Load Instacart dataLet's return to the dataset of [3 Million Instacart Orders](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) If necessary, uncomment and run the cells below to re-download and extract the data
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
###Output
_____no_output_____
###Markdown
Here's a list of the six CSV filenames
###Code
%cd instacart_2017_05_01
!ls -lh
###Output
_____no_output_____
###Markdown
Load the CSV files with pandas
###Code
###Output
_____no_output_____
###Markdown
Do feature engineering
###Code
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data[Source](https://www.lendingclub.com/info/download-data.action)
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
# These are linux commands
!head LoanStats_2018Q4.csv
!tail LoanStats_2018Q4.csv
###Output
"","","5600","5600","5600"," 36 months"," 13.56%","190.21","C","C1","","n/a","RENT","15600","Not Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","836xx","ID","15.31","0","Aug-2012","0","","97","9","1","5996","34.5%","11","w","5083.61","5083.61","750.29","750.29","516.39","233.90","0.0","0.0","0.0","Feb-2019","190.21","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","5996","0","0","0","1","20","0","","0","2","3017","35","17400","1","0","0","3","750","4689","45.5","0","0","20","73","13","13","0","13","","20","","0","3","5","4","4","1","9","10","5","9","0","0","0","0","100","25","1","0","17400","5996","8600","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","23000","23000","23000"," 36 months"," 15.02%","797.53","C","C3","Tax Consultant","10+ years","MORTGAGE","75000","Source Verified","Oct-2018","Charged Off","n","","","debt_consolidation","Debt consolidation","352xx","AL","20.95","1","Aug-1985","2","22","","12","0","22465","43.6%","28","w","0.00","0.00","1547.08","1547.08","1025.67","521.41","0.0","0.0","0.0","Dec-2018","797.53","","Nov-2018","0","","1","Individual","","","","0","0","259658","4","2","3","3","6","18149","86","4","6","12843","56","51500","2","2","5","11","21638","26321","44.1","0","0","12","397","4","4","6","5","22","4","22","0","4","5","7","14","3","9","19","5","12","0","0","0","7","96.4","14.3","0","0","296500","40614","47100","21000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 15.02%","346.76","C","C3","security guard","5 years","MORTGAGE","38000","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","443xx","OH","13.16","3","Jul-1982","0","6","","11","0","5634","37.1%","16","w","9096.85","9096.85","1378.7","1378.70","903.15","475.55","0.0","0.0","0.0","Feb-2019","346.76","Mar-2019","Feb-2019","0","","1","Individual","","","","0","155","77424","0","1","0","0","34","200","10","1","1","1866","42","15200","2","0","0","2","7039","4537","50.1","0","0","34","434","11","11","3","11","6","17","6","0","3","5","5","6","1","8","11","5","11","0","0","0","1","73.3","40","0","0","91403","9323","9100","2000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 13.56%","169.83","C","C1","Payoff Clerk","10+ years","MORTGAGE","35360","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","381xx","TN","11.3","1","Jun-2006","0","21","","9","0","2597","27.3%","15","f","4538.94","4538.94","675.55","675.55","461.06","214.49","0.0","0.0","0.0","Feb-2019","169.83","Mar-2019","Feb-2019","0","","1","Individual","","","","0","1413","69785","0","2","0","1","16","2379","40","3","4","1826","32","9500","0","0","1","5","8723","1174","60.9","0","0","147","85","9","9","2","10","21","9","21","0","1","3","2","2","6","6","7","3","9","0","0","0","3","92.9","50","0","0","93908","4976","3000","6028","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","9750"," 36 months"," 11.06%","327.68","B","B3","","n/a","RENT","44400","Source Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","980xx","WA","11.78","0","Oct-2008","2","40","","15","0","6269","13.1%","25","f","9044.84","8818.72","1295.36","1262.98","955.16","340.20","0.0","0.0","0.0","Feb-2019","327.68","Mar-2019","Feb-2019","0","53","1","Individual","","","","0","520","16440","3","1","1","1","2","10171","100","2","5","404","28","47700","0","3","5","6","1265","20037","2.3","0","0","61","119","1","1","0","1","","1","40","1","2","4","6","8","3","14","22","4","15","0","0","0","3","92","0","0","0","57871","16440","20500","10171","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 16.91%","356.08","C","C5","Key Accounts Manager","2 years","RENT","80000","Not Verified","Oct-2018","Current","n","","","other","Other","021xx","MA","17.72","1","Sep-2006","0","14","","17","0","1942","30.8%","31","w","9120.98","9120.98","1414.93","1414.93","879.02","535.91","0.0","0.0","0.0","Feb-2019","356.08","Mar-2019","Feb-2019","0","25","1","Individual","","","","0","0","59194","0","15","1","1","12","57252","85","0","0","1942","80","6300","0","5","0","1","3482","2058","48.5","0","0","144","142","40","12","0","131","30","","30","3","1","1","1","5","22","2","9","1","17","0","0","0","1","74.2","0","0","0","73669","59194","4000","67369","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
Total amount funded in policy code 1: 2050909275
Total amount funded in policy code 2: 820109297
###Markdown
Load LendingClub datapandas documentation- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.htmlavailable-options)
###Code
# Features are the input columns a model learns from; feature engineering creates or transforms them
import pandas as pd
df = pd.read_csv('LoanStats_2018Q4.csv', skiprows=1, skipfooter=2, engine='python')
df.info() # That weird column is throwing things off
df.columns.tolist() # see all columns
df.isnull().sum().sum()
# Show more columns and rows when displaying the DataFrame
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.head()
###Output
_____no_output_____
###Markdown
Work with strings For machine learning, we usually want to replace strings with numbers.We can get info about which columns have a datatype of "object" (strings)
###Code
df.describe(exclude='number')
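# Optional follow-up sketch: list the object (string) columns and how many
# distinct values each one has, to decide which columns to clean first.
df.select_dtypes(include='object').nunique().sort_values(ascending=False).head(10)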
###Output
_____no_output_____
###Markdown
Convert `int_rate`
###Code
'13.56%'.strip('%') # How you strip symbols from a str
type('13.56%'.strip('%')) # still a string
float('13.56%'.strip('%')) # wrapping the stripped string in float() finishes the conversion
# The vectorized one-liner: strip the % from every value in the column, then cast to float
df['int_rate'] = df['int_rate'].str.strip('%').astype(float)
#df['int_rate'].str.strip('%').astype(float).head() # same result, but not assigned back to the column
df['int_rate'].head() # the column is now permanently a float
#(df['int_rate'] / 100).head() # optional: express the rate as a fraction instead of a percentage
###Output
_____no_output_____
###Markdown
Define a function to remove percent signs from strings and convert to floats
###Code
string = '13.56%'
def remove_percent(string):
return float(string.strip('%'))
remove_percent(string)
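# A hedged variant (not part of the lesson): the same idea, but safe to apply
# to a column that may contain NaN, which is a float rather than a string.
import numpy as np

def remove_percent_safe(x):
    return float(x.strip('%')) if isinstance(x, str) else np.nan

remove_percent_safe('7.5%'), remove_percent_safe(np.nan)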
###Output
_____no_output_____
###Markdown
Apply the function to the `int_rate` column
###Code
# Both approaches (the one-liner above and .apply with remove_percent) give the same result
# Re-running this now raises an error because int_rate is already a float, so there is no '%' to strip
#df['int_rate'] = df['int_rate'].apply(remove_percent)
###Output
_____no_output_____
###Markdown
Clean `emp_title`Look at top 20 titles
###Code
# First 20 raw values of the column
df['emp_title'].head(20)
# Sometimes you may want to combine similar titles
df['emp_title'].value_counts().head(20) # the 20 most common titles, with their counts
###Output
_____no_output_____
###Markdown
How often is `emp_title` null?
###Code
df['emp_title'].isnull().sum()
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values
###Code
import numpy as np
# Goals: consistent capitalization, stripped leading/trailing spaces,
# and a placeholder for missing values (NaN) -- here NaN becomes 'Robot'
# The same pattern works for other dtypes and placeholder labels
examples = ['owner', 'Supervisor ',
' Project Manager', np.nan]
def clean_title(x):
if isinstance(x, str):
return x.strip().title()
else:
return 'Robot'
[clean_title(x) for x in examples]
# This is applying our previous function to all of the data in Emp Title
# Overwrite it & Assign back to that col
df['emp_title'] = df['emp_title'].apply(clean_title)
df['emp_title'].value_counts().head(20)
###Output
_____no_output_____
###Markdown
Create `emp_title_manager`pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
# Flag titles that contain 'Manager' and store the result as a new True/False column
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
# normalize=True shows each value as a proportion instead of a raw count
df['emp_title_manager'].value_counts(normalize=True)
df.groupby('emp_title_manager')['int_rate'].mean()
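# A small sketch extending the groupby above: report each group's size alongside
# its mean interest rate, so the comparison carries its sample sizes.
df.groupby('emp_title_manager')['int_rate'].agg(['mean', 'count'])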
df['emp_title'].value_counts().head()
# Fraction of missing values in each column, sorted from most to least missing
df.isnull().sum().sort_values(ascending=False) / len(df)
###Output
_____no_output_____
###Markdown
Work with dates pandas documentation- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.htmltime-date-components) "You can access these properties via the `.dt` accessor"
###Code
df['issue_d'].head().values
df['issue_d'].describe()
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].head().values
df['issue_d'].describe()
# Specify year and month and create new cols
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
# Random Values
df['issue_month'].sample(n=10).values
# Time elapsed
#df['earliest_cr_line'].head()
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],
infer_datetime_format=True)
# See how many days from ECL
df['earliest_cr_line'].head()
# Subtracting two datetime columns gives a Timedelta; .dt.days extracts the whole-day count
df['days_from_earliest_credit_to_issue'] = (df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].describe()
25171 / 365
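# A sketch of the same idea as a reusable column; the name
# 'years_of_credit_history' is just an illustrative choice, not from the lesson.
df['years_of_credit_history'] = df['days_from_earliest_credit_to_issue'] / 365.25
df['years_of_credit_history'].describe()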
# Find all columns whose names end with '_d' (they hold dates)
[col for col in df if col.endswith('_d')]
for col in ['issue_d', 'last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col], infer_datetime_format=True)
df.describe(include='datetime')
###Output
_____no_output_____
###Markdown
ASSIGNMENT- Replicate the lesson code.- Convert the `term` column from string to integer. (*DONE)- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0. (*DONE)- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.(*DONE) **Convert the 'term' column from string to integer**
###Code
# Strip the ' months' text, then cast to int to finish the string-to-integer conversion
df['term'] = df['term'].str.strip(' months').astype(int)
# It works!
df['term'].value_counts()
###Output
_____no_output_____
###Markdown
**Make a column named 'loan_status_is_great'. It should contain:* The integer 1 if 'loan_status' is 'Current' or 'Fully Paid'* Else, it should contain 0
###Code
df['loan_status'].value_counts(dropna=False)
# This chain of .replace() calls works, though a dictionary map or .isin() would be more concise
# (see the sketch after the value_counts check below)
df['loan_status_is_great'] = df['loan_status'].replace('Current', 1).replace('Fully Paid', 1).replace('Late (31-120 days)', 0).replace('In Grace Period', 0).replace('Late (16-30 days)', 0 ).replace('Charged Off', 0)
# It worked!
df['loan_status_is_great'].value_counts()
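# The sketch mentioned above: an .isin()-based alternative that should
# produce the same 0/1 flag as the chained .replace() calls.
df['loan_status_is_great'] = df['loan_status'].isin(['Current', 'Fully Paid']).astype(int)
df['loan_status_is_great'].value_counts()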
###Output
_____no_output_____
###Markdown
**Make 'last_pymnt_d_month' and 'last_pymnt_d_year' columns**
###Code
# Years Only
df['last_pymnt_d_year'] = df['last_pymnt_d'].dt.year
# Months Only
df['last_pymnt_d_month'] = df['last_pymnt_d'].dt.month
df['last_pymnt_d_year'].head()
df['last_pymnt_d_month'].head()
###Output
_____no_output_____
###Markdown
STRETCH OPTIONSYou can do more with the LendingClub or Instacart datasets.LendingClub options:- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20. - Take initiatve and work on your own ideas!Instacart options:- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)- Take initiative and work on your own ideas! You can uncomment and run the cells below to re-download and extract the Instacart data
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____
###Markdown
LendingClub
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Locate the only column left that contains % signs* Remove the %* Convert to floats* Handle Missing Values
###Code
# revol_util is the other column with percent signs (it could also be found by scanning the object columns for '%')
df['revol_util'].value_counts().head()
# Lets see how many Missing Values we are dealing with
df['revol_util'].isna().sum()
# I love this line of code
# In one shot, we removed the % and converted them to a float
df['revol_util'] = df['revol_util'].str.strip('%').astype(float)
# Fill the missing utilization values with 0 so the column stays numeric (a placeholder string would undo the float conversion)
df['revol_util'] = df['revol_util'].fillna(0)
# No more NaN
df['revol_util'].isnull().sum()
###Output
_____no_output_____
###Markdown
Modify the 'emp_title' column to replace titles with 'Other' if title is not in the top 20
###Code
df['emp_title'].value_counts().head(20)
# Keep the 21 most common titles (the top 20 plus the placeholder for missing titles)
top_counts = df['emp_title'].value_counts()[:21]
# reset_index turns the Series into a DataFrame whose 'index' column holds the titles
top_counts = top_counts.reset_index()
top_counts = top_counts['index'].values # .values works across pandas versions (.as_matrix has been removed)
top_counts
# frozenset gives fast membership tests for the function below
top_counts = frozenset(top_counts)
top_counts
# Create a function for this
def the_others(x):
if x in top_counts:
return x
else:
return 'Other'
df['emp_title'] = df['emp_title'].apply(the_others)
# Final
df['emp_title'].value_counts()
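# A pandas-native sketch of the same replacement: Series.where keeps titles
# that are in top_counts and fills everything else with 'Other'. Run here it
# is a no-op, because the column was already collapsed above.
df['emp_title'].where(df['emp_title'].isin(top_counts), 'Other').value_counts().head()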
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data[Source](https://www.lendingclub.com/info/download-data.action)
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
!tail LoanStats_2018Q4.csv
###Output
"","","5600","5600","5600"," 36 months"," 13.56%","190.21","C","C1","","n/a","RENT","15600","Not Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","836xx","ID","15.31","0","Aug-2012","0","","97","9","1","5996","34.5%","11","w","5083.61","5083.61","750.29","750.29","516.39","233.90","0.0","0.0","0.0","Feb-2019","190.21","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","5996","0","0","0","1","20","0","","0","2","3017","35","17400","1","0","0","3","750","4689","45.5","0","0","20","73","13","13","0","13","","20","","0","3","5","4","4","1","9","10","5","9","0","0","0","0","100","25","1","0","17400","5996","8600","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","23000","23000","23000"," 36 months"," 15.02%","797.53","C","C3","Tax Consultant","10+ years","MORTGAGE","75000","Source Verified","Oct-2018","Charged Off","n","","","debt_consolidation","Debt consolidation","352xx","AL","20.95","1","Aug-1985","2","22","","12","0","22465","43.6%","28","w","0.00","0.00","1547.08","1547.08","1025.67","521.41","0.0","0.0","0.0","Dec-2018","797.53","","Nov-2018","0","","1","Individual","","","","0","0","259658","4","2","3","3","6","18149","86","4","6","12843","56","51500","2","2","5","11","21638","26321","44.1","0","0","12","397","4","4","6","5","22","4","22","0","4","5","7","14","3","9","19","5","12","0","0","0","7","96.4","14.3","0","0","296500","40614","47100","21000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 15.02%","346.76","C","C3","security guard","5 years","MORTGAGE","38000","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","443xx","OH","13.16","3","Jul-1982","0","6","","11","0","5634","37.1%","16","w","9096.85","9096.85","1378.7","1378.70","903.15","475.55","0.0","0.0","0.0","Feb-2019","346.76","Mar-2019","Feb-2019","0","","1","Individual","","","","0","155","77424","0","1","0","0","34","200","10","1","1","1866","42","15200","2","0","0","2","7039","4537","50.1","0","0","34","434","11","11","3","11","6","17","6","0","3","5","5","6","1","8","11","5","11","0","0","0","1","73.3","40","0","0","91403","9323","9100","2000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 13.56%","169.83","C","C1","Payoff Clerk","10+ years","MORTGAGE","35360","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","381xx","TN","11.3","1","Jun-2006","0","21","","9","0","2597","27.3%","15","f","4538.94","4538.94","675.55","675.55","461.06","214.49","0.0","0.0","0.0","Feb-2019","169.83","Mar-2019","Feb-2019","0","","1","Individual","","","","0","1413","69785","0","2","0","1","16","2379","40","3","4","1826","32","9500","0","0","1","5","8723","1174","60.9","0","0","147","85","9","9","2","10","21","9","21","0","1","3","2","2","6","6","7","3","9","0","0","0","3","92.9","50","0","0","93908","4976","3000","6028","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","9750"," 36 months"," 11.06%","327.68","B","B3","","n/a","RENT","44400","Source Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","980xx","WA","11.78","0","Oct-2008","2","40","","15","0","6269","13.1%","25","f","9044.84","8818.72","1295.36","1262.98","955.16","340.20","0.0","0.0","0.0","Feb-2019","327.68","Mar-2019","Feb-2019","0","53","1","Individual","","","","0","520","16440","3","1","1","1","2","10171","100","2","5","404","28","47700","0","3","5","6","1265","20037","2.3","0","0","61","119","1","1","0","1","","1","40","1","2","4","6","8","3","14","22","4","15","0","0","0","3","92","0","0","0","57871","16440","20500","10171","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 16.91%","356.08","C","C5","Key Accounts Manager","2 years","RENT","80000","Not Verified","Oct-2018","Current","n","","","other","Other","021xx","MA","17.72","1","Sep-2006","0","14","","17","0","1942","30.8%","31","w","9120.98","9120.98","1414.93","1414.93","879.02","535.91","0.0","0.0","0.0","Feb-2019","356.08","Mar-2019","Feb-2019","0","25","1","Individual","","","","0","0","59194","0","15","1","1","12","57252","85","0","0","1942","80","6300","0","5","0","1","3482","2058","48.5","0","0","144","142","40","12","0","131","30","","30","3","1","1","1","5","22","2","9","1","17","0","0","0","1","74.2","0","0","0","73669","59194","4000","67369","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
Total amount funded in policy code 1: 2050909275
Total amount funded in policy code 2: 820109297
###Markdown
Load LendingClub datapandas documentation- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.htmlavailable-options)
###Code
import pandas as pd
df = pd.read_csv('LoanStats_2018Q4.csv', skiprows=1, skipfooter=2, engine='python')
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.tail()
###Output
_____no_output_____
###Markdown
Work with strings For machine learning, we usually want to replace strings with numbers.We can get info about which columns have a datatype of "object" (strings)
###Code
df.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Convert `int_rate`
###Code
'13.56%'.strip('%')
type('13.56%'.strip('%'))
float('13.56%'.strip('%'))
type(float('13.56%'.strip('%')))
df['int_rate'] = df['int_rate'].str.strip('%').astype(float) # assign the full converted column, not just .head()
(df['int_rate'] / 100).head()
###Output
_____no_output_____
###Markdown
Define a function to remove percent signs from strings and convert to floats
###Code
string = '13.56%'
def remove_percent(string):
return float(string.strip('%'))
remove_percent(string)
###Output
_____no_output_____
###Markdown
Apply the function to the `int_rate` column
###Code
# df['int_rate'] = df['int_rate'].apply(remove_percent)
# df['int_rate'].apply(remove_percent)
###Output
_____no_output_____
###Markdown
Clean `emp_title`Look at top 20 titles
###Code
df['emp_title'].value_counts().head(20)
###Output
_____no_output_____
###Markdown
How often is `emp_title` null?
###Code
df['emp_title'].isnull().sum()
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values- Capitalize- Strip spaces- Replace `Nan` with `Missing`
###Code
import numpy as np
examples = ['owner', 'Supervisor ',
' Project Manager', np.nan]
def clean_title(x):
if isinstance(x, str):
return x.strip().title()
else:
return 'Unknown'
[clean_title(x) for x in examples]
df['emp_title'] = df['emp_title'].apply(clean_title)
df['emp_title'].head(10)
df['emp_title'].value_counts().head(20)
###Output
_____no_output_____
###Markdown
Create `emp_title_manager`pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
df['emp_title_manager'].value_counts(normalize=True)
df.groupby('emp_title_manager')['int_rate'].mean()
df['emp_title'].nunique()
df['emp_title'].value_counts()
df.isnull().sum().sort_values(ascending=False) / len(df)
###Output
_____no_output_____
###Markdown
Work with dates pandas documentation- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.htmltime-date-components) "You can access these properties via the `.dt` accessor"
###Code
df['issue_d'].head().values
df['issue_d'].describe()
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].describe()
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
df.shape
df['issue_month'].sample(n=10).values
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],
infer_datetime_format=True)
df['earliest_cr_line'].head()
df['days_from_earliest_credit_to_issue'] = (
df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].describe()
25171 / 365
[col for col in df if col.endswith('_d')]
for col in ['last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col], infer_datetime_format=True)
df.describe(include='datetime')
###Output
_____no_output_____
###Markdown
ASSIGNMENT- Replicate the lesson code.- Convert the `term` column from string to integer.- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
df['term'].value_counts(dropna=False)
df['term'] = df['term'].str.strip(' months').astype(int)
df['term'].head(10)
df = df.rename(columns = {'term':'term_months'})
df['term_months'].head(10)
df['loan_status'].value_counts(dropna=False)
df.shape
df['loan_status_is_great'] = df['loan_status'].map({'Current': 1, 'Fully Paid': 1, 'Late (31-120 days)': 0, 'In Grace Period': 0, 'Late (16-30 days)': 0, 'Charged Off': 0 }) #.apply(rate_status) #.str.contains('Current', regex=True)
df['loan_status_is_great'].head(100)
df.shape
df['loan_status_is_great'].value_counts(dropna=False)
123768 + 3438
509 + 441 + 223 + 33
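# A quick cross-check sketch (not required by the assignment): a crosstab of the
# original status against the new flag shows each status landed in the intended bucket.
pd.crosstab(df['loan_status'], df['loan_status_is_great'])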
###Output
_____no_output_____
###Markdown
STRETCH OPTIONSYou can do more with the LendingClub or Instacart datasets.LendingClub options:- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20. - Take initiatve and work on your own ideas!Instacart options:- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)- Take initiative and work on your own ideas!
###Code
df['revol_util'].isnull().sum()
df['revol_util'] = df['revol_util'].fillna(0)
df['revol_util'].isnull().sum()
df['revol_util'] = df['revol_util'].str.strip('%').astype(float)
df['revol_util'].head(10)
df['revol_util'] = df['revol_util'].fillna(0)
df['revol_util'].isnull().sum()
df['revol_util'].head(10)
df['revol_util'] = df['revol_util'] / 100
df['revol_util'].head(10)
###Output
_____no_output_____
###Markdown
You can uncomment and run the cells below to re-download and extract the Instacart data
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data[Source](https://www.lendingclub.com/info/download-data.action)
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
###Output
Notes offered by Prospectus (https://www.lendingclub.com/info/prospectus.action)
"id","member_id","loan_amnt","funded_amnt","funded_amnt_inv","term","int_rate","installment","grade","sub_grade","emp_title","emp_length","home_ownership","annual_inc","verification_status","issue_d","loan_status","pymnt_plan","url","desc","purpose","title","zip_code","addr_state","dti","delinq_2yrs","earliest_cr_line","inq_last_6mths","mths_since_last_delinq","mths_since_last_record","open_acc","pub_rec","revol_bal","revol_util","total_acc","initial_list_status","out_prncp","out_prncp_inv","total_pymnt","total_pymnt_inv","total_rec_prncp","total_rec_int","total_rec_late_fee","recoveries","collection_recovery_fee","last_pymnt_d","last_pymnt_amnt","next_pymnt_d","last_credit_pull_d","collections_12_mths_ex_med","mths_since_last_major_derog","policy_code","application_type","annual_inc_joint","dti_joint","verification_status_joint","acc_now_delinq","tot_coll_amt","tot_cur_bal","open_acc_6m","open_act_il","open_il_12m","open_il_24m","mths_since_rcnt_il","total_bal_il","il_util","open_rv_12m","open_rv_24m","max_bal_bc","all_util","total_rev_hi_lim","inq_fi","total_cu_tl","inq_last_12m","acc_open_past_24mths","avg_cur_bal","bc_open_to_buy","bc_util","chargeoff_within_12_mths","delinq_amnt","mo_sin_old_il_acct","mo_sin_old_rev_tl_op","mo_sin_rcnt_rev_tl_op","mo_sin_rcnt_tl","mort_acc","mths_since_recent_bc","mths_since_recent_bc_dlq","mths_since_recent_inq","mths_since_recent_revol_delinq","num_accts_ever_120_pd","num_actv_bc_tl","num_actv_rev_tl","num_bc_sats","num_bc_tl","num_il_tl","num_op_rev_tl","num_rev_accts","num_rev_tl_bal_gt_0","num_sats","num_tl_120dpd_2m","num_tl_30dpd","num_tl_90g_dpd_24m","num_tl_op_past_12m","pct_tl_nvr_dlq","percent_bc_gt_75","pub_rec_bankruptcies","tax_liens","tot_hi_cred_lim","total_bal_ex_mort","total_bc_limit","total_il_high_credit_limit","revol_bal_joint","sec_app_earliest_cr_line","sec_app_inq_last_6mths","sec_app_mort_acc","sec_app_open_acc","sec_app_revol_util","sec_app_open_act_il","sec_app_num_rev_accts","sec_app_chargeoff_within_12_mths","sec_app_collections_12_mths_ex_med","sec_app_mths_since_last_major_derog","hardship_flag","hardship_type","hardship_reason","hardship_status","deferral_term","hardship_amount","hardship_start_date","hardship_end_date","payment_plan_start_date","hardship_length","hardship_dpd","hardship_loan_status","orig_projected_additional_accrued_interest","hardship_payoff_balance_amount","hardship_last_payment_amount","disbursement_method","debt_settlement_flag","debt_settlement_flag_date","settlement_status","settlement_date","settlement_amount","settlement_percentage","settlement_term"
"","","2500","2500","2500"," 36 months"," 13.56%","84.92","C","C1","Chef","10+ years","RENT","55000","Not Verified","Dec-2018","Current","n","","","debt_consolidation","Debt consolidation","109xx","NY","18.24","0","Apr-2001","1","","45","9","1","4341","10.3%","34","w","2386.02","2386.02","167.02","167.02","113.98","53.04","0.0","0.0","0.0","Feb-2019","84.92","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","16901","2","2","1","2","2","12560","69","2","7","2137","28","42000","1","11","2","9","1878","34360","5.9","0","0","140","212","1","1","0","1","","2","","0","2","5","3","3","16","7","18","5","9","0","0","0","3","100","0","1","0","60124","16901","36500","18124","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","30000","30000","30000"," 60 months"," 18.94%","777.23","D","D2","Postmaster ","10+ years","MORTGAGE","90000","Source Verified","Dec-2018","Current","n","","","debt_consolidation","Debt consolidation","713xx","LA","26.52","0","Jun-1987","0","71","75","13","1","12315","24.2%","44","w","29387.75","29387.75","1507.11","1507.11","612.25","894.86","0.0","0.0","0.0","Feb-2019","777.23","Mar-2019","Feb-2019","0","","1","Individual","","","","0","1208","321915","4","4","2","3","3","87153","88","4","5","998","57","50800","2","15","2","10","24763","13761","8.3","0","0","163","378","4","3","3","4","","4","","0","2","4","4","9","27","8","14","4","13","0","0","0","6","95","0","1","0","372872","99468","15000","94072","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 17.97%","180.69","D","D1","Administrative","6 years","MORTGAGE","59280","Source Verified","Dec-2018","Current","n","","","debt_consolidation","Debt consolidation","490xx","MI","10.51","0","Apr-2011","0","","","8","0","4599","19.1%","13","w","4787.21","4787.21","353.89","353.89","212.79","141.10","0.0","0.0","0.0","Feb-2019","180.69","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","110299","0","1","0","2","14","7150","72","0","2","0","35","24100","1","5","0","4","18383","13800","0","0","0","87","92","15","14","2","77","","14","","0","0","3","3","3","4","6","7","3","8","0","0","0","0","100","0","0","0","136927","11749","13800","10000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","4000","4000","4000"," 36 months"," 18.94%","146.51","D","D2","IT Supervisor","10+ years","MORTGAGE","92000","Source Verified","Dec-2018","Current","n","","","debt_consolidation","Debt consolidation","985xx","WA","16.74","0","Feb-2006","0","","","10","0","5468","78.1%","13","w","3831.93","3831.93","286.71","286.71","168.07","118.64","0.0","0.0","0.0","Feb-2019","146.51","Mar-2019","Feb-2019","0","","1","Individual","","","","0","686","305049","1","5","3","5","5","30683","68","0","0","3761","70","7000","2","4","3","5","30505","1239","75.2","0","0","62","154","64","5","3","64","","5","","0","1","2","1","2","7","2","3","2","10","0","0","0","3","100","100","0","0","385183","36151","5000","44984","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","30000","30000","30000"," 60 months"," 16.14%","731.78","C","C4","Mechanic","10+ years","MORTGAGE","57250","Not Verified","Dec-2018","Current","n","","","debt_consolidation","Debt consolidation","212xx","MD","26.35","0","Dec-2000","0","","","12","0","829","3.6%","26","w","29339.02","29339.02","1423.21","1423.21","660.98","762.23","0.0","0.0","0.0","Feb-2019","731.78","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","116007","3","5","3","5","4","28845","89","2","4","516","54","23100","1","0","0","9","9667","8471","8.9","0","0","53","216","2","2","2","2","","13","","0","2","2","3","8","9","6","15","2","12","0","0","0","5","92.3","0","0","0","157548","29674","9300","32332","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5550","5550","5550"," 36 months"," 15.02%","192.45","C","C3","Director COE","10+ years","MORTGAGE","152500","Not Verified","Dec-2018","Current","n","","","credit_card","Credit card refinancing","461xx","IN","37.94","0","Sep-2002","3","","","18","0","53854","48.1%","44","w","5302.50","5302.50","377.95","377.95","247.50","130.45","0.0","0.0","0.0","Feb-2019","192.45","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","685749","1","7","2","3","4","131524","72","1","4","17584","58","111900","2","4","6","8","40338","23746","64","0","0","195","176","10","4","6","20","","3","","0","4","6","6","10","23","9","15","7","18","0","0","0","4","100","60","0","0","831687","185378","65900","203159","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","2000","2000","2000"," 36 months"," 17.97%","72.28","D","D1","Account Manager","4 years","RENT","51000","Source Verified","Dec-2018","Current","n","","","debt_consolidation","Debt consolidation","606xx","IL","2.4","0","Nov-2004","1","","","1","0","0","","9","w","1914.71","1914.71","141.56","141.56","85.29","56.27","0.0","0.0","0.0","Feb-2019","72.28","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","854","0","0","2","3","7","0","","0","1","0","100","0","0","0","1","4","854","","","0","0","169","40","23","7","0","","","1","","0","0","0","0","3","5","0","3","0","1","0","0","0","2","100","","0","0","854","854","0","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","6000","6000","6000"," 36 months"," 13.56%","203.79","C","C1","Assistant Director","10+ years","RENT","65000","Source Verified","Dec-2018","Current","n","","","credit_card","Credit card refinancing","460xx","IN","30.1","0","Nov-1997","0","","","19","0","38476","69.3%","37","w","5864.01","5864.01","201.53","201.53","135.99","65.54","0.0","0.0","0.0","Feb-2019","208.31","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","91535","0","5","0","1","23","53059","87","0","2","9413","74","55500","1","2","0","3","5085","3034","90.8","0","0","169","253","13","13","1","14","","13","","0","7","12","8","10","15","14","20","12","19","0","0","0","0","100","85.7","0","0","117242","91535","33100","61742","","","","","","","","","","","","N","","","","","","","","","","","","","","","DirectPay","N","","","","","",""
###Markdown
Load LendingClub datapandas documentation- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.htmlavailable-options)
###Code
import pandas as pd
df = pd.read_csv('LoanStats_2018Q4.csv', skiprows=1, skipfooter=2, engine='python')
pd.options.display.max_columns = 500
pd.options.display.max_rows = 100
df.head()
###Output
_____no_output_____
###Markdown
Work with strings For machine learning, we usually want to replace strings with numbers.We can get info about which columns have a datatype of "object" (strings)
###Code
df.describe(exclude='number')
df['int_rate'] = df['int_rate'].str.replace("%","").astype(float)
df['int_rate'].head()
df['int_rate'].isnull().sum()
###Output
_____no_output_____
###Markdown
Convert `int_rate`Define a function to remove percent signs from strings and convert to floats
###Code
###Output
_____no_output_____
###Markdown
Apply the function to the `int_rate` column
###Code
###Output
_____no_output_____
###Markdown
Clean `emp_title`Look at top 20 titles
###Code
df['emp_title'].head(20)
###Output
_____no_output_____
###Markdown
How often is `emp_title` null?
###Code
df['emp_title'].isnull().sum()
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values
###Code
import numpy as np
test = ['Postmaster ', ' Account Manager', 'respritory therapist', np.nan]
def clean_emp(x):
if isinstance(x, str):
x = x.strip().title()
else:
x = 'Unknown'
return x
[clean_emp(x) for x in test]
df['emp_title'] = df['emp_title'].apply(clean_emp)
df['emp_title'].isnull().sum()
###Output
_____no_output_____
###Markdown
Create `emp_title_manager`pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
df['emp_title_manager'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Work with dates pandas documentation- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.htmltime-date-components) "You can access these properties via the `.dt` accessor"
###Code
df['issue_d'].head()
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].head()
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'], infer_datetime_format=True)
df['days_from_earliest_credit_to_issue'] = (df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].head()
###Output
_____no_output_____
###Markdown
ASSIGNMENT- Replicate the lesson code.- Convert the `term` column from string to integer.- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Convert the term column from string to integer
###Code
df['term'].nunique()
df['term'] = df['term'].str.strip('months').astype(int)
df['term'].dtype
###Output
_____no_output_____
###Markdown
Make a column named loan_status_is_great. It should contain the integer 1 if loan_status is "Current" or "Fully Paid." Else it should contain the integer 0.
###Code
df['loan_status'].value_counts(normalize=True)
df['loan_status'].isnull().sum()
def great_loan(x):
if x == 'Current' or x == 'Fully Paid':
return 1
else:
return 0
df['loan_status_is_great'] = [great_loan(x) for x in df['loan_status']]
df['loan_status_is_great'].value_counts(normalize=True)
df['loan_status_is_great'].dtype
###Output
_____no_output_____
###Markdown
Make last_pymnt_d_month and last_pymnt_d_year columns.
###Code
df['last_pymnt_d'].head()
df['last_pymnt_d'].isnull().sum()
df['last_pymnt_d'].nunique()
df['last_pymnt_d'].value_counts()
df['last_pymnt_d'] = pd.to_datetime(df['last_pymnt_d'], infer_datetime_format=True)
df['last_pymnt_d_year'] = df['last_pymnt_d'].dt.year
df['last_pymnt_d_year'].value_counts()
df['last_pymnt_d_month'] = df['last_pymnt_d'].dt.month
df['last_pymnt_d_month'].value_counts()
###Output
_____no_output_____
###Markdown
STRETCH OPTIONSYou can do more with the LendingClub or Instacart datasets.LendingClub options:- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20. There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
###Code
df.head()
df['revol_util'].isnull().sum()
def convert(x):
if isinstance(x, str):
x = x.strip('%')
return float(x)
else:
return x
df['revol_util'] = [ convert(x) for x in df['revol_util'] ]
df['revol_util'].head()
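# A vectorized alternative sketch for the same cleanup (shown on the already
# converted column, so it is effectively a no-op here): strip the sign across
# the whole Series and let pd.to_numeric coerce anything unparseable to NaN.
pd.to_numeric(df['revol_util'].astype(str).str.strip('%'), errors='coerce').head()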
###Output
_____no_output_____
###Markdown
Modify the emp_title column to replace titles with 'Other' if the title is not in the top 20.- Note: I am including the top 21 as I view 'Unknown' as distinct from the rest of the job titles.
###Code
df['emp_title'].value_counts()[:21]
top_titles = df['emp_title'].value_counts()[:21]
top_titles = top_titles.reset_index()
top_titles = top_titles['index'].as_matrix()
top_titles = frozenset(top_titles)
def emp_other(x):
if x in top_titles:
return x
else:
return 'Other'
df['emp_title'] = df['emp_title'].apply(emp_other)
df['emp_title'].value_counts()
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data[Source](https://www.lendingclub.com/info/download-data.action)
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
#the first row of the file is a notes line (not data), which is why skiprows=1 is used when loading below
!tail LoanStats_2018Q4.csv
###Output
_____no_output_____
###Markdown
Load LendingClub datapandas documentation- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.htmlavailable-options)
###Code
import pandas as pd
df = pd.read_csv('LoanStats_2018Q4.csv', skiprows=1, skipfooter=2, engine='python')
df.shape
df.head()
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.tail()
df.columns.tolist()
#backend of data imported needs to be checked! Add to checklist!
df.isnull().sum().sort_values()
#gives you a count of the nulls sorted low to high
df.tail()
df = pd.read_csv('LoanStats_2018Q4.csv',skiprows=1,skipfooter=2,engine='python')
#engine='python' added to silence the ParserWarning: the default 'c' engine does not support skipfooter
df.head()
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.head()
df.head().T
df.tail()
###Output
_____no_output_____
###Markdown
Work with strings For machine learning, we usually want to replace strings with numbers.We can get info about which columns have a datatype of "object" (strings)
###Code
df.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Convert `int_rate`Define a function to remove percent signs from strings and convert to floats
###Code
'13.56%'.strip('%')
#dollar signs & percent signs will probably need to be stripped
###Output
_____no_output_____
###Markdown
Apply the function to the `int_rate` column
###Code
type(('13.56%').strip('%'))
float('13.56%'.strip('%'))
type(float('13.56%'.strip('%')))
df['int_rate']
df['int_rate'].str.strip('%').head() # preview the stripped strings
df['int_rate'] = df['int_rate'].str.strip('%').astype(float)
# re-running the .str conversion now would fail, because the column is already float
df['int_rate'].head()
(df['int_rate']/100).head()
df['int_rate'].head()
#define a function to remove percent signs from strings
string = '13.56%'
def remove_percent(string):
return float(string.strip('%'))
remove_percent(string)
# .apply(remove_percent) would do the same job on the raw strings;
# the column is already float here, so re-running either conversion would raise an error
# df['int_rate'] = df['int_rate'].apply(remove_percent)
###Output
_____no_output_____
###Markdown
Clean `emp_title`Look at top 20 titles
###Code
df['emp_title'].head(20)
#Unknown was NaN
df['emp_title'].value_counts().head(20)
#employee titles
###Output
_____no_output_____
###Markdown
**How often is emp_title null?**
###Code
df['emp_title'].isnull().sum()
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values- Capitalize with .title()- Strip spaces with .strip()- Replace NaN with 'Unknown'- x = the function's parameter- np.nan (NumPy's missing-value marker) is caught by the else branch and replaced with 'Unknown'- need to learn more about np.nan- Not on the Sprint Challenge, but add it to the checklist
###Code
import numpy as np
examples = ['owner', 'Supervisor ',
' Project Manager', np.nan]
def clean_title(x):
if isinstance(x, str):
return x.strip().title()
else:
return 'Unknown'
[clean_title(x) for x in examples]
df['emp_title'] = df['emp_title'].apply(clean_title)
###Output
_____no_output_____
###Markdown
How often is `emp_title` null?
###Code
df['emp_title'].head(10)
#Note that IT became It
#Note that III became Iii
#Not sure if Coe should be COE
#Probably due to .title() applied to each word
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values
###Code
df['emp_title'].value_counts().head(20)
#Note again, that RN is Rn
#Unknown replaces NaN. NaN will not show up in .value_counts()
#You can combine Rn and Registered Nurse.
#Best to combine BEFORE applying mass code for Title/Strip/NaN issues.
###Output
_____no_output_____
###Markdown
Create `emp_title_manager`pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
#creating a way to find out if manager is in the string
#find it by using str.contains('Manager')
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
df['emp_title_manager'].head(10)
df['emp_title_manager'].value_counts(normalize=True)
#see what interest rates titles with managers have
#groupby code with () and [] then applied the .mean() for averages
df.groupby('emp_title_manager')['int_rate'].mean()
#nunique is code for number of unique values
#number of unique titles is 34,902 different titles
df['emp_title'].nunique()
#this will show you every title and the number of times each title occurs using .value_counts()
df['emp_title'].value_counts()
#Shows the fraction of missing values in each column, sorted from most to least missing
df.isnull().sum().sort_values(ascending=False) / len(df)
###Output
_____no_output_____
###Markdown
Work with dates pandas documentation- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.htmltime-date-components) "You can access these properties via the `.dt` accessor"
###Code
df['issue_d'].head().values
df['issue_d'].describe()
#infer_datetime_format can speed up parsing
#This is not a loop: the converted Series is assigned back to the same column,
#overwriting the string dates with proper datetime values
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].head().values
df['issue_d'].describe()
#The .dt accessor exposes date components (year, month, day, etc.) of a datetime column
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
df.shape
df['issue_month'].sample(n=10).values
#Earliest Credit Line
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],
infer_datetime_format=True)
df['earliest_cr_line'].head()
#Days from the earliest credit line to the issue date
#.dt.days extracts the whole-day count from the Timedelta
df['days_from_earliest_credit_to_issue'] = (
df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].describe()
#70-years from when they opened a credit card.
#Something would be wrong if there was 100+ years.
#Good to check for 'dirty data' and think about that for the checklist
25171 / 365
#find all columns whose names end with '_d'
[col for col in df if col.endswith('_d')]
#loop over several date columns and convert each one to datetime
for col in ['last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col], infer_datetime_format=True)
df.describe(include='datetime')
###Output
_____no_output_____
###Markdown
ASSIGNMENT- Replicate the lesson code.- Convert the `term` column from string to integer.- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
df['term'].value_counts(dropna=False)
df['loan_status'].value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
STRETCH OPTIONSYou can do more with the LendingClub or Instacart datasets.LendingClub options:- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20. - Take initiatve and work on your own ideas!Instacart options:- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)- Take initiative and work on your own ideas! You can uncomment and run the cells below to re-download and extract the Instacart data
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Make featuresObjectives- understand the purpose of feature engineering- work with strings in pandas- work with dates and times in pandasLinks- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)- Python Data Science Handbook - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series Get LendingClub data
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q3.csv.zip
!unzip LoanStats_2018Q3.csv.zip
!head LoanStats_2018Q3.csv
!tail LoanStats_2018Q3.csv
###Output
"","","12000","12000","12000"," 36 months"," 14.03%","410.31","C","C2","Medical Support Staff","10+ years","OWN","53414","Source Verified","Jul-2018","Current","n","","","home_improvement","Home improvement","136xx","NY","25.68","0","Mar-1997","0","","","8","0","6527","44.4%","20","w","10618.01","10618.01","2037.52","2037.52","1381.99","655.53","0.0","0.0","0.0","Dec-2018","410.31","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","29869","0","2","0","2","13","23342","73","0","0","3977","64","14700","1","1","0","2","3734","7173","47.6","0","0","139","255","42","13","0","66","","18","","0","2","2","5","9","8","6","12","2","8","0","0","0","0","100","40","0","0","46792","29869","13700","32092","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 16.46%","176.93","C","C5","Labor Worker","3 years","MORTGAGE","57000","Not Verified","Jul-2018","Current","n","","","debt_consolidation","Debt consolidation","781xx","TX","33.09","0","Jul-2008","0","25","","11","0","7449","41.4%","47","w","4443.20","4443.20","875.51","875.51","556.80","318.71","0.0","0.0","0.0","Dec-2018","176.93","Jan-2019","Dec-2018","0","","1","Individual","","","","0","236","45088","1","5","3","4","7","37639","58","1","2","0","54","18000","3","10","1","6","4099","","","0","0","119","80","1","1","0","","","8","25","0","0","4","0","1","39","6","8","4","11","0","0","0","4","93.6","","0","0","83496","45088","0","65496","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 6.19%","152.55","A","A2","Host/cashier","4 years","RENT","22000","Not Verified","Jul-2018","Current","n","","","credit_card","Credit card refinancing","731xx","OK","11.67","0","Apr-2001","0","","","6","0","7118","23.3%","19","w","4359.64","4359.64","759.31","759.31","640.36","118.95","0.0","0.0","0.0","Dec-2018","152.55","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","7118","1","0","0","0","127","0","","1","1","3841","23","30500","1","0","0","1","1186","23382","23.3","0","0","168","206","3","3","0","3","","17","","0","3","3","6","10","4","6","15","3","6","0","0","0","1","100","0","0","0","30500","7118","30500","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","20000","20000","20000"," 36 months"," 15.49%","698.12","C","C4","Client manager","10+ years","MORTGAGE","80000","Verified","Jul-2018","Current","n","","","debt_consolidation","Debt consolidation","913xx","CA","22.14","0","Apr-1994","0","43","","27","0","28994","63%","47","w","17742.70","17742.70","3456.18","3456.18","2257.30","1198.88","0.0","0.0","0.0","Dec-2018","698.12","Jan-2019","Dec-2018","0","43","1","Individual","","","","0","0","314886","0","1","0","0","37","15336","61","1","9","5067","64","46000","1","6","0","10","11662","3359","83.8","0","0","129","186","10","10","3","10","43","13","43","3","8","22","8","15","6","24","36","22","27","0","0","0","1","93.6","75","0","0","349879","47229","20700","25295","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 11.05%","327.63","B","B4","Operator","5 years","RENT","60000","Not Verified","Jul-2018","Current","n","","","credit_card","Credit card refinancing","952xx","CA","10.72","0","Apr-2015","0","27","","8","0","2301","28.1%","12","w","8800.60","8800.60","1628.94","1628.94","1199.40","429.54","0.0","0.0","0.0","Dec-2018","327.63","Jan-2019","Dec-2018","0","27","1","Individual","","","","0","0","9572","0","2","1","1","12","7271","60","3","4","1624","47","8200","0","0","1","5","1197","3099","42.6","0","0","34","38","9","9","0","11","","9","","1","3","3","5","6","3","6","8","3","8","0","0","0","4","100","20","0","0","20377","9572","5400","12177","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","4000","4000","4000"," 36 months"," 16.46%","141.54","C","C5","jewelry","5 years","RENT","9600","Source Verified","Jul-2018","Current","n","","","debt_consolidation","Debt consolidation","925xx","CA","14.5","1","Nov-2014","0","15","","5","0","3545","45.4%","5","w","3554.58","3554.58","700.38","700.38","445.42","254.96","0.0","0.0","0.0","Dec-2018","141.54","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","3545","1","0","0","0","","0","","1","2","1515","45","7800","0","0","0","2","709","285","84.2","0","0","","43","1","1","0","33","","","15","0","1","3","1","1","0","5","5","3","5","0","0","0","1","80","100","0","0","7800","3545","1800","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
Total amount funded in policy code 1: 2063142975
Total amount funded in policy code 2: 823319310
###Markdown
Load LendingClub data
###Code
import pandas as pd
df = pd.read_csv('LoanStats_2018Q3.csv', skiprows=1, skipfooter=2, engine='python')
df.shape
df.head()
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.head().T
df['hardship_payoff_balance_amount'].isnull().sum()/len(df)
###Output
_____no_output_____
###Markdown
Work with strings
###Code
import numpy as np
def all_numeric(df):
return all((df.dtypes==np.number)|(df.dtypes==bool))
def no_nulls(df):
return not any(df.isnull().sum())
def ready_for_sklearn(df):
return all_numeric(df) and no_nulls(df)
ready_for_sklearn(df)
all_numeric(df)
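# A sketch toward the stretch goal below: keep only the numeric columns that
# have no missing values, one simple way to get a model-ready subset.
numeric_df = df.select_dtypes(include='number').dropna(axis='columns')
numeric_df.shape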
df.select_dtypes('object').info()
df.int_rate.head().values
#Define a function to remove percent signs from strings and convert to floats
def remove_percent(string):
return float(string.strip('%'))
remove_percent('13.56%') # quick check on a sample value (no 'string' variable was defined in this notebook)
df['int_rate'] = df['int_rate'].apply(remove_percent)
df['int_rate'].head()
#employee title
df['emp_title'].value_counts().head(20)
df['emp_title'].value_counts().head(20).index
df['emp_title'].isnull().sum()
examples = ['owner','Supervisor ',' Project manager', 42, np.nan]
def clean_title(x):
if isinstance(x,str):
return x.strip().title()
else:
return 'Unknown'
for example in examples:
print(clean_title(example))
df['emp_title'] = df['emp_title'].apply(clean_title)
df['emp_title'].value_counts().head(20)
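#a vectorized sketch (assumption: equivalent to clean_title above, had we not already
#applied it) -- pandas string methods skip NaN, so missing titles are filled afterwards
df['emp_title'].str.strip().str.title().fillna('Unknown').value_counts().head(20)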
#to square int_rate write it like this
df.int_rate**2
#create emp_title_manager column
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
df['emp_title_manager'].value_counts()
###Output
_____no_output_____
###Markdown
Work with dates
###Code
df['issue_d'].head().values
df['issue_d'] = pd.to_datetime(df['issue_d'],infer_datetime_format=True)
df['issue_d'].head().values
df['issue_d'].describe()
df['issue_year']=df['issue_d'].dt.year
df['issue_month']=df['issue_d'].dt.month
df['issue_month'].sample(n=10).values
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],infer_datetime_format=True)
df['days_from_earliest_credit_to_issue'] = (df['issue_d']-df['earliest_cr_line']).dt.days
[col for col in df if col.endswith('_d')]
for col in ['last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col],infer_datetime_format=True)
###Output
_____no_output_____
###Markdown
Load Instacart data

Let's return to the dataset of [3 Million Instacart Orders](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2). If necessary, uncomment and run the cells below to re-download and extract the data.
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
###Output
_____no_output_____
###Markdown
Here's a list of the six CSV filenames
###Code
%cd instacart_2017_05_01
!ls -lh
###Output
_____no_output_____
###Markdown
Load the CSV files with pandas
###Code
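# A possible sketch only (assumption: the archive was extracted and we are inside
# instacart_2017_05_01, which contains the standard six Instacart CSV files).
import pandas as pd

filenames = ['aisles', 'departments', 'order_products__prior',
             'order_products__train', 'orders', 'products']
instacart = {name: pd.read_csv(name + '.csv') for name in filenames}
instacart['orders'].head()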
###Output
_____no_output_____
###Markdown
Do feature engineering
###Code
###Output
_____no_output_____
###Markdown
ASSIGNMENT

- Replicate the lesson code.
- Convert the term column from string to integer.
- Make a column named loan_status_is_great. It should contain the integer 1 if loan_status is "Current" or "Fully Paid." Else it should contain the integer 0.
- Make last_pymnt_d_month and last_pymnt_d_year columns.
###Code
df.head()
def remove_months(string):
    return int(string.strip(' months'))  #int, since the assignment asks for integers
df['term'].value_counts().head(20)
df['term'] = df['term'].apply(remove_months)
#Convert the term column from string to integer.
df['term'].head()
#Make a column named loan_status_is_great. It should contain the integer 1 if loan_status is "Current" or "Fully Paid." Else it should contain the integer 0.
df['loan_status_is_great']=df['loan_status'].apply(lambda x: 1 if x == 'Current' or x == 'Fully Paid' else 0)
df['loan_status'].value_counts()
df['loan_status_is_great'].value_counts()
#Make last_pymnt_d_month and last_pymnt_d_year columns.
df['last_pymnt_d_month']=df['last_pymnt_d'].dt.month
df['last_pymnt_d_year']=df['last_pymnt_d'].dt.year
df['last_pymnt_d_year'].sample(n=10).values
df['last_pymnt_d_month'].sample(n=10).values
df['last_pymnt_d_month'].value_counts()
df['last_pymnt_d_year'].value_counts()
###Output
_____no_output_____
###Markdown
STRETCH OPTIONS

You can do more with the LendingClub or Instacart datasets.

LendingClub options:
- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
- Modify the emp_title column to replace titles with 'Other' if the title is not in the top 20.
- Process the dataframe so that ready_for_sklearn(df) returns True. You can drop columns, or select the subset of numeric columns with no missing values. (Or you can try automating the process to handle missing values and convert objects to numbers!)
- Take initiative and work on your own ideas!

Instacart options:
- Read Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera, especially the Feature Engineering section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)
- Read and replicate parts of Simple Exploration Notebook - Instacart. (It's the Python Notebook with the most upvotes for this Kaggle competition.)
- Take initiative and work on your own ideas!
###Code
#There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
df['revol_util'].head()
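#one possible way to finish this (a sketch, not the graded answer): .str methods
#propagate NaN, so missing values survive the conversion and can be filled afterwards
revol = df['revol_util'].str.rstrip('%').astype(float)
revol = revol.fillna(revol.median())
revol.isnull().sum()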
toptwenty = df['emp_title'].value_counts().head(20).tolist()
top_twenty_jobs = list(df['emp_title'].value_counts().head(20).index)
df['emp_title'] = np.where(df['emp_title'].isin(top_twenty_jobs), df['emp_title'], 'Other')
#Modify the emp_title column to replace titles with 'Other' if the title is not in the top 20.
df['emp_title'].value_counts()
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_

Make features

Objectives
- understand the purpose of feature engineering
- work with strings in pandas
- work with dates and times in pandas

Links
- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
- Python Data Science Handbook
  - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations
  - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series

Get LendingClub data
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q3.csv.zip
!unzip LoanStats_2018Q3.csv.zip
!head LoanStats_2018Q3.csv
###Output
Notes offered by Prospectus (https://www.lendingclub.com/info/prospectus.action)
"id","member_id","loan_amnt","funded_amnt","funded_amnt_inv","term","int_rate","installment","grade","sub_grade","emp_title","emp_length","home_ownership","annual_inc","verification_status","issue_d","loan_status","pymnt_plan","url","desc","purpose","title","zip_code","addr_state","dti","delinq_2yrs","earliest_cr_line","inq_last_6mths","mths_since_last_delinq","mths_since_last_record","open_acc","pub_rec","revol_bal","revol_util","total_acc","initial_list_status","out_prncp","out_prncp_inv","total_pymnt","total_pymnt_inv","total_rec_prncp","total_rec_int","total_rec_late_fee","recoveries","collection_recovery_fee","last_pymnt_d","last_pymnt_amnt","next_pymnt_d","last_credit_pull_d","collections_12_mths_ex_med","mths_since_last_major_derog","policy_code","application_type","annual_inc_joint","dti_joint","verification_status_joint","acc_now_delinq","tot_coll_amt","tot_cur_bal","open_acc_6m","open_act_il","open_il_12m","open_il_24m","mths_since_rcnt_il","total_bal_il","il_util","open_rv_12m","open_rv_24m","max_bal_bc","all_util","total_rev_hi_lim","inq_fi","total_cu_tl","inq_last_12m","acc_open_past_24mths","avg_cur_bal","bc_open_to_buy","bc_util","chargeoff_within_12_mths","delinq_amnt","mo_sin_old_il_acct","mo_sin_old_rev_tl_op","mo_sin_rcnt_rev_tl_op","mo_sin_rcnt_tl","mort_acc","mths_since_recent_bc","mths_since_recent_bc_dlq","mths_since_recent_inq","mths_since_recent_revol_delinq","num_accts_ever_120_pd","num_actv_bc_tl","num_actv_rev_tl","num_bc_sats","num_bc_tl","num_il_tl","num_op_rev_tl","num_rev_accts","num_rev_tl_bal_gt_0","num_sats","num_tl_120dpd_2m","num_tl_30dpd","num_tl_90g_dpd_24m","num_tl_op_past_12m","pct_tl_nvr_dlq","percent_bc_gt_75","pub_rec_bankruptcies","tax_liens","tot_hi_cred_lim","total_bal_ex_mort","total_bc_limit","total_il_high_credit_limit","revol_bal_joint","sec_app_earliest_cr_line","sec_app_inq_last_6mths","sec_app_mort_acc","sec_app_open_acc","sec_app_revol_util","sec_app_open_act_il","sec_app_num_rev_accts","sec_app_chargeoff_within_12_mths","sec_app_collections_12_mths_ex_med","sec_app_mths_since_last_major_derog","hardship_flag","hardship_type","hardship_reason","hardship_status","deferral_term","hardship_amount","hardship_start_date","hardship_end_date","payment_plan_start_date","hardship_length","hardship_dpd","hardship_loan_status","orig_projected_additional_accrued_interest","hardship_payoff_balance_amount","hardship_last_payment_amount","disbursement_method","debt_settlement_flag","debt_settlement_flag_date","settlement_status","settlement_date","settlement_amount","settlement_percentage","settlement_term"
"","","20000","20000","20000"," 60 months"," 17.97%","507.55","D","D1","Motor Vehicle Operator","7 years","RENT","68000","Source Verified","Sep-2018","Current","n","","","debt_consolidation","Debt consolidation","254xx","WV","15.8","0","Feb-2009","1","30","118","11","1","13483","69.1%","26","w","19580.78","19580.78","975.17","975.17","419.22","555.95","0.0","0.0","0.0","Dec-2018","507.55","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","194908","0","2","0","2","17","45299","93","0","1","9815","86","19500","1","4","2","5","19491","2185","81.8","0","0","115","94","13","7","4","25","","2","30","0","1","3","2","4","11","8","10","3","11","0","0","0","1","92.3","50","1","0","205625","58782","12000","48733","","","","","","","","","","","","N","","","","","","","","","","","","","","","DirectPay","N","","","","","",""
"","","25000","25000","25000"," 60 months"," 13.56%","576.02","C","C1","Firefighter","7 years","MORTGAGE","61000","Verified","Sep-2018","Current","n","","","major_purchase","Major purchase","770xx","TX","16.74","0","Jan-2009","0","","","5","0","6299","48.5%","14","w","24109.45","24109.45","1567.98","1567.98","890.55","677.43","0.0","0.0","0.0","Dec-2018","576.02","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","204007","1","2","0","1","19","13862","37","0","0","0","40","13000","0","7","4","2","40801","1500","0","0","0","54","116","26","3","1","74","","4","","0","0","1","1","2","5","2","8","1","5","0","0","0","1","100","0","0","0","234573","20161","1500","37273","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","30000","30000","30000"," 36 months"," 18.94%","1098.78","D","D2","","< 1 year","RENT","100000","Source Verified","Sep-2018","Current","n","","","debt_consolidation","Debt consolidation","300xx","GA","16.07","0","Mar-2008","1","","114","6","1","14574","70.1%","9","w","28739.57","28739.57","2134.43","2134.43","1260.43","874.00","0.0","0.0","0.0","Dec-2018","1098.78","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","44048","1","2","1","1","1","29474","69","1","2","8521","69","20800","1","0","2","3","8810","5226","73.6","0","0","126","104","11","1","0","17","","1","","0","2","2","2","2","4","4","5","2","6","0","0","0","2","100","0","1","0","63636","44048","19800","42836","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","6000","6000","6000"," 36 months"," 7.84%","187.58","A","A4","","n/a","RENT","30000","Not Verified","Sep-2018","Current","n","","","debt_consolidation","Debt consolidation","923xx","CA","5.44","0","Apr-2000","0","","104","8","1","5936","34.5%","11","w","5702.27","5702.27","369.93","369.93","297.73","72.20","0.0","0.0","0.0","Dec-2018","187.58","Jan-2019","Dec-2018","0","","1","Individual","","","","0","350","5936","0","0","0","1","23","0","","1","4","2913","35","17200","2","0","0","5","848","7698","35.9","0","0","139","221","7","7","0","7","","18","","0","2","3","4","4","2","8","9","3","8","0","0","0","1","100","33.3","1","0","17200","5936","12000","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","DirectPay","N","","","","","",""
"","","10650","10650","10650"," 36 months"," 7.84%","332.95","A","A4","","n/a","RENT","28000","Verified","Sep-2018","Current","n","","","medical","Medical expenses","430xx","OH","16.89","0","Nov-2002","0","","","3","0","37","0.3%","3","w","10121.54","10121.54","656.62","656.62","528.46","128.16","0.0","0.0","0.0","Dec-2018","332.95","Jan-2019","Dec-2018","0","","1","Joint App","43000","14.01","Source Verified","0","0","18254","0","1","0","1","16","18217","81","0","0","0","54","11500","1","1","0","1","6085","","","0","0","16","190","113","16","0","","","16","","0","0","1","0","0","1","2","2","2","3","0","0","0","0","100","","0","0","33876","18254","0","22376","2024","Oct-1996","0","0","8","18.7","1","9","0","0","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","9000","9000","9000"," 36 months"," 6.11%","274.25","A","A1","","n/a","OWN","25000","Not Verified","Sep-2018","Current","n","","","credit_card","Credit card refinancing","224xx","VA","12.1","0","Apr-1999","0","56","","6","0","8996","33.8%","18","w","8541.98","8541.98","550.03","550.03","458.02","92.01","0.0","0.0","0.0","Dec-2018","274.25","Jan-2019","Dec-2018","0","56","1","Individual","","","","0","160","8996","0","0","0","0","66","0","","0","1","6920","34","26600","0","0","0","1","1799","13466","35.3","0","0","66","233","14","14","2","77","","14","56","2","2","4","3","4","1","6","15","4","6","0","0","0","0","83.3","0","0","0","26600","8996","20800","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","DirectPay","N","","","","","",""
"","","20000","20000","20000"," 36 months"," 6.67%","614.53","A","A2","General Manager","< 1 year","MORTGAGE","125000","Not Verified","Sep-2018","Current","n","","","house","Home buying","750xx","TX","15.55","1","Jul-2003","0","21","","24","0","5832","11.4%","36","w","18990.48","18990.48","1214.24","1214.24","1009.52","204.72","0.0","0.0","0.0","Dec-2018","614.53","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","63577","2","1","0","1","22","54244","","2","4","3048","17","51100","1","0","6","6","3027","36050","11.9","0","0","148","182","6","6","2","24","21","1","21","0","5","6","16","19","6","22","26","6","24","0","0","0","2","97.2","7.7","0","0","133998","63577","40900","78764","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","20000","20000","20000"," 36 months"," 7.84%","625.26","A","A4","Teacher","10+ years","MORTGAGE","48186","Not Verified","Sep-2018","Current","n","","","debt_consolidation","Debt consolidation","641xx","MO","24.68","0","Aug-1996","0","","","17","0","23822","58.8%","38","w","18506.50","18506.50","1803.14","1803.14","1493.50","309.64","0.0","0.0","0.0","Dec-2018","625.26","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","191246","0","9","0","2","16","14800","48","1","1","11800","54","40500","1","0","0","3","11250","13078","57.8","0","0","265","239","8","8","2","8","","16","","0","3","4","6","7","27","7","9","4","17","0","0","0","1","100","16.7","0","0","234182","38622","31000","30832","","","","","","","","","","","","N","","","","","","","","","","","","","","","DirectPay","N","","","","","",""
###Markdown
Load LendingClub data
###Code
import pandas as pd
df = pd.read_csv('LoanStats_2018Q3.csv', skiprows=1, skipfooter=2, engine='python')  #skipfooter needs the python engine
df.shape
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.head().T
df['hardship_payoff_balance_amount'].isnull().sum()/len(df) #99% values are missing
###Output
_____no_output_____
###Markdown
Work with strings
###Code
import numpy as np
def all_numeric(df):
return all((df.dtypes==np.number) | #if all are number or all are bool
(df.dtypes==bool))
def no_nulls(df):
    return not any(df.isnull().sum()) #don't return any nulls
def ready_for_sklearn(df):
    return all_numeric(df) and no_nulls(df) #these conditions are needed for scikit-learn
ready_for_sklearn(df)
all_numeric(df)
df.select_dtypes('object').info() #37 objects
df.int_rate.head() #str because of percents
def remove_percent(string):
return float(string.replace('%', ''))
#deal with missing values before applying functions, if any
df['int_rate'] = df['int_rate'].apply(remove_percent)
df.int_rate.head()
#Clean emp_title
#look at top 20 titles
df['emp_title'].value_counts().head(20) #redundant managers, owner, supervisor
#how often is emp_title null?
df['emp_title'].isnull().sum()
type(df['emp_title'][0])
#clean the title and handle missing values
def clean_title(x):
if isinstance(x, str):
return x.strip().title()
else:
return 'Unknown'
df['emp_title'] = df['emp_title'].apply(clean_title)
df['emp_title'].isnull().sum()
df['emp_title'].value_counts().head(20)
#create emp_title_manager
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
df['emp_title_manager'].value_counts()
###Output
_____no_output_____
###Markdown
Work with dates
###Code
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].describe()
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
df['issue_month'].sample(n=10).values
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],
infer_datetime_format=True)
(df['issue_d'] - df['earliest_cr_line']).head()
df['days_from_earliest_credit_to_issue'] = (
df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].head().values
[col for col in df if col.endswith('_d')]
for col in ['last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col],
infer_datetime_format=True)
###Output
_____no_output_____
###Markdown
ASSIGNMENT

- Replicate the lesson code.
- Convert the `term` column from string to integer.
- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.
- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
#making df[loan_status_is_great]
print(df['loan_status'].value_counts())
type(df['loan_status'].str.contains('Fully Paid'))
#as this is a Series it can be added as a new col without any more work
df['loan_status_is_great'] = df['loan_status'].str.contains('Fully Paid') | df['loan_status'].str.contains('Current')
df['loan_status_is_great'].value_counts()
print(df['loan_status'].str.contains('Fully Paid').sum()
+df['loan_status'].str.contains('Current').sum())
#confirmation True = Fully paid + Current
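#follow-up sketch (not part of the original solution): the assignment asks for the
#integers 1 and 0 rather than booleans, so the column can be cast if needed
df['loan_status_is_great'] = df['loan_status_is_great'].astype(int)
df['loan_status_is_great'].value_counts()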
#Converting term from string to int
def clean(s):  #avoid shadowing the built-in str
    return s.replace(' months', '')
#Series.replace does exact-value matching, so apply the string function element-wise
df.term = df.term.apply(clean)
df.term = df.term.astype(int)
print(df['term'].value_counts())
print(type(df.term[0]))
#Making df.last_payment_d_year and df.last_payment_d_month
df['last_pymnt_d'].value_counts()
df['last_payment_d_year'] = df['last_pymnt_d'].dt.year
df['last_payment_d_month'] = df['last_pymnt_d'].dt.month
print(df['last_payment_d_year'].value_counts())
print(df['last_payment_d_month'].value_counts())
###Output
2018.0 128048
Name: last_payment_d_year, dtype: int64
12.0 116465
11.0 7793
10.0 1594
9.0 1136
8.0 838
7.0 222
Name: last_payment_d_month, dtype: int64
###Markdown
STRETCH OPTIONS

You can do more with the LendingClub or Instacart datasets.

LendingClub options:
- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20.
- Take initiative and work on your own ideas!

Instacart options:
- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)
- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)
- Take initiative and work on your own ideas!

If necessary, uncomment and run the cells below to re-download and extract the data.
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
#%cd instacart_2017_05_01
#!ls -lh
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_

Make features

Objectives
- understand the purpose of feature engineering
- work with strings in pandas
- work with dates and times in pandas

Links
- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
- Python Data Science Handbook
  - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations
  - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series

Get LendingClub data
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q3.csv.zip
!unzip LoanStats_2018Q3.csv.zip
!head LoanStats_2018Q3.csv
!tail LoanStats_2018Q3.csv
###Output
"","","12000","12000","12000"," 36 months"," 14.03%","410.31","C","C2","Medical Support Staff","10+ years","OWN","53414","Source Verified","Jul-2018","Current","n","","","home_improvement","Home improvement","136xx","NY","25.68","0","Mar-1997","0","","","8","0","6527","44.4%","20","w","10618.01","10618.01","2037.52","2037.52","1381.99","655.53","0.0","0.0","0.0","Dec-2018","410.31","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","29869","0","2","0","2","13","23342","73","0","0","3977","64","14700","1","1","0","2","3734","7173","47.6","0","0","139","255","42","13","0","66","","18","","0","2","2","5","9","8","6","12","2","8","0","0","0","0","100","40","0","0","46792","29869","13700","32092","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 16.46%","176.93","C","C5","Labor Worker","3 years","MORTGAGE","57000","Not Verified","Jul-2018","Current","n","","","debt_consolidation","Debt consolidation","781xx","TX","33.09","0","Jul-2008","0","25","","11","0","7449","41.4%","47","w","4443.20","4443.20","875.51","875.51","556.80","318.71","0.0","0.0","0.0","Dec-2018","176.93","Jan-2019","Dec-2018","0","","1","Individual","","","","0","236","45088","1","5","3","4","7","37639","58","1","2","0","54","18000","3","10","1","6","4099","","","0","0","119","80","1","1","0","","","8","25","0","0","4","0","1","39","6","8","4","11","0","0","0","4","93.6","","0","0","83496","45088","0","65496","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 6.19%","152.55","A","A2","Host/cashier","4 years","RENT","22000","Not Verified","Jul-2018","Current","n","","","credit_card","Credit card refinancing","731xx","OK","11.67","0","Apr-2001","0","","","6","0","7118","23.3%","19","w","4359.64","4359.64","759.31","759.31","640.36","118.95","0.0","0.0","0.0","Dec-2018","152.55","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","7118","1","0","0","0","127","0","","1","1","3841","23","30500","1","0","0","1","1186","23382","23.3","0","0","168","206","3","3","0","3","","17","","0","3","3","6","10","4","6","15","3","6","0","0","0","1","100","0","0","0","30500","7118","30500","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","20000","20000","20000"," 36 months"," 15.49%","698.12","C","C4","Client manager","10+ years","MORTGAGE","80000","Verified","Jul-2018","Current","n","","","debt_consolidation","Debt consolidation","913xx","CA","22.14","0","Apr-1994","0","43","","27","0","28994","63%","47","w","17742.70","17742.70","3456.18","3456.18","2257.30","1198.88","0.0","0.0","0.0","Dec-2018","698.12","Jan-2019","Dec-2018","0","43","1","Individual","","","","0","0","314886","0","1","0","0","37","15336","61","1","9","5067","64","46000","1","6","0","10","11662","3359","83.8","0","0","129","186","10","10","3","10","43","13","43","3","8","22","8","15","6","24","36","22","27","0","0","0","1","93.6","75","0","0","349879","47229","20700","25295","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 11.05%","327.63","B","B4","Operator","5 years","RENT","60000","Not Verified","Jul-2018","Current","n","","","credit_card","Credit card refinancing","952xx","CA","10.72","0","Apr-2015","0","27","","8","0","2301","28.1%","12","w","8800.60","8800.60","1628.94","1628.94","1199.40","429.54","0.0","0.0","0.0","Dec-2018","327.63","Jan-2019","Dec-2018","0","27","1","Individual","","","","0","0","9572","0","2","1","1","12","7271","60","3","4","1624","47","8200","0","0","1","5","1197","3099","42.6","0","0","34","38","9","9","0","11","","9","","1","3","3","5","6","3","6","8","3","8","0","0","0","4","100","20","0","0","20377","9572","5400","12177","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","4000","4000","4000"," 36 months"," 16.46%","141.54","C","C5","jewelry","5 years","RENT","9600","Source Verified","Jul-2018","Current","n","","","debt_consolidation","Debt consolidation","925xx","CA","14.5","1","Nov-2014","0","15","","5","0","3545","45.4%","5","w","3554.58","3554.58","700.38","700.38","445.42","254.96","0.0","0.0","0.0","Dec-2018","141.54","Jan-2019","Dec-2018","0","","1","Individual","","","","0","0","3545","1","0","0","0","","0","","1","2","1515","45","7800","0","0","0","2","709","285","84.2","0","0","","43","1","1","0","33","","","15","0","1","3","1","1","0","5","5","3","5","0","0","0","1","80","100","0","0","7800","3545","1800","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
Total amount funded in policy code 1: 2063142975
Total amount funded in policy code 2: 823319310
###Markdown
Load LendingClub data

pandas documentation
- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.html#available-options)
###Code
import pandas as pd
df = pd.read_csv('LoanStats_2018Q3.csv', skiprows=1, skipfooter=2, engine='python')  # skipfooter needs the python engine
df.shape
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.head().T # .T transposes the df
# Some columns very sparse -- This column is 99% blank
df['hardship_payoff_balance_amount'].isnull().sum() / len(df)
###Output
_____no_output_____
###Markdown
Work with strings

For machine learning, we usually want to replace strings with numbers.
###Code
import numpy as np
def all_numeric(df):
return all ((df.dtypes==np.number) |
(df.dtypes==bool))
def no_nulls(df):
return not any(df.isnull().sum())
def ready_for_sklearn(df):
return all_numeric(df) and no_nulls(df)
# See which columns are numbers
df.dtypes==np.number
# Using all function to see if all columns are numbers
all(df.dtypes==np.number)
# Seeing if df is ready for ML -- it's not
ready_for_sklearn(df)
# Seeing if df is all numeric -- it's not
all_numeric(df)
###Output
_____no_output_____
###Markdown
We can get info about which columns have a dtype of object (string).
###Code
# Looking at all columns that are objects
df.select_dtypes('object').info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 128194 entries, 0 to 128193
Data columns (total 37 columns):
term 128194 non-null object
int_rate 128194 non-null object
grade 128194 non-null object
sub_grade 128194 non-null object
emp_title 114757 non-null object
emp_length 117807 non-null object
home_ownership 128194 non-null object
verification_status 128194 non-null object
issue_d 128194 non-null object
loan_status 128194 non-null object
pymnt_plan 128194 non-null object
purpose 128194 non-null object
title 128194 non-null object
zip_code 128194 non-null object
addr_state 128194 non-null object
earliest_cr_line 128194 non-null object
revol_util 128065 non-null object
initial_list_status 128194 non-null object
last_pymnt_d 128048 non-null object
next_pymnt_d 123298 non-null object
last_credit_pull_d 128190 non-null object
application_type 128194 non-null object
verification_status_joint 15729 non-null object
sec_app_earliest_cr_line 17785 non-null object
hardship_flag 128194 non-null object
hardship_type 13 non-null object
hardship_reason 13 non-null object
hardship_status 13 non-null object
hardship_start_date 13 non-null object
hardship_end_date 13 non-null object
payment_plan_start_date 13 non-null object
hardship_loan_status 13 non-null object
disbursement_method 128194 non-null object
debt_settlement_flag 128194 non-null object
debt_settlement_flag_date 5 non-null object
settlement_status 5 non-null object
settlement_date 5 non-null object
dtypes: object(37)
memory usage: 36.2+ MB
###Markdown
**Convert Int rate**
###Code
# looking at first 5 values of int_rate column -- they are strings
df.int_rate.head().values
###Output
_____no_output_____
###Markdown
Define a function to remove percent signs from strings and convert to floats
###Code
string = '17.97%' # Start with simple example
# string.replace('%', '')
# float(string.replace('%', '')) # wrap the above in float
def remove_percent(string):
return float(string.strip('%'))
remove_percent(string)
###Output
_____no_output_____
###Markdown
Apply the function to the int_rate column
###Code
# Changing int_rate column to float (re-running this cell raises an error, since the values are already floats)
df['int_rate'] = df['int_rate'].apply(remove_percent)
# Column is now floats
df['int_rate'].head()
###Output
_____no_output_____
###Markdown
**Clean emp_title**

Look at top 20 titles
###Code
# Looking at first 20 emp_title values (descending)
df['emp_title'].value_counts().head(20)
# Getting closer look at formatting for these titles. One 'Supervisor' has extra space at end
df['emp_title'].value_counts().head(20).index
###Output
_____no_output_____
###Markdown
How often is emp_title null?
###Code
df['emp_title'].isnull().sum()
###Output
_____no_output_____
###Markdown
Clean title and handle missing values
###Code
examples = ['owner', 'Supervisor ', ' Project manager', np.nan] # simple example
def clean_title(x):
if isinstance(x, str): # This checks if x is string-like
# if type(x) == str This checks if x is exactly string
return x.strip().title()
# elif np.nan:
# return 'Unknown'
else:
return 'Unknown'
for example in examples:
print(clean_title(example))
df['emp_title'] = df['emp_title'].apply(clean_title)
df['emp_title'].value_counts().head(20)
# To square int_rate, write like this:
df.int_rate ** 2
# Not like this:
# def square(x):
# return x ** 2
# df.int_rate.apply(square)
###Output
_____no_output_____
###Markdown
Create `emp_title_manager`

pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
# Boolean column, True if 'emp_title' contains 'Manager', otherwise False
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
# Look at value counts of new emp_title_manager column
df['emp_title_manager'].value_counts()
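# A small variation (sketch, not from the lesson): str.contains can ignore case and
# treat missing values as False, which would also catch lowercase titles like 'manager'
df['emp_title'].str.contains('manager', case=False, na=False).value_counts()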
# See that it's on the df
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Work with dates

pandas documentation
- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)
- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-date-components): "You can access these properties via the `.dt` accessor"
###Code
# See first few values of issue_d
df['issue_d'].head().values
# Change issue_d column to datetime format
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
# Look at new values
df['issue_d'].head().values
df['issue_d'].describe()
# Creating two new columns with just the year and month of issue_d
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
# Check new column with random sample
df['issue_month'].sample(n=10).values
# See values of earliest_cr_line column
df['earliest_cr_line'].head().values
# Change column to datetime format
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'], infer_datetime_format=True)
# Create new column by subtracting earliest_cr_line from issue_d, in days
df['days_from_earliest_credit_to_issue'] = (
df['issue_d'] - df['earliest_cr_line']).dt.days
# Looking at value of new column
df['days_from_earliest_credit_to_issue'].head().values
# List comprehension to see all columns that end in '_d'
[col for col in df if col.endswith('_d')]
# Taking the list above and using a for loop to convert each of those columns
# to datetime format
for col in ['issue_d', 'last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col], infer_datetime_format=True)
df['loan_status'].value_counts()
###Output
_____no_output_____
###Markdown
ASSIGNMENT

- Replicate the lesson code.
- Convert the `term` column from string to integer.
- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.
- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
df.head()
###Output
_____no_output_____
###Markdown
**Converting 'term' column from string to integer**
###Code
df.term.value_counts()
df.term.values
# Putting 'in months' in the term header for clarity
df = df.rename(index=str, columns={'term': 'term (in months)'})
test = '60 months' # Test example
def turn_to_int(string):
return int(string.replace('months', ''))
turn_to_int(test)
df['term (in months)'] = df['term (in months)'].apply(turn_to_int)
df['term (in months)'].values
###Output
_____no_output_____
###Markdown
**Making loan_status_is_great column**
###Code
df['loan_status'].value_counts()
test2 = 'Late (31-120 days)'
def status_great(string):
if string == 'Current':
return 1
elif string == 'Fully Paid':
return 1
else:
return 0
status_great(test2)
df['loan_status_is_great'] = df['loan_status'].apply(status_great)
df['loan_status_is_great'].value_counts()
df.head()
###Output
_____no_output_____
###Markdown
**Making last_pymnt_d_month and last_pymnt_d_year columns**
###Code
df['last_pymnt_d'].values
df['last_pymnt_d_month'] = df['last_pymnt_d'].dt.month
df['last_pymnt_d_year'] = df['last_pymnt_d'].dt.year
df.head()
df.shape
###Output
_____no_output_____
###Markdown
STRETCH OPTIONS

You can do more with the LendingClub or Instacart datasets.

LendingClub options:
- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20.
- Process the dataframe so that `ready_for_sklearn(df)` returns `True`. You can drop columns, or select the subset of numeric columns with no missing values. (Or you can try automating the process to handle missing values and convert objects to numbers!)
- Take initiative and work on your own ideas!

Instacart options:
- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)
- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)
- Take initiative and work on your own ideas!

You can uncomment and run the cells below to re-download and extract the Instacart data.
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____
###Markdown
**More LendingClub**
###Code
df.head()
###Output
_____no_output_____
###Markdown
**Converting revol_util column to floats and handling missing values**
###Code
df['revol_util'].values
df['revol_util'].isnull().sum()
test3 = '10.6%'
def turn_to_float(string):
    if isinstance(string, str):
        return float(string.strip('%'))
    else:
        # note: NaN != NaN, so a check like `string == np.nan` never matches;
        # just pass missing values through unchanged
        return np.nan
turn_to_float(test3)
df['revol_util'] = df['revol_util'].apply(turn_to_float)
df['revol_util'].isnull().sum()
df['revol_util'].mean()
df['revol_util'] = df['revol_util'].fillna(df['revol_util'].mean())  # note the (): fill with the mean value, not the method object
df['revol_util'].isnull().sum()
###Output
_____no_output_____
###Markdown
**Modifying emp_title column to replace titles with 'Other' if the title is not in the top 20. **
###Code
top_twenty = df['emp_title'].value_counts().head(20).reset_index()
top_twenty
test4 = 'Basketball Player'
def replace_with_other(string):
if string in top_twenty['index'].tolist():
return string
else:
return 'Other'
replace_with_other(test4)
df['emp_title'] = df['emp_title'].apply(replace_with_other)
df.head()
###Output
_____no_output_____
###Markdown
**Processing dataframe so that ready_for_sklearn(df) returns True.**
###Code
# import numpy as np
# def all_numeric(df):
# return all ((df.dtypes==np.number) |
# (df.dtypes==bool))
# def no_nulls(df):
# return not any(df.isnull().sum())
# def ready_for_sklearn(df):
# return all_numeric(df) and no_nulls(df)
df.shape
df['id'].isnull().sum()
df = df.drop('id', axis=1)
df['member_id'].isnull().sum()
df = df.drop('member_id', axis=1)
df['loan_amnt'].isnull().sum()
df['funded_amnt'].isnull().sum(), df['funded_amnt'].dtype
df['funded_amnt_inv'].isnull().sum(), df['funded_amnt_inv'].dtype
df['int_rate'].isnull().sum(), df['int_rate'].dtype
df['installment'].isnull().sum(), df['installment'].dtype
df['emp_length'].isnull().sum(), df['emp_length'].dtype
df['emp_length'].value_counts()
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_

Make features

Objectives
- understand the purpose of feature engineering
- work with strings in pandas
- work with dates and times in pandas

Links
- [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
- Python Data Science Handbook
  - [Chapter 3.10](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html), Vectorized String Operations
  - [Chapter 3.11](https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html), Working with Time Series

Get LendingClub data

[Source](https://www.lendingclub.com/info/download-data.action)
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
!tail LoanStats_2018Q4.csv
###Output
"","","5600","5600","5600"," 36 months"," 13.56%","190.21","C","C1","","n/a","RENT","15600","Not Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","836xx","ID","15.31","0","Aug-2012","0","","97","9","1","5996","34.5%","11","w","5083.61","5083.61","750.29","750.29","516.39","233.90","0.0","0.0","0.0","Feb-2019","190.21","Mar-2019","Feb-2019","0","","1","Individual","","","","0","0","5996","0","0","0","1","20","0","","0","2","3017","35","17400","1","0","0","3","750","4689","45.5","0","0","20","73","13","13","0","13","","20","","0","3","5","4","4","1","9","10","5","9","0","0","0","0","100","25","1","0","17400","5996","8600","0","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","23000","23000","23000"," 36 months"," 15.02%","797.53","C","C3","Tax Consultant","10+ years","MORTGAGE","75000","Source Verified","Oct-2018","Charged Off","n","","","debt_consolidation","Debt consolidation","352xx","AL","20.95","1","Aug-1985","2","22","","12","0","22465","43.6%","28","w","0.00","0.00","1547.08","1547.08","1025.67","521.41","0.0","0.0","0.0","Dec-2018","797.53","","Nov-2018","0","","1","Individual","","","","0","0","259658","4","2","3","3","6","18149","86","4","6","12843","56","51500","2","2","5","11","21638","26321","44.1","0","0","12","397","4","4","6","5","22","4","22","0","4","5","7","14","3","9","19","5","12","0","0","0","7","96.4","14.3","0","0","296500","40614","47100","21000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 15.02%","346.76","C","C3","security guard","5 years","MORTGAGE","38000","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","443xx","OH","13.16","3","Jul-1982","0","6","","11","0","5634","37.1%","16","w","9096.85","9096.85","1378.7","1378.70","903.15","475.55","0.0","0.0","0.0","Feb-2019","346.76","Mar-2019","Feb-2019","0","","1","Individual","","","","0","155","77424","0","1","0","0","34","200","10","1","1","1866","42","15200","2","0","0","2","7039","4537","50.1","0","0","34","434","11","11","3","11","6","17","6","0","3","5","5","6","1","8","11","5","11","0","0","0","1","73.3","40","0","0","91403","9323","9100","2000","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","5000","5000","5000"," 36 months"," 13.56%","169.83","C","C1","Payoff Clerk","10+ years","MORTGAGE","35360","Not Verified","Oct-2018","Current","n","","","debt_consolidation","Debt consolidation","381xx","TN","11.3","1","Jun-2006","0","21","","9","0","2597","27.3%","15","f","4538.94","4538.94","675.55","675.55","461.06","214.49","0.0","0.0","0.0","Feb-2019","169.83","Mar-2019","Feb-2019","0","","1","Individual","","","","0","1413","69785","0","2","0","1","16","2379","40","3","4","1826","32","9500","0","0","1","5","8723","1174","60.9","0","0","147","85","9","9","2","10","21","9","21","0","1","3","2","2","6","6","7","3","9","0","0","0","3","92.9","50","0","0","93908","4976","3000","6028","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","9750"," 36 months"," 11.06%","327.68","B","B3","","n/a","RENT","44400","Source Verified","Oct-2018","Current","n","","","credit_card","Credit card refinancing","980xx","WA","11.78","0","Oct-2008","2","40","","15","0","6269","13.1%","25","f","9044.84","8818.72","1295.36","1262.98","955.16","340.20","0.0","0.0","0.0","Feb-2019","327.68","Mar-2019","Feb-2019","0","53","1","Individual","","","","0","520","16440","3","1","1","1","2","10171","100","2","5","404","28","47700","0","3","5","6","1265","20037","2.3","0","0","61","119","1","1","0","1","","1","40","1","2","4","6","8","3","14","22","4","15","0","0","0","3","92","0","0","0","57871","16440","20500","10171","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
"","","10000","10000","10000"," 36 months"," 16.91%","356.08","C","C5","Key Accounts Manager","2 years","RENT","80000","Not Verified","Oct-2018","Current","n","","","other","Other","021xx","MA","17.72","1","Sep-2006","0","14","","17","0","1942","30.8%","31","w","9120.98","9120.98","1414.93","1414.93","879.02","535.91","0.0","0.0","0.0","Feb-2019","356.08","Mar-2019","Feb-2019","0","25","1","Individual","","","","0","0","59194","0","15","1","1","12","57252","85","0","0","1942","80","6300","0","5","0","1","3482","2058","48.5","0","0","144","142","40","12","0","131","30","","30","3","1","1","1","5","22","2","9","1","17","0","0","0","1","74.2","0","0","0","73669","59194","4000","67369","","","","","","","","","","","","N","","","","","","","","","","","","","","","Cash","N","","","","","",""
Total amount funded in policy code 1: 2050909275
Total amount funded in policy code 2: 820109297
###Markdown
Load LendingClub data

pandas documentation
- [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
- [`options.display`](https://pandas.pydata.org/pandas-docs/stable/options.html#available-options)
###Code
import pandas as pd
df = pd.read_csv('LoanStats_2018Q4.csv', skiprows=1, skipfooter=2, engine='python')
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
df.tail()
###Output
_____no_output_____
###Markdown
Work with strings

For machine learning, we usually want to replace strings with numbers.

We can get info about which columns have a datatype of "object" (strings).
###Code
df.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Convert `int_rate`
###Code
'13.56%'.strip('%')
type('13.56%'.strip('%'))
float('13.56%'.strip('%'))
type(float('13.56%'.strip('%')))
df['int_rate'] = df['int_rate'].str.strip('%').astype(float)
(df['int_rate'] / 100).head()
###Output
_____no_output_____
###Markdown
Define a function to remove percent signs from strings and convert to floats
###Code
string = '13.56%'
def remove_percent(string):
return float(string.strip('%'))
remove_percent(string)
###Output
_____no_output_____
###Markdown
Apply the function to the `int_rate` column

**As Ryan said during the lecture, this only works if you don't apply the above methods. If all cells are run, the following two lines will raise an AttributeError.**
###Code
df['int_rate'] = df['int_rate'].apply(remove_percent)
df['int_rate'].apply(remove_percent)
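# A re-run-safe variant (a sketch, not from the lecture): only strip the percent sign
# when the value is still a string, so applying it twice cannot raise an AttributeError
def remove_percent_safe(x):
    if isinstance(x, str):
        return float(x.strip('%'))
    return x  # already numeric (or NaN): leave it alone
df['int_rate'].apply(remove_percent_safe).head()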
###Output
_____no_output_____
###Markdown
Clean `emp_title`

Look at top 20 titles
###Code
df['emp_title'].value_counts().head(20)
###Output
_____no_output_____
###Markdown
How often is `emp_title` null?
###Code
df['emp_title'].isnull().sum()
###Output
_____no_output_____
###Markdown
Clean the title and handle missing values
* Capitalize
* Strip spaces
* Replace NaN with 'Unknown'
###Code
import numpy as np
examples = ['owner', 'Supervisor ',
' Project Manager', np.nan]
def clean_title(x):
if isinstance(x, str):
return x.strip().title()
else:
return 'Unknown'
[clean_title(x) for x in examples]
df['emp_title'] = df['emp_title'].apply(clean_title)
df['emp_title'].head(10)
df['emp_title'].value_counts().head(20)
###Output
_____no_output_____
###Markdown
Create `emp_title_manager`

pandas documentation: [`str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)
###Code
df['emp_title_manager'] = df['emp_title'].str.contains('Manager')
df['emp_title_manager'].value_counts(normalize=True)
df.groupby('emp_title_manager')['int_rate'].mean()
df['emp_title'].nunique()
df['emp_title'].value_counts()
df.isnull().sum().sort_values(ascending=False) / len(df)
###Output
_____no_output_____
###Markdown
Work with dates

pandas documentation
- [to_datetime](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html)
- [Time/Date Components](https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-date-components): "You can access these properties via the `.dt` accessor"
###Code
df['issue_d'].head().values
df['issue_d'].describe()
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].head().values
df['issue_d'].describe()
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
df.shape
df['issue_month'].sample(n=10).values
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],
infer_datetime_format=True)
df['earliest_cr_line'].head()
df['days_from_earliest_credit_to_issue'] = (
df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].describe()
25171 / 365
[col for col in df if col.endswith('_d')]
for col in ['last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d']:
df[col] = pd.to_datetime(df[col], infer_datetime_format=True)
df.describe(include='datetime')
###Output
_____no_output_____
###Markdown
ASSIGNMENT

- Replicate the lesson code.
- Convert the `term` column from string to integer.
- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.
- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
# Convert term column from string to integer
df['term'] = df['term'].str.strip(' months').astype(int)
# Just checking out this column...
df['loan_status'].head()
# Create new column loan_status_is_great
df['loan_status_is_great'] = df['loan_status'].str.contains('Current|Fully Paid')
# Encode True/False as 1/0
df['loan_status_is_great'] = df['loan_status_is_great'].astype(int)
df['loan_status_is_great'].head(10000)
# Examining data
df['last_pymnt_d'].head(100)
# Creating two new columns
df['last_pymnt_d_month'] = df['last_pymnt_d'].dt.month
df['last_pymnt_d_year'] = df['last_pymnt_d'].dt.year
df['last_pymnt_d_month'].head()
df['last_pymnt_d_year'].head()
###Output
_____no_output_____
###Markdown
STRETCH OPTIONS

You can do more with the LendingClub or Instacart datasets.

LendingClub options:
- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20.
- Take initiative and work on your own ideas!

Instacart options:
- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)
- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)
- Take initiative and work on your own ideas!

You can uncomment and run the cells below to re-download and extract the Instacart data.
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____ |
ResourceWatchCode/Notebooks for Exploring Specific Datasets/CDIAC Play.ipynb | ###Markdown
Load Raw Data from S3
###Code
# These four files are derived from the original CDIAC data sheet
# They were initially cleaned (using code outlined at the bottom of this notebook)
# And then uploaded to Amazon S3
file_names = ['Territorial Emissions GCB',
'Consumption Emissions GCB',
'Emissions Transfers GCB',
'Territorial Emissions CDIAC']
# Initialize a dictionary to store the raw data
cdiac_raw_data = {}
# Load each of the raw datasets from S3
# Reference: https://stackoverflow.com/questions/37703634/how-to-import-a-text-file-on-aws-s3-into-pandas-without-writing-to-disk
for file in file_names:
cdiac_raw_data[file] = read_from_S3(s3_bucket, RAW_DATA+file+".csv")
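# A minimal sketch (assumption only) of what helpers like read_from_S3 and write_to_S3
# might look like using boto3; the real helpers are defined earlier in this notebook,
# so these use different names to avoid shadowing them (pandas is imported as pd above).
import io
import boto3
def read_csv_from_s3_sketch(bucket, key, **kwargs):
    obj = boto3.client('s3').get_object(Bucket=bucket, Key=key)
    return pd.read_csv(io.BytesIO(obj['Body'].read()), **kwargs)
def write_csv_to_s3_sketch(frame, bucket, key):
    buffer = io.StringIO()
    frame.to_csv(buffer)
    boto3.client('s3').put_object(Bucket=bucket, Key=key, Body=buffer.getvalue().encode('utf-8'))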
#cdiac_raw_data["Territorial Emissions GCB"].head()
#cdiac_raw_data["Consumption Emissions GCB"].head()
#cdiac_raw_data["Emissions Transfers GCB"].head()
#cdiac_raw_data["Territorial Emissions CDIAC"].head()
###Output
_____no_output_____
###Markdown
Convert raw data to pct_change data for territorial and consumption emissions, load to S3
###Code
# Convert raw data to percent change by year from 2000 forward
territorial_emissions_abs_raw = cdiac_raw_data["Territorial Emissions GCB"]
territory_gcb_pct_change = territorial_emissions_abs_raw.loc[1999:2015].transpose().pct_change(axis=1).loc[:,2000:]
territory_gcb_abs_val = territorial_emissions_abs_raw.loc[2000:2015].transpose()
consumption_emissions_abs_raw = cdiac_raw_data["Consumption Emissions GCB"]
consumption_gcb_pct_change = consumption_emissions_abs_raw.loc[1999:2015].transpose().pct_change(axis=1).loc[:,2000:]
consumption_gcb_abs_val = consumption_emissions_abs_raw.loc[2000:2015].transpose()
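# A tiny illustration (toy numbers, not CDIAC data) of what pct_change(axis=1) does:
# each value becomes its relative change versus the previous year column.
toy = pd.DataFrame({1999: [100.0, 50.0], 2000: [110.0, 45.0], 2001: [121.0, 54.0]},
                   index=['Country A', 'Country B'])
toy.pct_change(axis=1)  # 2000 column: 0.10 and -0.10; 2001 column: 0.10 and 0.20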
# Upload these percent change figures to S3
write_to_S3(territory_gcb_pct_change, s3_bucket, PROCESSED_DATA + \
"Territorial Emissions GCB percent changes 2000-2015.csv")
write_to_S3(territory_gcb_abs_val, s3_bucket, PROCESSED_DATA + \
"Territorial Emissions GCB absolute values 2000-2015.csv")
write_to_S3(consumption_gcb_pct_change, s3_bucket, PROCESSED_DATA + \
"Consumption Emissions GCB percent changes 2000-2015.csv")
write_to_S3(consumption_gcb_abs_val, s3_bucket, PROCESSED_DATA + \
"Consumption Emissions GCB absolute values 2000-2015.csv")
###Output
_____no_output_____
###Markdown
Download Conversions used to align CDIAC, World Bank, and ISO3 country designations
###Code
# CDIAC names to World Bank names
cdiac_to_wb_name_conversion = read_from_S3(s3_bucket, CONVERSIONS+"CDIAC to World Bank name conversion.csv")
# World Bank names to ISO3 codes
wb_name_to_iso3_conversion = read_from_S3(s3_bucket, CONVERSIONS+"World Bank to ISO3 name conversion.csv")
###Output
_____no_output_____
###Markdown
Create final data for the D3 application by adding ISO3 codes to the CDIAC pct change data
###Code
# Download pct_change data from S3
territory_gcb_pct_change = read_from_S3(s3_bucket, PROCESSED_DATA+"Territorial Emissions GCB percent changes 2000-2015.csv")
consumption_gcb_pct_change = read_from_S3(s3_bucket, PROCESSED_DATA+"Consumption Emissions GCB percent changes 2000-2015.csv")
territory_gcb_abs_val = read_from_S3(s3_bucket, PROCESSED_DATA+"Territorial Emissions GCB absolute values 2000-2015.csv")
consumption_gcb_abs_val = read_from_S3(s3_bucket, PROCESSED_DATA+"Consumption Emissions GCB absolute values 2000-2015.csv")
dfs = {"territory_pct":territory_gcb_pct_change,
"consumption_pct":consumption_gcb_pct_change,
"territory_abs":territory_gcb_abs_val,
"consumption_abs":consumption_gcb_abs_val}
# Name for Congo didn't match in the CDIAC data and crosswalk file
def replace_congo(name):
if name == "Congo":
return("Congo (Rep)")
else:
return(name)
# Add the wb_name to each dataframe
def fetch_name(name):
try:
return(cdiac_to_wb_name_conversion.loc[name][0])
except:
return(np.nan)
def add_iso(name):
try:
return(wb_name_to_iso3_conversion.loc[name,"ISO"])
except:
return(np.nan)
def create_summary_values_CDIAC(row, is_consumption_data=True):
if is_consumption_data:
val = row["2014"] - row["2000"]
return(val)
else:
val = row["2015"] - row["2000"]
return(val)
for df_name, df in dfs.items():
print(df_name)
df.index = list(map(replace_congo, df.index))
df["Country Name"] = list(map(fetch_name, df.index))
df = df.loc[pd.notnull(df["Country Name"])]
df = df.set_index("Country Name")
df["ISO"] = list(map(add_iso, df.index))
if("consumption" in df_name):
df["2000-2014"] = df.apply(lambda row: create_summary_values_CDIAC(row, True), axis=1)
else:
df["2000-2015"] = df.apply(lambda row: create_summary_values_CDIAC(row, False), axis=1)
dfs[df_name] = df
# territory_gcb_pct_change.index = list(map(replace_congo, territory_gcb_pct_change.index))
# consumption_gcb_pct_change.index = list(map(replace_congo, consumption_gcb_pct_change.index))
# territory_gcb_pct_change["Country Name"] = list(map(fetch_name, territory_gcb_pct_change.index)) #apply(lambda row: fetch_name(row.name), axis=1)
# consumption_gcb_pct_change["Country Name"] = list(map(fetch_name, consumption_gcb_pct_change.index)) # consumption_gcb_pct_change.apply(lambda row: fetch_name(row.name), axis=1)
# # Only keep the CDIAC data where there is a matching world bank country
# territory_gcb_pct_change = territory_gcb_pct_change.loc[pd.notnull(territory_gcb_pct_change["Country Name"])]
# consumption_gcb_pct_change = consumption_gcb_pct_change.loc[pd.notnull(consumption_gcb_pct_change["Country Name"])]
# # Set index to be World Bank name
# territory_gcb_pct_change = territory_gcb_pct_change.set_index("Country Name")
# consumption_gcb_pct_change = consumption_gcb_pct_change.set_index("Country Name")
# # Use wb_name to
# territory_gcb_pct_change["ISO"] = list(map(add_iso, territory_gcb_pct_change.index))
# consumption_gcb_pct_change["ISO"] = list(map(add_iso, consumption_gcb_pct_change.index))
# All countries have an assigned ISO if these come up as empty dataframes
print(dfs["territory_pct"].loc[pd.isnull(dfs["territory_pct"]["ISO"])])
print(dfs["consumption_pct"].loc[pd.isnull(dfs["consumption_pct"]["ISO"])])
print(dfs["territory_abs"].loc[pd.isnull(dfs["territory_abs"]["ISO"])])
print(dfs["consumption_abs"].loc[pd.isnull(dfs["consumption_abs"]["ISO"])])
# Export final files
write_to_S3(dfs["territory_pct"], s3_bucket, FINAL_DATA + "Territory Emissions GCB percent changes with ISO3 2000-2015 plus summary data.csv")
write_to_S3(dfs["consumption_pct"], s3_bucket, FINAL_DATA + "Consumption Emissions GCB percent changes with ISO3 2000-2015 plus summary data.csv")
write_to_S3(dfs["territory_abs"], s3_bucket, FINAL_DATA + "Territory Emissions GCB absolute values with ISO3 2000-2015 plus summary data.csv")
write_to_S3(dfs["consumption_abs"], s3_bucket, FINAL_DATA + "Consumption Emissions GCB absolute values with ISO3 2000-2015 plus summary data.csv")
# Territory or Consumption?
emissions_type = "Territory"
# absolute values or percent changes?
metric = "absolute values"
df = read_from_S3(s3_bucket, FINAL_DATA + \
"{} Emissions GCB {} with ISO3 2000-2015 plus summary data.csv".format(emissions_type,metric) \
, index_col="ISO")
df
###Output
_____no_output_____
###Markdown
Process GDP and other WB indicator Data
###Code
data_names_and_codes = {'EG.ELC.ACCS.ZS': 'Access to electricity (% of population)',
'EG.FEC.RNEW.ZS': 'Renewable energy consumption (% of total final energy consumption)',
'IT.NET.USER.ZS': 'Individuals using the Internet (% of population)',
'NE.CON.PRVT.PC.KD': 'Household final consumption expenditure per capita (constant 2010 US$)',
'NV.IND.TOTL.KD': 'Industry, value added (constant 2010 US$)',
'NY.GDP.TOTL.RT.ZS': 'Total natural resources rents (% of GDP)',
'SG.GEN.PARL.ZS': 'Proportion of seats held by women in national parliaments (%)',
'SL.EMP.TOTL.SP.ZS': 'Employment to population ratio, 15+, total (%) (modeled ILO estimate)',
'SM.POP.NETM': 'Net migration',
'SP.DYN.LE00.IN': 'Life expectancy at birth, total (years)',
'SP.URB.TOTL.IN.ZS': 'Urban population (% of total)',
'TM.VAL.MRCH.CD.WT': 'Merchandise imports (current US$)',
'NY.GDP.MKTP.CD': 'GDP (current US$)'}
column_long_name_to_short_name = {
'Renewable energy consumption (% of total final energy consumption)': 'renewable_energy_consumption_of_total_final_energy_consumpti',
'Household final consumption expenditure per capita (constant 2010 US$)': 'household_final_consumption_expenditure_per_capita_constant_20',
'Merchandise imports (current US$)': 'merchandise_imports_current_us_tm_val_mrch_cd_wt',
'Industry, value added (constant 2010 US$)': 'industry_value_added_constant_2010_us_nv_ind_totl_kd',
'Access to electricity (% of population)': 'access_to_electricity_of_population_eg_elc_accs_zs',
'Urban population (% of total)': 'urban_population_of_total_sp_urb_totl_in_zs',
'Employment to population ratio, 15+, total (%) (modeled ILO estimate)': 'employment_to_population_ratio_15_total_modeled_ilo_est',
'Total natural resources rents (% of GDP)': 'total_natural_resources_rents_of_gdp_ny_gdp_totl_rt_zs',
'Life expectancy at birth, total (years)': 'life_expectancy_at_birth_total_years_sp_dyn_le00_in',
'Net migration': 'net_migration_sm_pop_netm',
'Proportion of seats held by women in national parliaments (%)': 'proportion_of_seats_held_by_women_in_national_parliaments',
'Individuals using the Internet (% of population)': 'individuals_using_the_internet_of_population_it_net_user_z',
'GDP (current US$)': 'GDP'
}
series_code_to_data_viz_name = {}
for key, value in data_names_and_codes.items():
series_code_to_data_viz_name[key] = column_long_name_to_short_name[value]
series_code_to_data_viz_name
indicators = series_code_to_data_viz_name
# ['EG.FEC.RNEW.ZS', 'NE.CON.PRVT.PC.KD', 'TM.VAL.MRCH.CD.WT',
# 'NV.IND.TOTL.KD', 'EG.ELC.ACCS.ZS', 'SP.URB.TOTL.IN.ZS',
# 'SL.EMP.TOTL.SP.ZS', 'NY.GDP.TOTL.RT.ZS', 'SP.DYN.LE00.IN',
# 'SM.POP.NETM', 'SG.GEN.PARL.ZS', 'IT.NET.USER.ZS']
all_world_bank_data = pd.DataFrame(columns=['1999', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007',
'2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016',
'ISO', 'Indicator'])
for indicator in indicators:
# Results are paginated
res = req.get("http://api.worldbank.org/countries/all/indicators/{}?date=1999:2016&format=json&per_page=10000".format(indicator))
data = pd.io.json.json_normalize(res.json()[1])
data = data[["country.value", "date", "value"]]
value_name = series_code_to_data_viz_name[indicator]
data.columns = ["Country Name", "Year", value_name]
data = data.pivot(index="Country Name", columns="Year", values=value_name).astype(float)
data["ISO"] = list(map(add_iso, data.index))
data = data.loc[pd.notnull(data["ISO"])]
data["Indicator"] = value_name
all_world_bank_data = all_world_bank_data.append(data)
all_world_bank_data.index.name = "Country Name"
all_world_bank_data.reset_index(inplace=True)
all_world_bank_data.columns.name = ""
reverse_map = {v: k for k, v in column_long_name_to_short_name.items()}
def create_summary_values_World_Bank(row):
#print(row)
indicator = reverse_map[row["Indicator"]]
if indicator == 'Renewable energy consumption (% of total final energy consumption)':
val = row["2014"] - row["2000"]
return(val)
elif indicator == 'GDP':
val = row["2015"] - row["2000"]
return(val)
elif indicator == 'Household final consumption expenditure per capita (constant 2010 US$)':
val = row["2015"]
return(val)
elif indicator == 'Merchandise imports (current US$)':
val = row["2015"] - row["2000"]
return(val)
elif indicator == 'Industry, value added (constant 2010 US$)':
val = row["2015"] - row["2000"]
return(val)
elif indicator == 'Access to electricity (% of population)':
val = row["2014"]
return(val)
elif indicator == 'Urban population (% of total)':
val = row["2015"]
return(val)
elif indicator == 'Employment to population ratio, 15+, total (%) (modeled ILO estimate)':
val = row["2015"] - row["2000"]
return(val)
elif indicator == 'Total natural resources rents (% of GDP)':
val = row["2015"]
return(val)
elif indicator == 'Life expectancy at birth, total (years)':
val = row["2015"]
return(val)
elif indicator == 'Net migration':
val = row["2012"]
return(val)
elif indicator == 'Proportion of seats held by women in national parliaments (%)':
val = row["2015"]
return(val)
elif indicator == 'Individuals using the Internet (% of population)':
val = row["2015"]
return(val)
all_world_bank_data["2000-2015"] = all_world_bank_data.apply(create_summary_values_World_Bank, axis=1)
all_world_bank_data.head()
write_to_S3(all_world_bank_data, s3_bucket, PROCESSED_DATA + "World Bank Data with ISO3, 1999-2016 with 2000-2015 Summary Values.csv")
###Output
_____no_output_____
###Markdown
Calculate Index Values
###Code
# Calculating index values
# formula = (1 – ΔCO2)*(1 + ΔGDP) - ΔCO2 + ΔGDP
world_bank_data = read_from_S3(s3_bucket, PROCESSED_DATA + "World Bank Data with ISO3, 1999-2016 with 2000-2015 Summary Values.csv")
tuples = list(zip(*[world_bank_data["Indicator"],world_bank_data["Country Name"]]))
multi_index = pd.MultiIndex.from_tuples(tuples, names=["Indicator", "Country Name"])
world_bank_data.index = multi_index
gdp_data = world_bank_data.loc["GDP"]
gdp_data.set_index(["ISO"], inplace=True)
year_columns = [str(yr) for yr in range(1999,2016)]
gdp_data_just_years = gdp_data[year_columns]
gdp_per_change_data = gdp_data_just_years.pct_change(axis=1)
#print(gdp_per_change_data.head())
def calc_index(co2, gdp):
return((1-co2)*(1+gdp) - co2 + gdp)
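# Quick sanity check of the index formula with hypothetical numbers (a sketch):
# a country that cut CO2 by 10% (co2 = -0.10) while growing GDP by 5% (gdp = 0.05)
# scores (1 - (-0.10))*(1 + 0.05) - (-0.10) + 0.05 = 1.305
example_index = calc_index(-0.10, 0.05)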
# import CO2 change
## Territorial
territory_emissions = read_from_S3(s3_bucket, FINAL_DATA + \
"Territory Emissions GCB percent changes with ISO3 2000-2015.csv", index_col = "ISO")
## Consumption
consumption_emissions = read_from_S3(s3_bucket, FINAL_DATA + \
"Consumption Emissions GCB percent changes with ISO3 2000-2015.csv", index_col = "ISO")
# https://stackoverflow.com/questions/22149584/what-does-axis-in-pandas-mean
territory_emissions_gdp_index = calc_index(territory_emissions.drop("Country Name", axis=1), gdp_per_change_data)
consumption_emissions_gdp_index = calc_index(consumption_emissions.drop("Country Name", axis=1), gdp_per_change_data)
write_to_S3(territory_emissions_gdp_index, s3_bucket, FINAL_DATA + \
"ICGGD calculated with Territory Emissions.csv")
write_to_S3(consumption_emissions_gdp_index, s3_bucket, FINAL_DATA + \
"ICGGD calculated with Consumption Emissions.csv")
### FOR HISTORICAL PURPOSES - I USED THE BELOW TO FORMAT THE DATA ###
os.chdir("/Users/nathansuberi/Desktop/WRI_Programming/Py_Scripts/Data Packs/Materials for Nate")
# Reference from above:
!mkdir temp
os.chdir("temp")
# In order for this to work, the sheet needs to be public
# Or "On, anyone with link can access"
# CDIAC Data
!curl "https://docs.google.com/spreadsheets/d/1vd7NWFmpXJsHNNERowemLEvSoQkpKYFF-AxshdQ3GIE/export#gid=318056468?format=xls" > cdiac_data.xls
dest = os.getcwd()
cdiac_data = pd.ExcelFile(dest + "/cdiac_data.xls")
os.chdir("..")
!rm -r temp
# cdiac_data
sheet_names = cdiac_data.sheet_names
cdiac_dataframes = {}
for name in sheet_names:
cdiac_dataframes[name] = cdiac_data.parse(name)
##### When you have the excel spreadsheet on your local system,
# access a specific sheet like so:
#list_names = cdiac_data.sheet_names
#zipped = zip(list_names, [0]*len(list_names))
#sheet_names = dict(zipped)
#print(sheet_names)
# For some strange reason, these only work
# if I don't make the dict above
#a, b = zip(*zipped)
#print(a)
#print(b)
cdiac_raw_data = {}
curr = "Territorial Emissions GCB"
tmp = cdiac_dataframes[curr]
# Remove first informational rows
tmp = tmp.iloc[14:tmp.shape[0],:]
# Replace nan in first row with "Year", set new index
new_index = ["Year"] + list(tmp.index)[1:]
tmp.index = new_index
# Make columns equal to country names
tmp.columns = tmp.iloc[0]
# Drop the row that had the country names
tmp = tmp.drop(["Year"])
# Drops all rows and columns that are completely null
tmp = tmp.dropna(axis=0, how="all")
tmp = tmp.dropna(axis=1, how="all")
# Remove three nan columns at end
# Remove summary rows at bottom
tmp = tmp.iloc[:-2,:-3]
tmp
cdiac_raw_data[curr] = tmp
curr = "Consumption Emissions GCB"
tmp = cdiac_dataframes[curr]
# Remove first informational rows
tmp = tmp.iloc[7:tmp.shape[0],:]
# Replace nan in first row with "Year", set new index
new_index = ["Year"] + list(tmp.index)[1:]
tmp.index = new_index
# Make columns equal to country names
tmp.columns = tmp.iloc[0]
# Drop the row that had the country names
tmp = tmp.drop(["Year"])
# Drops all rows and columns that are completely null
tmp = tmp.dropna(axis=0, how="all")
tmp = tmp.dropna(axis=1, how="all")
# Remove three nan columns at end
# Remove summary rows at bottom
tmp = tmp.iloc[:-2,:-3]
tmp
cdiac_raw_data[curr] = tmp
curr = "Emissions Transfers GCB"
tmp = cdiac_dataframes[curr]
# Remove first informational rows
tmp = tmp.iloc[7:tmp.shape[0],:]
# Replace nan in first row with "Year", set new index
new_index = ["Year"] + list(tmp.index)[1:]
tmp.index = new_index
# Make columns equal to country names
tmp.columns = tmp.iloc[0]
# Drop the row that had the country names
tmp = tmp.drop(["Year"])
# Drops all rows and columns that are completely null
tmp = tmp.dropna(axis=0, how="all")
tmp = tmp.dropna(axis=1, how="all")
# No need to remove anything
# tmp = tmp.iloc[:,:]
tmp
cdiac_raw_data[curr] = tmp
curr = "Territorial Emissions CDIAC"
tmp = cdiac_dataframes[curr]
# Remove first informational rows
tmp = tmp.iloc[13:tmp.shape[0],:]
# Replace nan in first row with "Year", set new index
new_index = ["Year"] + list(tmp.index)[1:]
tmp.index = new_index
# Make columns equal to country names
tmp.columns = tmp.iloc[0]
# Drop the row that had the country names
tmp = tmp.drop(["Year"])
# Drops all rows and columns that are completely null
tmp = tmp.dropna(axis=0, how="all")
tmp = tmp.dropna(axis=1, how="all")
# Remove one nan columns at end
# Remove 8 summary rows from end
tmp = tmp.iloc[:-8,:-1]
tmp
cdiac_raw_data[curr] = tmp
file_dest = os.getcwd()
#pickle.dump(cdiac_dataframes_clean, open(file_dest + "/clean_cdiac_dataframes.pkl", 'wb'))
for file in cdiac_raw_data:
cdiac_raw_data[file].to_csv(file_dest+"/"+file+".csv")
# Original processing for the country name crosswalk
# Align the CDIAC and World Bank Country Names
country_names = pd.ExcelFile("/Users/nathansuberi/Downloads/country name crosswalk.xlsx").parse("Sheet1")
# Take the two slim sets that match with each other
slim_set = country_names.drop(["Country Names", "Unnamed: 1", "Unnamed: 3", "Unnamed: 5", "Unnamed: 6"], axis=1)
slim_set.columns = ["cdiac_names", "wb_names"]
slim_set = slim_set.iloc[3:-51]
slim_set.set_index("cdiac_names", inplace=True)
# See Bahamas for an example of a difference
print("Total number of countries:", len(slim_set), "\n")
print("Examples of some country names which differ:\n")
print(slim_set.iloc[38:42])
# Upload CDIAC to World Bank name conversions to RAW_DATA
csv_buffer = io.StringIO()
slim_set.to_csv(csv_buffer)
s3_resource.Object(s3_bucket, CONVERSIONS + \
"CDIAC to World Bank name conversion.csv").put(Body=csv_buffer.getvalue())
# Prepare World Bank to ISO3 name conversions
isos = pd.read_csv("/Users/nathansuberi/Desktop/WRI_Programming/world_bank_isos.csv", sep="\n", header=None)
isos = list(isos[0])
pairs = []
for ix, val in enumerate(isos):
if ix%3==0:
key = val
elif ix%3==1:
value = val
pairs.append([key,value])
real_isos = pd.DataFrame(pairs, columns=["Country", "ISO"])
real_isos.set_index("Country", inplace=True)
def replace_iso_names(name):
if name == "Brunei":
return("Brunei Darussalam")
elif name == "Cape Verde":
return("Cabo Verde")
elif name == "Ethiopia (excludes Eritrea)":
return("Ethiopia")
elif name == "Hong Kong, China":
return("Hong Kong SAR, China")
elif name == "Macao":
return("Macao SAR, China")
elif name == "Venezuela":
return("Venezuela, RB")
else:
return(name)
real_isos.index = map(replace_iso_names, real_isos.index)
real_isos.loc["Montenegro"] = "MNE"
real_isos.loc["Serbia"] = "SRB"
real_isos.loc["Romania"] = "ROM"
# Upload World Bank to ISO3 name conversions to RAW_DATA
csv_buffer = io.StringIO()
real_isos.to_csv(csv_buffer)
s3_resource.Object(s3_bucket, CONVERSIONS + \
"World Bank to ISO3 name conversion.csv").put(Body=csv_buffer.getvalue())
###Output
_____no_output_____
###Markdown
Investigate which countries are represented in the data, naming differences
###Code
def make_lower(array):
return([name.lower() for name in array])
subjects1 = make_lower(cdiac_raw_data["Territorial Emissions GCB"].columns.sort_values().values)
subjects2 = make_lower(cdiac_raw_data["Consumption Emissions GCB"].columns.sort_values().values)
subjects3 = make_lower(cdiac_raw_data["Emissions Transfers GCB"].columns.sort_values().values)
subjects4 = make_lower(cdiac_raw_data["Territorial Emissions CDIAC"].columns.sort_values().values)
print(len(subjects1))
print(len(subjects2))
print(len(subjects3))
print(len(subjects4))
# From subjects1
print("\nDifferences between subjects1 and others\n")
print("Not in 2:\n", [name for name in subjects1 if name not in subjects2])
print("Not in 3:\n", [name for name in subjects1 if name not in subjects3])
print("Not in 4:\n", [name for name in subjects1 if name not in subjects4])
# From subjects2
print("\nDifferences between subjects2 and others\n")
print("Not in 1:\n", [name for name in subjects2 if name not in subjects1])
print("Not in 3:\n", [name for name in subjects2 if name not in subjects3])
print("Not in 4:\n", [name for name in subjects2 if name not in subjects4])
# From subjects3
print("\nDifferences between subjects3 and others\n")
print("Not in 1:\n", [name for name in subjects3 if name not in subjects1])
print("Not in 2:\n", [name for name in subjects3 if name not in subjects2])
print("Not in 4:\n", [name for name in subjects3 if name not in subjects4])
# From subjects4
print("\nDifferences between subjects4 and others\n")
print("Not in 1:\n", [name for name in subjects4 if name not in subjects1])
print("Not in 2:\n", [name for name in subjects4 if name not in subjects2])
print("Not in 3:\n", [name for name in subjects4 if name not in subjects3])
###Output
235
232
135
235
Differences between subjects1 and others
Not in 2:
['bunkers', 'statistical difference', 'world']
Not in 3:
['afghanistan', 'algeria', 'andorra', 'angola', 'anguilla', 'antigua and barbuda', 'aruba', 'bahamas', 'barbados', 'belize', 'bermuda', 'bhutan', 'bonaire, saint eustatius and saba', 'bosnia and herzegovina', 'british virgin islands', 'burundi', 'cape verde', 'cayman islands', 'central african republic', 'chad', 'comoros', 'congo', 'cook islands', 'cuba', 'curaçao', 'democratic republic of the congo', 'djibouti', 'dominica', 'equatorial guinea', 'eritrea', 'faeroe islands', 'falkland islands (malvinas)', 'fiji', 'french guiana', 'french polynesia', 'gabon', 'gambia', 'gibraltar', 'greenland', 'grenada', 'guadeloupe', 'guinea-bissau', 'guyana', 'haiti', 'iceland', 'iraq', 'kiribati', 'lebanon', 'lesotho', 'liberia', 'libya', 'liechtenstein', 'macao', 'macedonia (republic of)', 'maldives', 'mali', 'marshall islands', 'martinique', 'mauritania', 'micronesia (federated states of)', 'moldova', 'montenegro', 'montserrat', 'myanmar', 'nauru', 'new caledonia', 'niger', 'niue', 'north korea', 'occupied palestinian territory', 'palau', 'papua new guinea', 'réunion', 'saint helena', 'saint kitts and nevis', 'saint lucia', 'saint pierre and miquelon', 'saint vincent and the grenadines', 'samoa', 'sao tome and principe', 'serbia', 'seychelles', 'sierra leone', 'sint maarten (dutch part)', 'solomon islands', 'somalia', 'south sudan', 'sudan', 'suriname', 'swaziland', 'syria', 'tajikistan', 'timor-leste', 'tonga', 'turkmenistan', 'turks and caicos islands', 'uzbekistan', 'vanuatu', 'wallis and futuna islands', 'yemen']
Not in 4:
['antigua and barbuda', 'bolivia', 'bonaire, saint eustatius and saba', 'bosnia and herzegovina', 'brunei darussalam', 'cameroon', 'china', 'curaçao', "côte d'ivoire", 'democratic republic of the congo', 'france', 'guinea-bissau', 'hong kong', 'iran', 'italy', 'laos', 'libya', 'macao', 'macedonia (republic of)', 'micronesia (federated states of)', 'moldova', 'myanmar', 'north korea', 'réunion', 'saint kitts and nevis', 'saint pierre and miquelon', 'saint vincent and the grenadines', 'sao tome and principe', 'sint maarten (dutch part)', 'south korea', 'south sudan', 'sudan', 'syria', 'tanzania', 'timor-leste', 'usa']
Differences between subjects2 and others
Not in 1:
[]
Not in 3:
['afghanistan', 'algeria', 'andorra', 'angola', 'anguilla', 'antigua and barbuda', 'aruba', 'bahamas', 'barbados', 'belize', 'bermuda', 'bhutan', 'bonaire, saint eustatius and saba', 'bosnia and herzegovina', 'british virgin islands', 'burundi', 'cape verde', 'cayman islands', 'central african republic', 'chad', 'comoros', 'congo', 'cook islands', 'cuba', 'curaçao', 'democratic republic of the congo', 'djibouti', 'dominica', 'equatorial guinea', 'eritrea', 'faeroe islands', 'falkland islands (malvinas)', 'fiji', 'french guiana', 'french polynesia', 'gabon', 'gambia', 'gibraltar', 'greenland', 'grenada', 'guadeloupe', 'guinea-bissau', 'guyana', 'haiti', 'iceland', 'iraq', 'kiribati', 'lebanon', 'lesotho', 'liberia', 'libya', 'liechtenstein', 'macao', 'macedonia (republic of)', 'maldives', 'mali', 'marshall islands', 'martinique', 'mauritania', 'micronesia (federated states of)', 'moldova', 'montenegro', 'montserrat', 'myanmar', 'nauru', 'new caledonia', 'niger', 'niue', 'north korea', 'occupied palestinian territory', 'palau', 'papua new guinea', 'réunion', 'saint helena', 'saint kitts and nevis', 'saint lucia', 'saint pierre and miquelon', 'saint vincent and the grenadines', 'samoa', 'sao tome and principe', 'serbia', 'seychelles', 'sierra leone', 'sint maarten (dutch part)', 'solomon islands', 'somalia', 'south sudan', 'sudan', 'suriname', 'swaziland', 'syria', 'tajikistan', 'timor-leste', 'tonga', 'turkmenistan', 'turks and caicos islands', 'uzbekistan', 'vanuatu', 'wallis and futuna islands', 'yemen']
Not in 4:
['antigua and barbuda', 'bolivia', 'bonaire, saint eustatius and saba', 'bosnia and herzegovina', 'brunei darussalam', 'cameroon', 'china', 'curaçao', "côte d'ivoire", 'democratic republic of the congo', 'france', 'guinea-bissau', 'hong kong', 'iran', 'italy', 'laos', 'libya', 'macao', 'macedonia (republic of)', 'micronesia (federated states of)', 'moldova', 'myanmar', 'north korea', 'réunion', 'saint kitts and nevis', 'saint pierre and miquelon', 'saint vincent and the grenadines', 'sao tome and principe', 'sint maarten (dutch part)', 'south korea', 'south sudan', 'sudan', 'syria', 'tanzania', 'timor-leste', 'usa']
Differences between subjects3 and others
Not in 1:
[]
Not in 2:
['bunkers', 'statistical difference', 'world']
Not in 4:
['bolivia', 'brunei darussalam', 'cameroon', 'china', "côte d'ivoire", 'france', 'hong kong', 'iran', 'italy', 'laos', 'south korea', 'tanzania', 'usa']
Differences between subjects4 and others
Not in 1:
['antigua & barbuda', 'bonaire, saint eustatius, and saba', 'bosnia & herzegovina', 'brunei (darussalam)', 'china (mainland)', 'cote d ivoire', 'curacao', 'democratic people s republic of korea', 'democratic republic of the congo (formerly zaire)', 'federated states of micronesia', 'france (including monaco)', 'guinea bissau', 'hong kong special adminstrative region of china', 'islamic republic of iran', 'italy (including san marino)', 'lao people s democratic republic', 'libyan arab jamahiriyah', 'macau special adminstrative region of china', 'macedonia', 'myanmar (formerly burma)', 'plurinational state of bolivia', 'republic of cameroon', 'republic of korea', 'republic of moldova', 'republic of south sudan', 'republic of sudan', 'reunion', 'saint martin (dutch portion)', 'sao tome & principe', 'st. kitts-nevis', 'st. pierre & miquelon', 'st. vincent & the grenadines', 'syrian arab republic', 'timor-leste (formerly east timor)', 'united republic of tanzania', 'united states of america']
Not in 2:
['antigua & barbuda', 'bonaire, saint eustatius, and saba', 'bosnia & herzegovina', 'brunei (darussalam)', 'bunkers', 'china (mainland)', 'cote d ivoire', 'curacao', 'democratic people s republic of korea', 'democratic republic of the congo (formerly zaire)', 'federated states of micronesia', 'france (including monaco)', 'guinea bissau', 'hong kong special adminstrative region of china', 'islamic republic of iran', 'italy (including san marino)', 'lao people s democratic republic', 'libyan arab jamahiriyah', 'macau special adminstrative region of china', 'macedonia', 'myanmar (formerly burma)', 'plurinational state of bolivia', 'republic of cameroon', 'republic of korea', 'republic of moldova', 'republic of south sudan', 'republic of sudan', 'reunion', 'saint martin (dutch portion)', 'sao tome & principe', 'st. kitts-nevis', 'st. pierre & miquelon', 'st. vincent & the grenadines', 'syrian arab republic', 'statistical difference', 'timor-leste (formerly east timor)', 'united republic of tanzania', 'united states of america', 'world']
Not in 3:
['afghanistan', 'algeria', 'andorra', 'angola', 'anguilla', 'antigua & barbuda', 'aruba', 'bahamas', 'barbados', 'belize', 'bermuda', 'bhutan', 'bonaire, saint eustatius, and saba', 'bosnia & herzegovina', 'british virgin islands', 'brunei (darussalam)', 'burundi', 'cape verde', 'cayman islands', 'central african republic', 'chad', 'china (mainland)', 'comoros', 'congo', 'cook islands', 'cote d ivoire', 'cuba', 'curacao', 'democratic people s republic of korea', 'democratic republic of the congo (formerly zaire)', 'djibouti', 'dominica', 'equatorial guinea', 'eritrea', 'faeroe islands', 'falkland islands (malvinas)', 'federated states of micronesia', 'fiji', 'france (including monaco)', 'french guiana', 'french polynesia', 'gabon', 'gambia', 'gibraltar', 'greenland', 'grenada', 'guadeloupe', 'guinea bissau', 'guyana', 'haiti', 'hong kong special adminstrative region of china', 'iceland', 'iraq', 'islamic republic of iran', 'italy (including san marino)', 'kiribati', 'lao people s democratic republic', 'lebanon', 'lesotho', 'liberia', 'libyan arab jamahiriyah', 'liechtenstein', 'macau special adminstrative region of china', 'macedonia', 'maldives', 'mali', 'marshall islands', 'martinique', 'mauritania', 'montenegro', 'montserrat', 'myanmar (formerly burma)', 'nauru', 'new caledonia', 'niger', 'niue', 'occupied palestinian territory', 'palau', 'papua new guinea', 'plurinational state of bolivia', 'republic of cameroon', 'republic of korea', 'republic of moldova', 'republic of south sudan', 'republic of sudan', 'reunion', 'saint helena', 'saint lucia', 'saint martin (dutch portion)', 'samoa', 'sao tome & principe', 'serbia', 'seychelles', 'sierra leone', 'solomon islands', 'somalia', 'st. kitts-nevis', 'st. pierre & miquelon', 'st. vincent & the grenadines', 'suriname', 'swaziland', 'syrian arab republic', 'tajikistan', 'timor-leste (formerly east timor)', 'tonga', 'turkmenistan', 'turks and caicos islands', 'united republic of tanzania', 'united states of america', 'uzbekistan', 'vanuatu', 'wallis and futuna islands', 'yemen']
###Markdown
Sanity checks
###Code
# This shows the total number of remaining countries in the data
list1 = consumption_gcb_pct_change.index
list2 = territory_gcb_pct_change.index
print(sum([(True if (item in list2) else False) for item in list1]))
print(sum([(True if (item in list1) else False) for item in list2]))
### Evaluate whether Null items are the same between Consumption and Territory emissions data
print(sum(pd.isnull(consumption_gcb_pct_change["wb_name"])))
list1 = consumption_gcb_pct_change["wb_name"].loc[pd.isnull(consumption_gcb_pct_change["wb_name"])].index
print(sum(pd.isnull(territory_gcb_pct_change["wb_name"])))
list2 = territory_gcb_pct_change["wb_name"].loc[pd.isnull(territory_gcb_pct_change["wb_name"])].index
# This includes Bunkers and Statistical Difference - reason for extra indices
[(True if (item in list1) else False) for item in list2]
###Output
50
52
###Markdown
Create ratios of absolute & pct_change territory / consumption emissions
###Code
# Convert raw data to percent change by year from 2000 forward
consumption_emissions_abs = consumption_emissions_abs_raw.loc[2000:].transpose()
territorial_emissions_abs = territorial_emissions_abs_raw.loc[2000:].transpose()
terr_over_cons_abs= territorial_emissions_abs.div(consumption_emissions_abs)
# Name for Congo didn't match in the CDIAC data and crosswalk file
terr_over_cons_abs.index = map(replace_congo, terr_over_cons_abs.index)
# Add the wb_name to each dataframe
terr_over_cons_abs["wb_name"] = terr_over_cons_abs.apply(lambda row: fetch_name(row.name), axis=1)
terr_over_cons_abs = terr_over_cons_abs .loc[pd.notnull(terr_over_cons_abs["wb_name"])]
# Only keep the CDIAC data where there is a matching world bank country
terr_over_cons_abs["ISO"] = terr_over_cons_abs.apply(lambda row: add_iso(row.name), axis=1)
# Add in wb_names and ISO codes
terr_over_cons_abs.to_csv(root_folder + "/territorial_emissions_divided_by_consumption_emissions.csv")
terr_over_cons_abs.head(10)
territory_gcb = territorial_emissions_abs_raw.loc[1999:2015].transpose().pct_change(axis=1).loc[:,2000:]
consumption_gcb = consumption_emissions_abs_raw.loc[1999:2015].transpose().pct_change(axis=1).loc[:,2000:]
terr_over_cons_per_change= territory_gcb.div(consumption_gcb)
# Name for Congo didn't match in the CDIAC data and crosswalk file
terr_over_cons_per_change.index = map(replace_congo, terr_over_cons_per_change.index)
# Add the wb_name to each dataframe
terr_over_cons_per_change["wb_name"] = terr_over_cons_per_change.apply(lambda row: fetch_name(row.name), axis=1)
terr_over_cons_per_change = terr_over_cons_per_change.loc[pd.notnull(terr_over_cons_per_change["wb_name"])]
# Only keep the CDIAC data where there is a matching world bank country
terr_over_cons_per_change["ISO"] = terr_over_cons_per_change.apply(lambda row: add_iso(row.name), axis=1)
terr_over_cons_per_change.to_csv(root_folder + "/territorial_emissions_divided_by_consumption_emissions_per_change.csv")
terr_over_cons_per_change.head(10)
###Output
_____no_output_____
###Markdown
Identifying indicators from Nate's file
###Code
# Grab indicators from Nate's file
indicators = pd.read_csv("/Users/nathansuberi/Desktop/WRI_Programming/compiled independent variable absolute data 1999-2015.csv")
data_names_and_codes = dict(set(zip(indicators["Series Code"], indicators["Series Name"])))
# Remove the nan entry
data_names_and_codes.pop(np.nan)
###Output
_____no_output_____ |
course_notes/AMATH582-Lec10-1.ipynb | ###Markdown
Demo code for lecture 10. First, we generate some random functions to demonstrate the effectiveness of PCA. I will be using a Gaussian process which we will not see until later in the course. So don't worry about the details of how these random functions are being generated.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import distance_matrix
# function from Lec 4
def f(x):
val = (3*np.sin(2*x) + 0.5*np.tanh(0.5*(x-3)) + 0.2*np.exp(-(x- 4)**2)
+ 1.5*np.sin(5*x) + 4*np.cos(3*(x-6)**2))/10 + (x/20)**3
return val
# covariance function
def k(t, l):
val = 0.5*np.exp( - ( (t**2)/(2*l**2) ) )
return val
L = 12
N_grid = 2**11
grid = np.linspace(0, 12, N_grid)
f_vals = np.asmatrix(f(grid))
# construct covariance matrix
l = L/10
dist = distance_matrix(np.transpose(np.asmatrix(grid)), np.transpose(np.asmatrix(grid)))
C = k(dist, l)
nugget = 1e-4*np.identity(C.shape[0])
CC = np.linalg.cholesky( C + nugget)
N_samples = 40 # number of random functions we want
data = []
for i in range(N_samples):
sample = np.dot(CC,np.random.randn(N_grid,1)) + np.reshape(f_vals, (N_grid, 1))
data.append(sample)
plt.plot(grid, sample)
data = np.transpose(np.squeeze(np.asarray(data)))
plt.xlabel('x')
plt.title('A set of random functions')
plt.show()
print(data.shape)
###Output
_____no_output_____
###Markdown
While these functions are random there is clearly a lot of structure among them. For example they are smooth in the middle and more oscillatory on the sides. We will now use PCA to reveal these features.
###Code
centered_data = data - np.mean(data, axis=1)[:, None]
dU, ds, dVt = np.linalg.svd(centered_data)
print(dU.shape, ds.shape, dVt.shape )
###Output
(2048, 2048) (40,) (40, 40)
###Markdown
First, we plot the singular values to see the effective dimension of the data set.
###Code
plt.plot(np.log(ds)[:30])
plt.xlabel('index $j$')
plt.ylabel('$\log(\sigma_j)$')
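# Rough check (sketch) of how much variance the leading modes capture:
# cumulative fraction of the squared singular values
energy = np.cumsum(ds**2) / np.sum(ds**2)
# energy[14] and energy[19] give the fraction captured by the first 15 and 20 modes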
###Output
_____no_output_____
###Markdown
So the effective dimension appears to be 20. Let us plot the corresponding principal components 5 modes at a time for better visualization.
###Code
fig, ax = plt.subplots(1,4, figsize=(40,10))
for k in range(4):
for j in range(5):
ax[k].plot(dU[:, k*5 + j])
ax[k].set_xlabel('x')
ax[k].set_title('PC '+str(k*5)+' to '+str((k+1)*5-1))
plt.show()
ds_approx = np.copy(ds)
ds_approx[15:None] = 0
X_approx = np.mean(data, axis=1)[:, None] + np.dot( dU[:, :40], np.dot(np.diag(ds_approx), dVt ) )
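# Relative reconstruction error of the truncated expansion (a quick sketch)
rel_err = np.linalg.norm(data - X_approx) / np.linalg.norm(data)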
# lets compare samples side by side
fig, ax = plt.subplots(1,2, figsize=(20,10))
for i in range(5):
ax[1].plot(grid, X_approx[:,i])
ax[1].set_xlabel('x')
ax[1].set_title('Approximation')
ax[0].plot(grid, data[:,i])
ax[0].set_xlabel('x')
ax[0].set_title('Original')
plt.show()
###Output
_____no_output_____ |
db2_with_json/notebook/JSON Db2 House Value.ipynb | ###Markdown
Creating JSON Columns in House Value Dataset We will show how to create a table in Db2 that utilizes JSON columns by using the JSON2BSON and BSON2JSON system calls. Import Python Modules In some cases `import ibm_db` and `import ibm_db_dbi` may not work; if so, please refer to the GitHub issue [here](https://github.com/ibmdb/python-ibmdb/issues/276). IMPORTANT NOTE: If you have a Mac, you will need to refer to the GitHub link above and run the `install_name_tool` command. Please make sure that you edit that line properly and that the paths are correct, otherwise it will affect other Python dependencies!!!
###Code
!easy_install ibm_db
import ibm_db
import ibm_db_dbi
import pandas as pd
###Output
_____no_output_____
###Markdown
Connect to DB2 InstanceHere we will be connecting to our Db2 instance with the following service credentials. Please also enter the schema name you would like your table to be under.
###Code
# DON'T TOUCH THE `DRIVER` ATTRIBUTE!
dsn = "DRIVER={{IBM DB2 ODBC DRIVER}};" + \
"DATABASE={DATABASE_NAME};" + \
"HOSTNAME={HOST_NAME};" + \
"PORT=50000;" + \
"PROTOCOL=TCPIP;" + \
"UID={UID};" + \
"PWD={PWD};"
SCHEMA_NAME = 'KXQ49540'
hdbc = ibm_db.connect(dsn, "", "")
hdbi = ibm_db_dbi.Connection(hdbc)
print('Connection To DB2 Instance Has Been Created!')
###Output
Connection To DB2 Instance Has Been Created!
###Markdown
Create Table With JSON ColumnsHere we will be creating a table for our House Value dataset. Notice how `BLDGTYPE` and `HOUSESTYLE` have type `BLOB`. This indicates that those columns will store BSON (binary JSON) values.
###Code
sql = 'CREATE TABLE '+SCHEMA_NAME+'.HOME_SALES ( ' + \
'ID SMALLINT, ' + \
'LOTAREA INTEGER, ' + \
'BLDGTYPE BLOB,' + \
'HOUSESTYLE BLOB, ' + \
'OVERALLCOND INTEGER, ' + \
'YEARBUILT INTEGER, ' + \
'ROOFSTYLE VARCHAR(50), ' + \
'EXTERCOND VARCHAR(50), ' + \
'FOUNDATION VARCHAR(50), ' + \
'BSMTCOND VARCHAR(50), ' + \
'HEATING VARCHAR(50), ' + \
'HEATINGQC VARCHAR(50),' + \
'CENTRALAIR VARCHAR(50), ' + \
'ELECTRICAL VARCHAR(50), ' + \
'FULLBATH INTEGER, ' + \
'HALFBATH INTEGER, ' + \
'BEDROOMABVGR INTEGER, ' + \
'KITCHENABVGR INTEGER, ' + \
'KITCHENQUAL VARCHAR(50), ' + \
'TOTRMSABVGRD INTEGER, ' + \
'FIREPLACES INTEGER, ' + \
'FIREPLACEQU VARCHAR(50), ' + \
'GARAGETYPE VARCHAR(50), ' + \
'GARAGEFINISH VARCHAR(50), ' + \
'GARAGECARS INTEGER, ' + \
'GARAGECOND VARCHAR(50), ' + \
'POOLAREA INTEGER, ' + \
'POOLQC VARCHAR(50), ' + \
'FENCE VARCHAR(50), ' + \
'MOSOLD INTEGER, ' + \
'YRSOLD INTEGER, ' + \
'SALEPRICE INTEGER )'
###Output
_____no_output_____
###Markdown
Time to execute our SQL statement and create a table. You may go to your DB2 instance and verify that the table has been created.
###Code
try:
stmt = ibm_db.exec_immediate(hdbc, sql)
print('Table HOME_SALES Has Been Created Under ' + str(SCHEMA_NAME) + ' Schema!')
except:
print('ERROR: Table Already Exist Or Some Other Error')
###Output
ERROR: Table Already Exist Or Some Other Error
###Markdown
Prepare and Load File into DatabaseAs mentioned before, we will be using the home_sales dataset. We will be reading one row at a time since we need to convert two columns into JSON before inserting into the db.
###Code
data = pd.read_csv("../data/home-sales-training-data.csv")
data.head()
sql_p1 = 'INSERT INTO '+SCHEMA_NAME+'.HOME_SALES (' + \
'ID, ' + \
'LOTAREA, ' + \
'BLDGTYPE,' + \
'HOUSESTYLE, ' + \
'OVERALLCOND, ' + \
'YEARBUILT, ' + \
'ROOFSTYLE, ' + \
'EXTERCOND, ' + \
'FOUNDATION, ' + \
'BSMTCOND, ' + \
'HEATING, ' + \
'HEATINGQC,' + \
'CENTRALAIR, ' + \
'ELECTRICAL, ' + \
'FULLBATH, ' + \
'HALFBATH, ' + \
'BEDROOMABVGR, ' + \
'KITCHENABVGR, ' + \
'KITCHENQUAL, ' + \
'TOTRMSABVGRD, ' + \
'FIREPLACES, ' + \
'FIREPLACEQU, ' + \
'GARAGETYPE, ' + \
'GARAGEFINISH, ' + \
'GARAGECARS , ' + \
'GARAGECOND, ' + \
'POOLAREA , ' + \
'POOLQC, ' + \
'FENCE, ' + \
'MOSOLD, ' + \
'YRSOLD, ' + \
'SALEPRICE )'
###Output
_____no_output_____
###Markdown
As you can see, we are going through each row of the pandas dataframe and extracting each value. Notice how we are wrapping `BLDGTYPE` and `HOUSESTYLE` in a JSON object. We then pass the JSON object to a system call function - `JSON2BSON`. This function converts the JSON object into a BSON, which is a binary representation of the JSON object. So when you view your data in the table, these two columns will be represented in binary form.
###Code
for index, row in data.iterrows():
sql_p2 = ' VALUES ('+str(row['ID'])+' , ' + \
''+str(row['LOTAREA'])+' , ' + \
'SYSTOOLS.JSON2BSON(\' { "BLDGTYPE": "'+str(row['BLDGTYPE'])+'"} \') , ' + \
'SYSTOOLS.JSON2BSON(\' { "HOUSESTYLE": "'+str(row['HOUSESTYLE'])+'"} \') , ' + \
''+str(row['OVERALLCOND'])+' , ' + \
''+str(row['YEARBUILT'])+' , ' + \
'\''+str(row['ROOFSTYLE'])+'\' , ' + \
'\''+str(row['EXTERCOND'])+'\' , ' + \
'\''+str(row['FOUNDATION'])+'\' , ' + \
'\''+str(row['BSMTCOND'])+'\' , ' + \
'\''+str(row['HEATING'])+'\' , ' + \
'\''+str(row['HEATINGQC'])+'\' , ' + \
'\''+str(row['CENTRALAIR'])+'\' , ' + \
'\''+str(row['ELECTRICAL'])+'\' , ' + \
''+str(row['FULLBATH'])+' , ' + \
''+str(row['HALFBATH'])+' , ' + \
''+str(row['BEDROOMABVGR'])+' , ' + \
''+str(row['KITCHENABVGR'])+' , ' + \
'\''+str(row['KITCHENQUAL'])+'\' , ' + \
''+str(row['TOTRMSABVGRD'])+' , ' + \
''+str(row['FIREPLACES'])+' , ' + \
'\''+str(row['FIREPLACEQU'])+'\' , ' + \
'\''+str(row['GARAGETYPE'])+'\' , ' + \
'\''+str(row['GARAGEFINISH'])+'\' , ' + \
''+str(row['GARAGECARS'])+' , ' + \
'\''+str(row['GARAGECOND'])+'\' , ' + \
''+str(row['POOLAREA'])+' , ' + \
'\''+str(row['POOLQC'])+'\' , ' + \
'\''+str(row['FENCE'])+'\' , ' + \
''+str(row['MOSOLD'])+' , ' + \
''+str(row['YRSOLD'])+' , ' + \
''+str(row['SALEPRICE'])+' ' + \
')'
sql_final = sql_p1 + sql_p2
stmt = ibm_db.exec_immediate(hdbc, sql_final)
###Output
_____no_output_____
###Markdown
Viewing and Pulling Data From DatabaseNow that we have inserted all our data into our database, we want to be able to read and extract information. For that we will use the `SELECT` statement. Notice that we will be using the system call `BSON2JSON`. Since two of your columns are JSON objects and stored in the database as binary objects, we need to convert them back to JSON so that we can read and use that data effectively.
###Code
sql = 'SELECT LOTAREA, MOSOLD, YRSOLD, SALEPRICE, SYSTOOLS.BSON2JSON(BLDGTYPE) AS BLDGTYPE , SYSTOOLS.BSON2JSON(HOUSESTYLE) AS HOUSESTYLE FROM '+SCHEMA_NAME+'.HOME_SALES;'
df = pd.read_sql(sql,hdbi)
df.head()
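# Optional sketch: BSON2JSON returns each value as a JSON string, so it can also be
# expanded client-side with Python's json module (column aliases as defined in the query above)
import json
df["BLDGTYPE_VALUE"] = df["BLDGTYPE"].apply(lambda s: json.loads(s)["BLDGTYPE"])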
sql = 'SELECT LOTAREA, MOSOLD, YRSOLD, SALEPRICE, JSON_VAL(BLDGTYPE,\'BLDGTYPE\',\'s:36\') AS BLDGTYPE , JSON_VAL(HOUSESTYLE,\'HOUSESTYLE\',\'s:36\') AS HOUSESTYLE FROM '+SCHEMA_NAME+'.HOME_SALES;'
df = pd.read_sql(sql,hdbi)
df.head()
###Output
_____no_output_____ |
guide/03-the-gis/using-the-gis.ipynb | ###Markdown
Using the GIS The `GIS` object in the `gis` module is the most important object when working with the ArcGIS API for Python. The GIS object represents the GIS you are working with, be it ArcGIS Online or an instance of ArcGIS Enterprise. You use the GIS object to consume and publish GIS content and administrators may use it to manage GIS users, groups and datastores. This object becomes your entry point in your Python script when using the API. To use the GIS object, import GIS from the `arcgis.gis` module:
###Code
from arcgis.gis import GIS
###Output
_____no_output_____
###Markdown
To create the GIS object, we pass in the url and our login credentials as shown below:
###Code
gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123")
###Output
_____no_output_____
###Markdown
If connecting to an ArcGIS Enterprise in your premises, your URL becomes `http://machinename.domain.com/webadapter`. Your GIS can support a [number of authentication schemes](http://server.arcgis.com/en/portal/latest/administer/windows/about-configuring-portal-authentication.htm), refer to [this section of the guide](https://developers.arcgis.com/python/guide/working-with-different-authentication-schemes/) to know how to **authenticate your scripts and notebooks** for various such schemes. Below, we're connecting to ArcGIS Online (the default GIS used when the url is not provided) as an anonymous user:
###Code
gis = GIS()
###Output
_____no_output_____
###Markdown
Adding a '?' mark after an object and querying it brings up help for that object in the notebook:
###Code
gis?
###Output
_____no_output_____
###Markdown
The notebook provides intellisense and code-completion. Typing a dot after an object and hitting tab brings up a drop-down with its properties and methods: Helper objectsThe `GIS` object provides helper objects to manage the GIS resources, i.e. the users, groups, content and datastores. These helper utilities are in the form of helper objects named: `users`, `groups`, `content` and `datastore` respectively. The helper utility for managing user roles named `roles` is available as a property on the helper object `users`.Each such helper object has similar patterns of usage: there are methods to `get()`, `search()` and `create()` the respective resources.The prescribed programming pattern is to not create the GIS resources (user, group, item, role, datastore) directly using their constructor, but to access them through their corresponding helper objects described above. Thus, to access a user, you would use the `users` property of your `gis` object which gives you an instance of `UserManager` class. You would then call the `get()` method of the `UserManager` object and pass the user name of the user you are interested in.
###Code
user = gis.users.get('john.smith')
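# The other helper objects follow the same get()/search() pattern
# (a sketch only -- the query strings below are arbitrary examples):
groups = gis.groups.search('open data')
items = gis.content.search('streets')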
###Output
_____no_output_____
###Markdown
Rich IDE experience with Jupyter notebooksThe ArcGIS API for Python is integrated with Jupyter Notebook to make it easy to visualize and interact with GIS resources. The `user` object has a rich representation that can be queried like this:
###Code
user
###Output
_____no_output_____
###Markdown
The resources are implemented as Python dictionaries. You can query for the resource properties using the resource['property'] notation:
###Code
user['firstName']
###Output
_____no_output_____
###Markdown
The properties are also available as properties on the resource object, so you can use the dot notation to access them:
###Code
user.lastName
###Output
_____no_output_____
###Markdown
The resources provide methods to `update()`, `delete()` and use the object. The remaining topics in this module talk in detail about using the various helper objects and resource objects. Embedded maps in Jupyter notebooksThe `GIS` object includes a map widget that can be used to visualize the content of your GIS as well as see the results of your analysis. Let's bring up a map of Palm Springs, CA:
###Code
map1 = gis.map("Palm Springs, CA")
map1
###Output
_____no_output_____
###Markdown
 We can search for content in our GIS. Let's search for Hiking Trails in the Palm Springs region. We do that by calling **`gis.content.search()`** and for each web map or web layers that gets returned, we can display its rich representation within the notebook:
###Code
from IPython.display import display
items = gis.content.search('Palm Springs Trails')
for item in items:
display(item)
###Output
_____no_output_____
###Markdown
We can then add the returned web layers to our map. To add the last layer returned above, we call the `add_layer()` method and pass in the layer for Palm Springs Trail:
###Code
# Let us filter out the item with title 'Trails' that we want to add
item_to_add = [temp_item for temp_item in items if temp_item.title == "Trails"]
map1.add_layer(item_to_add[0])
###Output
_____no_output_____
###Markdown
Using the GIS The `GIS` object in the `gis` module is the most important object when working with the ArcGIS API for Python. The GIS object represents the GIS you are working with, be it ArcGIS Online or an instance of ArcGIS Enterprise. You use the GIS object to consume and publish GIS content and administrators may use it to manage GIS users, groups and datastores. This object becomes your entry point in your Python script when using the API. To use the GIS object, import GIS from the `arcgis.gis` module:
###Code
from arcgis.gis import GIS
###Output
_____no_output_____
###Markdown
To create the GIS object, we pass in the url and our login credentials as shown below:
###Code
gis = GIS('home')
###Output
_____no_output_____
###Markdown
If connecting to an ArcGIS Enterprise in your premises, your URL becomes `http://machinename.domain.com/webadapter`. Your GIS can support a [number of authentication schemes](http://server.arcgis.com/en/portal/latest/administer/windows/about-configuring-portal-authentication.htm), refer to [this section of the guide](https://developers.arcgis.com/python/guide/working-with-different-authentication-schemes/) to know how to **authenticate your scripts and notebooks** for various such schemes. Below, we're connecting to ArcGIS Online (the default GIS used when the url is not provided) as an anonymous user:
###Code
gis = GIS()
###Output
_____no_output_____
###Markdown
Adding a '?' mark after an object and querying it brings up help for that object in the notebook:
###Code
gis?
###Output
_____no_output_____
###Markdown
The notebook provides intellisense and code-completion. Typing a dot after an object and hitting tab brings up a drop-down with its properties and methods: Helper objectsThe `GIS` object provides helper objects to manage the GIS resources, i.e. the users, groups, content and datastores. These helper utilities are in the form of helper objects named: `users`, `groups`, `content` and `datastore` respectively. The helper utility for managing user roles named `roles` is available as a property on the helper object `users`.Each such helper object has similar patterns of usage: there are methods to `get()`, `search()` and `create()` the respective resources.The prescribed programming pattern is to not create the GIS resources (user, group, item, role, datastore) directly using their constructor, but to access them through their corresponding helper objects described above. Thus, to access a user, you would use the `users` property of your `gis` object which gives you an instance of `UserManager` class. You would then call the `get()` method of the `UserManager` object and pass the user name of the user you are interested in.
###Code
user = gis.users.get('john.smith')
###Output
_____no_output_____
###Markdown
Rich IDE experience with Jupyter notebooksThe ArcGIS API for Python is integrated with Jupyter Notebook to make it easy to visualize and interact with GIS resources. The `user` object has a rich representation that can be queried like this:
###Code
user
###Output
_____no_output_____
###Markdown
The resources are implemented as Python dictionaries. You can query for the resource properties using the resource['property'] notation:
###Code
user['firstName']
###Output
_____no_output_____
###Markdown
The properties are also available as properties on the resource object, so you can use the dot notation to access them:
###Code
user.lastName
###Output
_____no_output_____
###Markdown
The resources provide methods to `update()`, `delete()` and use the object. The remaining topics in this module talk in detail about using the various helper objects and resource objects. Embedded maps in Jupyter notebooksThe `GIS` object includes a map widget that can be used to visualize the content of your GIS as well as see the results of your analysis. Let's bring up a map of Palm Springs, CA:
###Code
map1 = gis.map("Palm Springs, CA")
map1
###Output
_____no_output_____
###Markdown
 We can search for content in our GIS. Let's search for Hiking Trails in the Palm Springs region. We do that by calling **`gis.content.search()`** and for each web map or web layers that gets returned, we can display its rich representation within the notebook:
###Code
from IPython.display import display
items = gis.content.search('Palm Springs Trails')
for item in items:
display(item)
###Output
_____no_output_____
###Markdown
We can then add the returned web layers to our map. To add the last layer returned above, we call the `add_layer()` method and pass in the layer for Palm Springs Trail:
###Code
# Let us filter out the item with title 'Trails' that we want to add
item_to_add = [temp_item for temp_item in items if temp_item.title == "Trails"]
map1.add_layer(item_to_add[0])
###Output
_____no_output_____
###Markdown
Using the GIS The `GIS` object in the `gis` module is the most important object when working with the ArcGIS API for Python. The GIS object represents the GIS you are working with, be it ArcGIS Online or an instance of ArcGIS Enterprise. You use the GIS object to consume and publish GIS content and administrators may use it to manage GIS users, groups and datastores. This object becomes your entry point in your Python script when using the API. To use the GIS object, import GIS from the `arcgis.gis` module:
###Code
from arcgis.gis import GIS
###Output
_____no_output_____
###Markdown
To create the GIS object, we pass in the url and our login credentials as shown below:
###Code
gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123")
###Output
_____no_output_____
###Markdown
If connecting to an ArcGIS Enterprise in your premises, your URL becomes `http://machinename.domain.com/webadapter`. Your GIS can support a [number of authentication schemes](http://server.arcgis.com/en/portal/latest/administer/windows/about-configuring-portal-authentication.htm), refer to [this section of the guide](/python/guide/working-with-different-authentication-schemes/) to know how to **authenticate your scripts and notebooks** for various such schemes. Below, we're connecting to ArcGIS Online (the default GIS used when the url is not provided) as an anonymous user:
###Code
gis = GIS()
###Output
_____no_output_____
###Markdown
Adding a '?' mark after an object and querying it brings up help for that object in the notebook:
###Code
gis?
###Output
_____no_output_____
###Markdown
The notebook provides intellisense and code-completion. Typing a dot after an object and hitting tab brings up a drop-down with its properties and methods: Helper objectsThe `GIS` object provides helper objects to manage the GIS resources, i.e. the users, groups, content and datastores. These helper utilities are in the form of helper objects named: `users`, `groups`, `content` and `datastore` respectively. The helper utility for managing user roles named `roles` is available as a property on the helper object `users`.Each such helper object has similar patterns of usage: there are methods to `get()`, `search()` and `create()` the respective resources.The prescribed programming pattern is to not create the GIS resources (user, group, item, role, datastore) directly using their constructor, but to access them through their corresponding helper objects described above. Thus, to access a user, you would use the `users` property of your `gis` object which gives you an instance of `UserManager` class. You would then call the `get()` method of the `UserManager` object and pass the user name of the user you are interested in.
###Code
user = gis.users.get('john.smith')
###Output
_____no_output_____
###Markdown
Rich IDE experience with Jupyter notebooksThe ArcGIS API for Python is integrated with Jupyter Notebook to make it easy to visualize and interact with GIS resources. The `user` object has a rich representation that can be queried like this:
###Code
user
###Output
_____no_output_____
###Markdown
The resources are implemented as Python dictionaries. You can query for the resource properties using the resource['property'] notation:
###Code
user['firstName']
###Output
_____no_output_____
###Markdown
The properties are also available as properties on the resource object, so you can use the dot notation to access them:
###Code
user.lastName
###Output
_____no_output_____
###Markdown
The resources provide methods to `update()`, `delete()` and use the object. The remaining topics in this module talk in detail about using the various helper objects and resource objects. Embedded maps in Jupyter notebooksThe `GIS` object includes a map widget that can be used to visualize the content of your GIS as well as see the results of your analysis. Let's bring up a map of Palm Springs, CA:
###Code
map1 = gis.map("Palm Springs, CA")
map1
###Output
_____no_output_____
###Markdown
 We can search for content in our GIS. Let's search for Hiking Trails in the Palm Springs region. We do that by calling **`gis.content.search()`** and for each web map or web layers that gets returned, we can display its rich representation within the notebook:
###Code
from IPython.display import display
items = gis.content.search('Palm Springs Trails')
for item in items:
display(item)
###Output
_____no_output_____
###Markdown
We can then add the returned web layers to our map. To add the last layer returned above, we call the `add_layer()` method and pass in the layer for Palm Springs Trail:
###Code
# Let us filter out the item with title 'Trails' that we want to add
item_to_add = [temp_item for temp_item in items if temp_item.title == "Trails"]
map1.add_layer(item_to_add[0])
###Output
_____no_output_____
###Markdown
Using the GIS The `GIS` object in the `gis` module is the most important object when working with the ArcGIS API for Python. The GIS object represents the GIS you are working with, be it ArcGIS Online or an instance of ArcGIS Enterprise. You use the GIS object to consume and publish GIS content and administrators may use it to manage GIS users, groups and datastores. This object becomes your entry point in your Python script when using the API. To use the GIS object, import GIS from the `arcgis.gis` module:
###Code
from arcgis.gis import GIS
###Output
_____no_output_____
###Markdown
To create the GIS object, we pass in the url and our login credentials as shown below:
###Code
gis = GIS('home')
###Output
_____no_output_____
###Markdown
If connecting to an ArcGIS Enterprise in your premises, your URL becomes `http://machinename.domain.com/webadapter`. Your GIS can support a [number of authentication schemes](http://server.arcgis.com/en/portal/latest/administer/windows/about-configuring-portal-authentication.htm), refer to [this section of the guide](https://developers.arcgis.com/python/guide/working-with-different-authentication-schemes/) to know how to **authenticate your scripts and notebooks** for various such schemes. Below, we're connecting to ArcGIS Online (the default GIS used when the url is not provided) as an anonymous user:
###Code
gis = GIS()
###Output
_____no_output_____
###Markdown
Adding a '?' mark after an object and querying it brings up help for that object in the notebook:
###Code
gis?
###Output
_____no_output_____
###Markdown
The notebook provides intellisense and code-completion. Typing a dot after an object and hitting tab brings up a drop-down with its properties and methods: Helper objectsThe `GIS` object provides helper objects to manage the GIS resources, i.e. the users, groups, content and datastores. These helper utilities are in the form of helper objects named: `users`, `groups`, `content` and `datastore` respectively. The helper utility for managing user roles named `roles` is available as a property on the helper object `users`.Each such helper object has similar patterns of usage: there are methods to `get()`, `search()` and `create()` the respective resources.The prescribed programming pattern is to not create the GIS resources (user, group, item, role, datastore) directly using their constructor, but to access them through their corresponding helper objects described above. Thus, to access a user, you would use the `users` property of your `gis` object which gives you an instance of `UserManager` class. You would then call the `get()` method of the `UserManager` object and pass the user name of the user you are interested in.
###Code
user = gis.users.get('john.smith')
###Output
_____no_output_____
###Markdown
Rich IDE experience with Jupyter notebooksThe ArcGIS API for Python is integrated with Jupyter Notebook to make it easy to visualize and interact with GIS resources. The `user` object has a rich representation that can be queried like this:
###Code
user
###Output
_____no_output_____
###Markdown
The resources are implemented as Python dictionaries. You can query for the resource properties using the resource['property'] notation:
###Code
user['firstName']
###Output
_____no_output_____
###Markdown
The properties are also available as properties on the resource object, so you can use the dot notation to access them:
###Code
user.lastName
###Output
_____no_output_____
###Markdown
The resources provide methods to `update()`, `delete()` and use the object. The remaining topics in this module talk in detail about using the various helper objects and resource objects. Embedded maps in Jupyter notebooksThe `GIS` object includes a map widget that can be used to visualize the content of your GIS as well as see the results of your analysis. Let's bring up a map of Palm Springs, CA:
###Code
map1 = gis.map("Palm Springs, CA")
map1
###Output
_____no_output_____
###Markdown
We can search for content in our GIS. Let's search for Hiking Trails in the Palm Springs region. We do that by calling **`gis.content.search()`** and for each web map or web layers that gets returned, we can display its rich representation within the notebook:
###Code
from IPython.display import display
items = gis.content.search('Palm Springs Trails', item_type='feature layer')
for item in items:
display(item)
###Output
_____no_output_____
###Markdown
We can then add the returned web layers to our map. To add the last layer returned above, we call the `add_layer()` method and pass in the layer for Palm Springs Trail:
###Code
# Let us filter out the item with title 'Trails' that we want to add
item_to_add = [temp_item for temp_item in items if 'Trail' in temp_item.title]
item_to_add
map1.add_layer(item_to_add[0])
map1.zoom_to_layer(item_to_add[0].layers[0])
###Output
_____no_output_____ |
Intro_to_Pytorch.ipynb | ###Markdown
Introduction to Deep Learning with PyTorchIn this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural NetworksDeep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$With vectors this is the dot/inner product of two vectors:$$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ TensorsIt turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
###Code
# First, import PyTorch
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
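# A quick numeric sanity check of the neuron equation y = f(w1*x1 + w2*x2 + b),
# using illustrative values (not from the lesson): x = (0.5, -1.0), w = (0.2, 0.4), b = 0.1.
# The weighted sum is 0.5*0.2 + (-1.0)*0.4 + 0.1 = -0.2, and sigmoid(-0.2) ≈ 0.45.
activation(torch.tensor(-0.2))  # tensor(0.4502)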
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
###Output
_____no_output_____
###Markdown
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.htmltorch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
###Code
## Calculate the output of this network using the weights and bias tensors
# Multiply the features by the weights element-wise, sum the products, add the bias, then apply the sigmoid
y = activation(torch.sum(features * weights) + bias)
y
###Output
_____no_output_____
###Markdown
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.htmltorch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.htmltorch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error```python>> torch.mm(features, weights)---------------------------------------------------------------------------RuntimeError Traceback (most recent call last) in ()----> 1 torch.mm(features, weights)RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033```As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this attribute often.There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.htmltorch.Tensor.view).* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.> **Exercise**: Calculate the output of our little network using matrix multiplication.
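As a quick illustration of the shape bookkeeping described above, here is a small standalone sketch (separate from the exercise solution) that checks shapes before multiplying:

```python
import torch

x = torch.randn((1, 5))
w = torch.randn_like(x)

print(x.shape, w.shape)       # torch.Size([1, 5]) torch.Size([1, 5]) -- torch.mm(x, w) would fail
print(w.view(5, 1).shape)     # torch.Size([5, 1]) -- now torch.mm(x, w.view(5, 1)) is valid
print(w.reshape(5, 1).shape)  # same shape; may share memory with w or be a copy
```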
###Code
## Calculate the output of this network using matrix multiplication
y = activation(torch.mm(features, weights.view(5, 1)) + bias)
y
###Output
_____no_output_____
###Markdown
Stack them up!That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply$$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$
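To make the matrix shapes in these equations concrete, here is a tiny illustrative sketch (arbitrary values, independent of the data generated below):

```python
import torch

x = torch.randn((1, 3))   # one example, three input features
W1 = torch.randn((3, 2))  # input layer -> hidden layer
W2 = torch.randn((2, 1))  # hidden layer -> output layer

h = torch.mm(x, W1)       # (1, 3) @ (3, 2) -> (1, 2)
y = torch.mm(h, W2)       # (1, 2) @ (2, 1) -> (1, 1)
print(h.shape, y.shape)
```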
###Code
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
###Output
_____no_output_____
###Markdown
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
###Code
## Your solution here
h = activation(torch.mm(features, W1) + B1)
y = activation(torch.mm(h, W2) + B2)
y
###Output
_____no_output_____
###Markdown
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and backSpecial bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
###Code
import numpy as np
np.set_printoptions(precision=8)
a = np.random.rand(4,3)
a
torch.set_printoptions(precision=8)
b = torch.from_numpy(a)
b
b.numpy()
###Output
_____no_output_____
###Markdown
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
###Code
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
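# An aside: if you do not want this coupling, copy the array before converting
# (or clone the tensor) so the two objects have separate storage.
c = torch.from_numpy(a.copy())  # mutating `c` leaves `a` untouched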
# Numpy array matches new values from Tensor
a
###Output
_____no_output_____ |
tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to BERT Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either SQuAD2.0 or SQuAD1.1 formats. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
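For example, if your own training data were in the SQuAD2.0 format, the call would look like the following sketch (the file name is a hypothetical placeholder):

```python
squad2_train_data = QuestionAnswerDataLoader.from_squad(
    'my_squad2_train.json', spec, is_training=True, version_2_with_negative=True)
```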
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow ModelCreate a custom question answer model based on the loaded data. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Train the question answer model. The default epochs and the default batch size are set according to two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite ModelConvert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.DEFAULT])
config.experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced UsageThe `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Train the question answer model.This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc. Adjust the modelYou can adjust the model infrastructure like parameters `seq_len` and `query_len` in the `BertQAModelSpec` class.Adjustable parameters for model:* `seq_len`: Length of the passage to feed into the model.* `query_len`: Length of the question to feed into the model.* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.* `trainable`: Boolean, whether pre-trained layer is trainable.Adjustable parameters for training pipeline:* `model_dir`: The location of the model checkpoint files. If not set, temporary directory will be used.* `dropout_rate`: The rate for dropout.* `learning_rate`: The initial learning rate for Adam.* `predict_batch_size`: Batch size for prediction.* `tpu`: TPU address to connect to. Only used if using tpu. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either SQuAD2.0 or SQuAD1.1 formats. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow ModelCreate a custom question answer model based on the loaded data. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Train the question answer model. The default epochs and the default batch size are set according to two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite ModelConvert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config and save the vocabulary to a vocab file. The default TFLite model filename is `model.tflite`, and the default vocab filename is `vocab`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file and vocab file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app by downloading it from the left sidebar on Colab. You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced UsageThe `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Train the question answer model.This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc. Adjust the modelYou can adjust the model infrastructure like parameters `seq_len` and `query_len` in the `BertQAModelSpec` class.Adjustable parameters for model:* `seq_len`: Length of the passage to feed into the model.* `query_len`: Length of the question to feed into the model.* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.* `trainable`: Boolean, whether pre-trained layer is trainable.Adjustable parameters for training pipeline:* `model_dir`: The location of the model checkpoint files. If not set, temporary directory will be used.* `dropout_rate`: The rate for dropout.* `learning_rate`: The initial learning rate for Adam.* `predict_batch_size`: Batch size for prediction.* `tpu`: TPU address to connect to. Only used if using tpu. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The [TensorFlow Lite Model Maker library](https://www.tensorflow.org/lite/guide/model_maker) simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to BERT Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!sudo apt -y install libportaudio2
!pip install -q tflite-model-maker
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.question_answer import DataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `DataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either SQuAD2.0 or SQuAD1.1 formats. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow ModelCreate a custom question answer model based on the loaded data. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Train the question answer model. The default epochs and the default batch size are set according to two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite ModelConvert the trained model to TensorFlow Lite model format with [metadata](https://www.tensorflow.org/lite/convert/metadata) so that you can later use it in an on-device ML application. The vocab file is embedded in the metadata. The default TFLite filename is `model.tflite`.In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster.The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
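If you prefer to state the quantization choice explicitly rather than rely on the default, a quantization configuration can also be passed to `export()`, as shown earlier in this document; whether that exact API is available depends on your installed `tflite-model-maker` version, so treat this as a sketch:

```python
from tflite_model_maker import configs

config = configs.QuantizationConfig.create_dynamic_range_quantization(
    optimizations=[tf.lite.Optimize.DEFAULT])
model.export(export_dir='.', quantization_config=config)
```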
###Code
model.export(export_dir='.')
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced UsageThe `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQASpec` class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Train the question answer model.This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc. Adjust the modelYou can adjust the model infrastructure like parameters `seq_len` and `query_len` in the `BertQASpec` class.Adjustable parameters for model:* `seq_len`: Length of the passage to feed into the model.* `query_len`: Length of the question to feed into the model.* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.* `trainable`: Boolean, whether pre-trained layer is trainable.Adjustable parameters for training pipeline:* `model_dir`: The location of the model checkpoint files. If not set, temporary directory will be used.* `dropout_rate`: The rate for dropout.* `learning_rate`: The initial learning rate for Adam.* `predict_batch_size`: Batch size for prediction.* `tpu`: TPU address to connect to. Only used if using tpu. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
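# After changing the spec, the data must be reloaded and the model retrained with it.
# The lines below are an illustrative continuation of this step (retraining will take a while):
new_train_data = DataLoader.from_squad(train_data_path, new_spec, is_training=True)
new_model = question_answer.create(new_train_data, model_spec=new_spec)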
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
Collecting tflite-model-maker
[?25l Downloading https://files.pythonhosted.org/packages/13/bc/4c23b9cb9ef612a1f48bac5543bd531665de5eab8f8231111aac067f8c30/tflite_model_maker-0.1.2-py3-none-any.whl (104kB)
[K |███▏ | 10kB 28.4MB/s eta 0:00:01
[K |██████▎ | 20kB 1.8MB/s eta 0:00:01
[K |█████████▍ | 30kB 2.4MB/s eta 0:00:01
[K |████████████▋ | 40kB 2.7MB/s eta 0:00:01
[K |███████████████▊ | 51kB 2.1MB/s eta 0:00:01
[K |██████████████████▉ | 61kB 2.4MB/s eta 0:00:01
[K |██████████████████████ | 71kB 2.7MB/s eta 0:00:01
[K |█████████████████████████▏ | 81kB 2.9MB/s eta 0:00:01
[K |████████████████████████████▎ | 92kB 3.1MB/s eta 0:00:01
[K |███████████████████████████████▌| 102kB 3.0MB/s eta 0:00:01
[K |████████████████████████████████| 112kB 3.0MB/s
[?25hRequirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.10.0)
Collecting tf-nightly
[?25l Downloading https://files.pythonhosted.org/packages/33/d4/61c47ae889b490b9c5f07f4f61bdc057c158a1a1979c375fa019d647a19e/tf_nightly-2.4.0.dev20200914-cp36-cp36m-manylinux2010_x86_64.whl (390.1MB)
[K |████████████████████████████████| 390.2MB 43kB/s
[?25hRequirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (1.18.5)
Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (7.0.0)
Collecting tf-models-nightly
[?25l Downloading https://files.pythonhosted.org/packages/d3/e9/c4e5a451c268a5a75a27949562364f6086f6bb33b226a065a8beceefa9ba/tf_models_nightly-2.3.0.dev20200914-py2.py3-none-any.whl (993kB)
[K |████████████████████████████████| 1.0MB 57.6MB/s
[?25hCollecting flatbuffers==1.12
Downloading https://files.pythonhosted.org/packages/eb/26/712e578c5f14e26ae3314c39a1bdc4eb2ec2f4ddc89b708cf8e0a0d20423/flatbuffers-1.12-py2.py3-none-any.whl
Requirement already satisfied: tensorflow-hub>=0.8.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.9.0)
Collecting fire
[?25l Downloading https://files.pythonhosted.org/packages/34/a7/0e22e70778aca01a52b9c899d9c145c6396d7b613719cd63db97ffa13f2f/fire-0.3.1.tar.gz (81kB)
[K |████████████████████████████████| 81kB 11.5MB/s
[?25hCollecting sentencepiece
[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 50.9MB/s
[?25hCollecting tflite-support==0.1.0rc3.dev2
[?25l Downloading https://files.pythonhosted.org/packages/fa/c5/5e9ee3abd5b4ef8294432cd714407f49a66befa864905b66ee8bdc612795/tflite_support-0.1.0rc3.dev2-cp36-cp36m-manylinux2010_x86_64.whl (1.0MB)
[K |████████████████████████████████| 1.0MB 50.9MB/s
[?25hRequirement already satisfied: tensorflow-datasets>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (2.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from absl-py->tflite-model-maker) (1.15.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.3.0)
Collecting tb-nightly<3.0.0a0,>=2.4.0a0
[?25l Downloading https://files.pythonhosted.org/packages/fc/cb/4dfe0d65bffb5e9663261ff664e6f5a2d37672b31dae27a0f14721ac00d3/tb_nightly-2.4.0a20200914-py3-none-any.whl (10.1MB)
[K |████████████████████████████████| 10.1MB 51.4MB/s
[?25hRequirement already satisfied: typing-extensions>=3.7.4.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.7.4.3)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.35.1)
Collecting tf-estimator-nightly
[?25l Downloading https://files.pythonhosted.org/packages/bd/9a/3bfb9994eda11e426c809ebdf434e2ac5824a0784d980018bb53fd1620ec/tf_estimator_nightly-2.4.0.dev2020091401-py2.py3-none-any.whl (460kB)
[K |████████████████████████████████| 460kB 36.0MB/s
[?25hRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.2.0)
Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (2.10.0)
Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.2)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.12.1)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.32.0)
Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.12.4)
Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.3.3)
Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.6.3)
Requirement already satisfied: scipy>=0.19.1 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.4.1)
Collecting pyyaml>=5.1
[?25l Downloading https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz (269kB)
[K |████████████████████████████████| 276kB 59.8MB/s
[?25hCollecting tensorflow-model-optimization>=0.4.1
[?25l Downloading https://files.pythonhosted.org/packages/55/38/4fd48ea1bfcb0b6e36d949025200426fe9c3a8bfae029f0973d85518fa5a/tensorflow_model_optimization-0.5.0-py2.py3-none-any.whl (172kB)
[K |████████████████████████████████| 174kB 51.0MB/s
[?25hRequirement already satisfied: pandas>=0.22.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.0.5)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.7)
Requirement already satisfied: Cython in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.29.21)
Collecting opencv-python-headless
[?25l Downloading https://files.pythonhosted.org/packages/b6/2a/496e06fd289c01dc21b11970be1261c87ce1cc22d5340c14b516160822a7/opencv_python_headless-4.4.0.42-cp36-cp36m-manylinux2014_x86_64.whl (36.6MB)
[K |████████████████████████████████| 36.6MB 83kB/s
[?25hRequirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.5.8)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (2.0.2)
Requirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (4.1.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (3.2.2)
Collecting tf-slim>=1.1.0
[?25l Downloading https://files.pythonhosted.org/packages/02/97/b0f4a64df018ca018cc035d44f2ef08f91e2e8aa67271f6f19633a015ff7/tf_slim-1.1.0-py2.py3-none-any.whl (352kB)
[K |████████████████████████████████| 358kB 55.9MB/s
[?25hCollecting seqeval
Downloading https://files.pythonhosted.org/packages/34/91/068aca8d60ce56dd9ba4506850e876aba5e66a6f2f29aa223224b50df0de/seqeval-0.0.12.tar.gz
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (5.4.8)
Collecting py-cpuinfo>=3.3.0
[?25l Downloading https://files.pythonhosted.org/packages/f6/f5/8e6e85ce2e9f6e05040cf0d4e26f43a4718bcc4bce988b433276d4b1a5c1/py-cpuinfo-7.0.0.tar.gz (95kB)
[K |████████████████████████████████| 102kB 13.5MB/s
[?25hRequirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.7.12)
Requirement already satisfied: gin-config in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.3.0)
Requirement already satisfied: tensorflow-addons in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.8.3)
Requirement already satisfied: google-cloud-bigquery>=0.31.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.21.0)
Collecting pybind11>=2.4
[?25l Downloading https://files.pythonhosted.org/packages/89/e3/d576f6f02bc75bacbc3d42494e8f1d063c95617d86648dba243c2cb3963e/pybind11-2.5.0-py2.py3-none-any.whl (296kB)
[K |████████████████████████████████| 296kB 47.9MB/s
[?25hRequirement already satisfied: promise in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.3)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.24.0)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.23.0)
Requirement already satisfied: dill in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.3.2)
Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (20.2.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (4.41.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.16.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.0.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (50.3.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.17.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.2.2)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-nightly->tflite-model-maker) (0.1.5)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2.8.1)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (2020.6.20)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (4.0.1)
Requirement already satisfied: slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (0.0.1)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.24.3)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.4.8)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (4.6)
Requirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.17.4)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.2.8)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (0.10.0)
Requirement already satisfied: Keras>=2.2.4 in /usr/local/lib/python3.6/dist-packages (from seqeval->tf-models-nightly->tflite-model-maker) (2.4.3)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (0.0.4)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (3.0.1)
Requirement already satisfied: typeguard in /usr/local/lib/python3.6/dist-packages (from tensorflow-addons->tf-models-nightly->tflite-model-maker) (2.7.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.0.3 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.0.3)
Requirement already satisfied: google-resumable-media!=0.4.0,<0.5.0dev,>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: googleapis-common-protos<2,>=1.52.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata->tensorflow-datasets>=2.1.0->tflite-model-maker) (1.52.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (2.10)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.3.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (4.1.1)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.6/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.3)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-core<2.0dev,>=1.0.3->google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.16.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Building wheels for collected packages: fire, pyyaml, seqeval, py-cpuinfo
Building wheel for fire (setup.py) ... [?25l[?25hdone
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=f0b82e6b31e21d6db3591478a37188c727533acefe415b16b456c85ef9bef47c
Stored in directory: /root/.cache/pip/wheels/c1/61/df/768b03527bf006b546dce284eb4249b185669e65afc5fbb2ac
Building wheel for pyyaml (setup.py) ... [?25l[?25hdone
Created wheel for pyyaml: filename=PyYAML-5.3.1-cp36-cp36m-linux_x86_64.whl size=44619 sha256=cdbc63ead8369d7403f47b1adff163ebde2636c9f0c2a5ebd6413d156b2b7a9f
Stored in directory: /root/.cache/pip/wheels/a7/c1/ea/cf5bd31012e735dc1dfea3131a2d5eae7978b251083d6247bd
Building wheel for seqeval (setup.py) ... [?25l[?25hdone
Created wheel for seqeval: filename=seqeval-0.0.12-cp36-none-any.whl size=7423 sha256=3ac4a1cc3b88a9b1a1ed8217f2b8d3abb7f936e853383025888b94019d98a856
Stored in directory: /root/.cache/pip/wheels/4f/32/0a/df3b340a82583566975377d65e724895b3fad101a3fb729f68
Building wheel for py-cpuinfo (setup.py) ... [?25l[?25hdone
Created wheel for py-cpuinfo: filename=py_cpuinfo-7.0.0-cp36-none-any.whl size=20071 sha256=b5491e6fcabbf9ae464c0def53ec6ec27bbf01230ff96f4e34c6a7c44d55d5c9
Stored in directory: /root/.cache/pip/wheels/f1/93/7b/127daf0c3a5a49feb2fecd468d508067c733fba5192f726ad1
Successfully built fire pyyaml seqeval py-cpuinfo
Installing collected packages: tb-nightly, flatbuffers, tf-estimator-nightly, tf-nightly, pyyaml, tensorflow-model-optimization, opencv-python-headless, sentencepiece, tf-slim, seqeval, py-cpuinfo, tf-models-nightly, fire, pybind11, tflite-support, tflite-model-maker
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Successfully installed fire-0.3.1 flatbuffers-1.12 opencv-python-headless-4.4.0.42 py-cpuinfo-7.0.0 pybind11-2.5.0 pyyaml-5.3.1 sentencepiece-0.1.91 seqeval-0.0.12 tb-nightly-2.4.0a20200914 tensorflow-model-optimization-0.5.0 tf-estimator-nightly-2.4.0.dev2020091401 tf-models-nightly-2.3.0.dev20200914 tf-nightly-2.4.0.dev20200914 tf-slim-1.1.0 tflite-model-maker-0.1.2 tflite-support-0.1.0rc3.dev2
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the Data

[TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.

To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code slightly by:
* Skipping the samples for which no answer can be found in the context document;
* Getting the original answer from the context without converting it to uppercase or lowercase.

Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json
32571392/32570663 [==============================] - 1s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json
1171456/1167744 [==============================] - 0s 0us/step
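###Markdown
If you want to inspect the converted data before loading it, you can peek at the raw JSON; the file follows the SQuAD1.1 layout referenced above (articles under `data`, each with `paragraphs` that hold a `context` passage and its `qas` question/answer pairs). A small sketch using only the standard library:
```python
import json

# Peek at the first paragraph of the converted SQuAD1.1-format training file.
with open(train_data_path) as f:
    squad_dict = json.load(f)

first_paragraph = squad_dict['data'][0]['paragraphs'][0]
print(first_paragraph['context'][:200])
print(first_paragraph['qas'][0]['question'])
```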
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).

Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0; otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
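###Markdown
If your own data is in the SQuAD2.0 format (which allows unanswerable questions), pass `version_2_with_negative=True` when loading it, as described above. A minimal sketch, assuming a hypothetical local file `my_squad2_train.json`:
```python
# Hypothetical SQuAD2.0-format file; version_2_with_negative=True tells the
# loader to expect SQuAD2.0 data (the default, False, means SQuAD1.1).
my_train_data = QuestionAnswerDataLoader.from_squad(
    'my_squad2_train.json', spec, is_training=True,
    version_2_with_negative=True)
```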
###Markdown
Customize the TensorFlow Model

Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default number of epochs and the default batch size are set by the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object. A sketch of overriding these defaults follows the training cell below.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
INFO:tensorflow:Retraining the models...
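###Markdown
As noted above, the training schedule comes from `default_training_epochs` and `default_batch_size` on the spec. A minimal sketch of overriding them before training; it assumes these two variables are plain writable attributes of the spec object:
```python
# Assumed attribute names, taken from the description above; adjust the
# spec's defaults and retrain with question_answer.create.
spec.default_training_epochs = 3
spec.default_batch_size = 16
model = question_answer.create(train_data, model_spec=spec)
```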
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite Model

Convert the existing model to the TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config and save the vocabulary to a vocab file. The default TFLite model filename is `model.tflite`, and the default vocab filename is `vocab`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file and vocab file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app by downloading them from the left sidebar on Colab. You can also evaluate the TFLite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced Usage

The `create` function is the critical part of this library, in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.

This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc.

Adjust the model

You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQAModelSpec` class.

Adjustable parameters for the model:
* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when using a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:
* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used if using a TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
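###Markdown
After constructing the new spec, the data needs to be reloaded with it so the passages are preprocessed to the longer sequence length, and the model retrained. A minimal sketch of that follow-up, reusing the paths defined earlier:
```python
# Reload the training data with the new spec, then retrain the model.
new_train_data = QuestionAnswerDataLoader.from_squad(
    train_data_path, new_spec, is_training=True)
new_model = question_answer.create(new_train_data, model_spec=new_spec)
```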
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The [TensorFlow Lite Model Maker library](https://www.tensorflow.org/lite/guide/model_maker) simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to BERT Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!sudo apt -y install libportaudio2
!pip install -q tflite-model-maker-nightly
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.question_answer import DataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the Data

[TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.

To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code slightly by:
* Skipping the samples for which no answer can be found in the context document;
* Getting the original answer from the context without converting it to uppercase or lowercase.

Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).

Use the `DataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0; otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow Model

Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default number of epochs and the default batch size are set by the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite Model

Convert the trained model to the TensorFlow Lite model format with [metadata](https://www.tensorflow.org/lite/convert/metadata) so that you can later use it in an on-device ML application. The vocab file is embedded in the metadata. The default TFLite filename is `model.tflite`.

In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster. The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
###Code
model.export(export_dir='.')
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using the [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in the [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab.

The allowed export formats can be one or a list of the following (a sketch of passing a list follows the next cell):
* `ExportFormat.TFLITE`
* `ExportFormat.VOCAB`
* `ExportFormat.SAVED_MODEL`

By default, it exports only the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, export only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
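###Markdown
Since `export_format` also accepts a list, several artifacts can be produced in one call. A minimal sketch, assuming the list form behaves as described above:
```python
# Export the TFLite model (with metadata), the SavedModel and the vocab
# file in a single call by passing a list of formats.
model.export(export_dir='.',
             export_format=[ExportFormat.TFLITE,
                            ExportFormat.SAVED_MODEL,
                            ExportFormat.VOCAB])
```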
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced Usage

The `create` function is the critical part of this library, in which the `model_spec` parameter defines the model specification. The `BertQASpec` class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.

This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc.

Adjust the model

You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQASpec` class.

Adjustable parameters for the model:
* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when using a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:
* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used if using a TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the Data

[TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.

To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code slightly by:
* Skipping the samples for which no answer can be found in the context document;
* Getting the original answer from the context without converting it to uppercase or lowercase.

Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).

Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0; otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow Model

Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default number of epochs and the default batch size are set by the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
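###Markdown
Since `evaluate` returns a plain dict, you can capture it and inspect the individual metrics; the exact key names depend on whether the data is SQuAD1.1 or SQuAD2.0, so the sketch below simply prints whatever comes back:
```python
# Capture the metrics dict and print each entry (e.g. f1, exact match).
metrics = model.evaluate(validation_data)
for name, value in metrics.items():
    print(name, value)
```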
###Markdown
Export to TensorFlow Lite Model

Convert the existing model to the TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config and save the vocabulary to a vocab file. The default TFLite model filename is `model.tflite`, and the default vocab filename is `vocab`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file and vocab file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app by downloading them from the left sidebar on Colab. You can also evaluate the TFLite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced Usage

The `create` function is the critical part of this library, in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.

This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc.

Adjust the model

You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQAModelSpec` class.

Adjustable parameters for the model:
* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when using a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:
* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used if using a TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to BERT Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install -q tflite-model-maker-nightly
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.question_answer import DataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the Data

[TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.

To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code slightly by:
* Skipping the samples for which no answer can be found in the context document;
* Getting the original answer from the context without converting it to uppercase or lowercase.

Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).

Use the `DataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0; otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow Model

Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default number of epochs and the default batch size are set by the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite Model

Convert the existing model to the TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.DEFAULT])
config.experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
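###Markdown
To sanity-check the 4x compression mentioned above, you can look at the size of the exported file on disk (`model.tflite` is the default export filename). A small sketch using only the standard library:
```python
import os

# Report the quantized TFLite model size in megabytes.
size_mb = os.path.getsize('model.tflite') / (1024 * 1024)
print('Quantized TFLite model size: {:.1f} MB'.format(size_mb))
```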
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using the [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in the [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab.

The allowed export formats can be one or a list of the following:
* `ExportFormat.TFLITE`
* `ExportFormat.VOCAB`
* `ExportFormat.SAVED_MODEL`

By default, it exports only the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, export only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced Usage

The `create` function is the critical part of this library, in which the `model_spec` parameter defines the model specification. The `BertQASpec` class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.

This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc.

Adjust the model

You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQASpec` class.

Adjustable parameters for the model:
* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when using a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:
* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used if using a TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
Collecting tflite-model-maker
[?25l Downloading https://files.pythonhosted.org/packages/13/bc/4c23b9cb9ef612a1f48bac5543bd531665de5eab8f8231111aac067f8c30/tflite_model_maker-0.1.2-py3-none-any.whl (104kB)
[K |███▏ | 10kB 28.4MB/s eta 0:00:01
[K |██████▎ | 20kB 1.8MB/s eta 0:00:01
[K |█████████▍ | 30kB 2.4MB/s eta 0:00:01
[K |████████████▋ | 40kB 2.7MB/s eta 0:00:01
[K |███████████████▊ | 51kB 2.1MB/s eta 0:00:01
[K |██████████████████▉ | 61kB 2.4MB/s eta 0:00:01
[K |██████████████████████ | 71kB 2.7MB/s eta 0:00:01
[K |█████████████████████████▏ | 81kB 2.9MB/s eta 0:00:01
[K |████████████████████████████▎ | 92kB 3.1MB/s eta 0:00:01
[K |███████████████████████████████▌| 102kB 3.0MB/s eta 0:00:01
[K |████████████████████████████████| 112kB 3.0MB/s
[?25hRequirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.10.0)
Collecting tf-nightly
[?25l Downloading https://files.pythonhosted.org/packages/33/d4/61c47ae889b490b9c5f07f4f61bdc057c158a1a1979c375fa019d647a19e/tf_nightly-2.4.0.dev20200914-cp36-cp36m-manylinux2010_x86_64.whl (390.1MB)
[K |████████████████████████████████| 390.2MB 43kB/s
[?25hRequirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (1.18.5)
Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (7.0.0)
Collecting tf-models-nightly
[?25l Downloading https://files.pythonhosted.org/packages/d3/e9/c4e5a451c268a5a75a27949562364f6086f6bb33b226a065a8beceefa9ba/tf_models_nightly-2.3.0.dev20200914-py2.py3-none-any.whl (993kB)
[K |████████████████████████████████| 1.0MB 57.6MB/s
[?25hCollecting flatbuffers==1.12
Downloading https://files.pythonhosted.org/packages/eb/26/712e578c5f14e26ae3314c39a1bdc4eb2ec2f4ddc89b708cf8e0a0d20423/flatbuffers-1.12-py2.py3-none-any.whl
Requirement already satisfied: tensorflow-hub>=0.8.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.9.0)
Collecting fire
[?25l Downloading https://files.pythonhosted.org/packages/34/a7/0e22e70778aca01a52b9c899d9c145c6396d7b613719cd63db97ffa13f2f/fire-0.3.1.tar.gz (81kB)
[K |████████████████████████████████| 81kB 11.5MB/s
[?25hCollecting sentencepiece
[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 50.9MB/s
[?25hCollecting tflite-support==0.1.0rc3.dev2
[?25l Downloading https://files.pythonhosted.org/packages/fa/c5/5e9ee3abd5b4ef8294432cd714407f49a66befa864905b66ee8bdc612795/tflite_support-0.1.0rc3.dev2-cp36-cp36m-manylinux2010_x86_64.whl (1.0MB)
[K |████████████████████████████████| 1.0MB 50.9MB/s
[?25hRequirement already satisfied: tensorflow-datasets>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (2.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from absl-py->tflite-model-maker) (1.15.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.3.0)
Collecting tb-nightly<3.0.0a0,>=2.4.0a0
[?25l Downloading https://files.pythonhosted.org/packages/fc/cb/4dfe0d65bffb5e9663261ff664e6f5a2d37672b31dae27a0f14721ac00d3/tb_nightly-2.4.0a20200914-py3-none-any.whl (10.1MB)
[K |████████████████████████████████| 10.1MB 51.4MB/s
[?25hRequirement already satisfied: typing-extensions>=3.7.4.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.7.4.3)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.35.1)
Collecting tf-estimator-nightly
[?25l Downloading https://files.pythonhosted.org/packages/bd/9a/3bfb9994eda11e426c809ebdf434e2ac5824a0784d980018bb53fd1620ec/tf_estimator_nightly-2.4.0.dev2020091401-py2.py3-none-any.whl (460kB)
[K |████████████████████████████████| 460kB 36.0MB/s
[?25hRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.2.0)
Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (2.10.0)
Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.2)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.12.1)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.32.0)
Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.12.4)
Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.3.3)
Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.6.3)
Requirement already satisfied: scipy>=0.19.1 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.4.1)
Collecting pyyaml>=5.1
  Downloading https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz (269kB)
     |████████████████████████████████| 276kB 59.8MB/s
Collecting tensorflow-model-optimization>=0.4.1
  Downloading https://files.pythonhosted.org/packages/55/38/4fd48ea1bfcb0b6e36d949025200426fe9c3a8bfae029f0973d85518fa5a/tensorflow_model_optimization-0.5.0-py2.py3-none-any.whl (172kB)
     |████████████████████████████████| 174kB 51.0MB/s
Requirement already satisfied: pandas>=0.22.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.0.5)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.7)
Requirement already satisfied: Cython in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.29.21)
Collecting opencv-python-headless
  Downloading https://files.pythonhosted.org/packages/b6/2a/496e06fd289c01dc21b11970be1261c87ce1cc22d5340c14b516160822a7/opencv_python_headless-4.4.0.42-cp36-cp36m-manylinux2014_x86_64.whl (36.6MB)
     |████████████████████████████████| 36.6MB 83kB/s
Requirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.5.8)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (2.0.2)
Requirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (4.1.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (3.2.2)
Collecting tf-slim>=1.1.0
  Downloading https://files.pythonhosted.org/packages/02/97/b0f4a64df018ca018cc035d44f2ef08f91e2e8aa67271f6f19633a015ff7/tf_slim-1.1.0-py2.py3-none-any.whl (352kB)
     |████████████████████████████████| 358kB 55.9MB/s
Collecting seqeval
Downloading https://files.pythonhosted.org/packages/34/91/068aca8d60ce56dd9ba4506850e876aba5e66a6f2f29aa223224b50df0de/seqeval-0.0.12.tar.gz
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (5.4.8)
Collecting py-cpuinfo>=3.3.0
  Downloading https://files.pythonhosted.org/packages/f6/f5/8e6e85ce2e9f6e05040cf0d4e26f43a4718bcc4bce988b433276d4b1a5c1/py-cpuinfo-7.0.0.tar.gz (95kB)
     |████████████████████████████████| 102kB 13.5MB/s
Requirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.7.12)
Requirement already satisfied: gin-config in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.3.0)
Requirement already satisfied: tensorflow-addons in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.8.3)
Requirement already satisfied: google-cloud-bigquery>=0.31.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.21.0)
Collecting pybind11>=2.4
  Downloading https://files.pythonhosted.org/packages/89/e3/d576f6f02bc75bacbc3d42494e8f1d063c95617d86648dba243c2cb3963e/pybind11-2.5.0-py2.py3-none-any.whl (296kB)
     |████████████████████████████████| 296kB 47.9MB/s
Requirement already satisfied: promise in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.3)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.24.0)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.23.0)
Requirement already satisfied: dill in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.3.2)
Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (20.2.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (4.41.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.16.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.0.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (50.3.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.17.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.2.2)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-nightly->tflite-model-maker) (0.1.5)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2.8.1)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (2020.6.20)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (4.0.1)
Requirement already satisfied: slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (0.0.1)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.24.3)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.4.8)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (4.6)
Requirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.17.4)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.2.8)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (0.10.0)
Requirement already satisfied: Keras>=2.2.4 in /usr/local/lib/python3.6/dist-packages (from seqeval->tf-models-nightly->tflite-model-maker) (2.4.3)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (0.0.4)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (3.0.1)
Requirement already satisfied: typeguard in /usr/local/lib/python3.6/dist-packages (from tensorflow-addons->tf-models-nightly->tflite-model-maker) (2.7.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.0.3 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.0.3)
Requirement already satisfied: google-resumable-media!=0.4.0,<0.5.0dev,>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: googleapis-common-protos<2,>=1.52.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata->tensorflow-datasets>=2.1.0->tflite-model-maker) (1.52.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (2.10)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.3.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (4.1.1)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.6/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.3)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-core<2.0dev,>=1.0.3->google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.16.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Building wheels for collected packages: fire, pyyaml, seqeval, py-cpuinfo
  Building wheel for fire (setup.py) ... done
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=f0b82e6b31e21d6db3591478a37188c727533acefe415b16b456c85ef9bef47c
Stored in directory: /root/.cache/pip/wheels/c1/61/df/768b03527bf006b546dce284eb4249b185669e65afc5fbb2ac
  Building wheel for pyyaml (setup.py) ... done
Created wheel for pyyaml: filename=PyYAML-5.3.1-cp36-cp36m-linux_x86_64.whl size=44619 sha256=cdbc63ead8369d7403f47b1adff163ebde2636c9f0c2a5ebd6413d156b2b7a9f
Stored in directory: /root/.cache/pip/wheels/a7/c1/ea/cf5bd31012e735dc1dfea3131a2d5eae7978b251083d6247bd
  Building wheel for seqeval (setup.py) ... done
Created wheel for seqeval: filename=seqeval-0.0.12-cp36-none-any.whl size=7423 sha256=3ac4a1cc3b88a9b1a1ed8217f2b8d3abb7f936e853383025888b94019d98a856
Stored in directory: /root/.cache/pip/wheels/4f/32/0a/df3b340a82583566975377d65e724895b3fad101a3fb729f68
  Building wheel for py-cpuinfo (setup.py) ... done
Created wheel for py-cpuinfo: filename=py_cpuinfo-7.0.0-cp36-none-any.whl size=20071 sha256=b5491e6fcabbf9ae464c0def53ec6ec27bbf01230ff96f4e34c6a7c44d55d5c9
Stored in directory: /root/.cache/pip/wheels/f1/93/7b/127daf0c3a5a49feb2fecd468d508067c733fba5192f726ad1
Successfully built fire pyyaml seqeval py-cpuinfo
Installing collected packages: tb-nightly, flatbuffers, tf-estimator-nightly, tf-nightly, pyyaml, tensorflow-model-optimization, opencv-python-headless, sentencepiece, tf-slim, seqeval, py-cpuinfo, tf-models-nightly, fire, pybind11, tflite-support, tflite-model-maker
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Successfully installed fire-0.3.1 flatbuffers-1.12 opencv-python-headless-4.4.0.42 py-cpuinfo-7.0.0 pybind11-2.5.0 pyyaml-5.3.1 sentencepiece-0.1.91 seqeval-0.0.12 tb-nightly-2.4.0a20200914 tensorflow-model-optimization-0.5.0 tf-estimator-nightly-2.4.0.dev2020091401 tf-models-nightly-2.3.0.dev20200914 tf-nightly-2.4.0.dev20200914 tf-slim-1.1.0 tflite-model-maker-0.1.2 tflite-support-0.1.0rc3.dev2
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
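###Markdown
The other entries from the table above can be selected in the same way. A small sketch (these specs are not used in the rest of this tutorial):
###Code
# Alternative specs listed in the table above.
mobilebert_spec = model_spec.get('mobilebert_qa')  # MobileBERT without the SQuAD1.1 retraining
bert_base_spec = model_spec.get('bert_qa')         # standard BERT-Base, larger and slower on-device
###Output
_____no_output_____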
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json
32571392/32570663 [==============================] - 1s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json
1171456/1167744 [==============================] - 0s 0us/step
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
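###Markdown
As noted above, data that is already in the SQuAD2.0 format (where some questions are unanswerable) is loaded by setting `version_2_with_negative=True`. A minimal sketch, assuming a hypothetical local file `squad2_train.json`:
###Code
# Hypothetical SQuAD2.0-format file; version_2_with_negative=True marks the SQuAD2.0 format.
squad2_train_data = QuestionAnswerDataLoader.from_squad(
    'squad2_train.json', spec, is_training=True, version_2_with_negative=True)
###Output
_____no_output_____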
###Markdown
Customize the TensorFlow ModelCreate a custom question answer model based on the loaded data. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Trains the question answer model. The default epochs and the default batch size are set by the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
INFO:tensorflow:Retraining the models...
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite ModelConvert the existing model to the TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
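###Markdown
Before wiring the exported file into an app, you can sanity-check it with the plain TensorFlow Lite interpreter. A small sketch that only inspects the tensor details (it does not run the BERT pre/post-processing):
###Code
# Load the exported model and print its input/output tensor details.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())
###Output
_____no_output_____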
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
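###Markdown
Since the export formats "can be one or a list", several artifacts can also be written in a single call. A sketch:
###Code
# Export the TFLite model together with the SavedModel in one call.
model.export(export_dir='.', export_format=[ExportFormat.TFLITE, ExportFormat.SAVED_MODEL])
###Output
_____no_output_____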
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced UsageThe `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Trains the question answer model.This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc. Adjust the modelYou can adjust the model infrastructure via parameters such as `seq_len` and `query_len` in the `BertQAModelSpec` class.Adjustable parameters for the model:* `seq_len`: Length of the passage to feed into the model.* `query_len`: Length of the question to feed into the model.* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.* `trainable`: Boolean, whether the pre-trained layer is trainable.Adjustable parameters for the training pipeline:* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.* `dropout_rate`: The rate for dropout.* `learning_rate`: The initial learning rate for Adam.* `predict_batch_size`: Batch size for prediction.* `tpu`: TPU address to connect to. Only used when training on a TPU. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
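###Markdown
The training-pipeline parameters listed above can be set on the spec in the same way before retraining. A sketch; the `epochs` and `batch_size` arguments are assumed to be accepted by `create`, as in other Model Maker tasks, and retraining with them takes as long as the original training run:
###Code
# learning_rate is listed above as an adjustable training-pipeline parameter.
new_spec.learning_rate = 3e-5
# Assumed create() arguments for tuning the training run (not shown elsewhere in this tutorial).
new_model = question_answer.create(train_data, model_spec=new_spec, epochs=5, batch_size=16)
###Output
_____no_output_____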
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The [TensorFlow Lite Model Maker library](https://www.tensorflow.org/lite/guide/model_maker) simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to BERT Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install -q tflite-model-maker
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.question_answer import DataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `DataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow ModelCreate a custom question answer model based on the loaded data. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Trains the question answer model. The default epochs and the default batch size are set by the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
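###Markdown
The returned metrics can also be kept in a variable, for example to compare them later against the exported TFLite model evaluated further below. A trivial sketch (the exact metric keys depend on the dataset format, as noted above):
###Code
# Keep the validation metrics around for a later comparison.
validation_metrics = model.evaluate(validation_data)
print(validation_metrics)
###Output
_____no_output_____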
###Markdown
Export to TensorFlow Lite ModelConvert the existing model to the TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = QuantizationConfig.for_dynamic()
config.experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced UsageThe `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQASpec` class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Trains the question answer model.This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc. Adjust the modelYou can adjust the model infrastructure via parameters such as `seq_len` and `query_len` in the `BertQASpec` class.Adjustable parameters for the model:* `seq_len`: Length of the passage to feed into the model.* `query_len`: Length of the question to feed into the model.* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.* `trainable`: Boolean, whether the pre-trained layer is trainable.Adjustable parameters for the training pipeline:* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.* `dropout_rate`: The rate for dropout.* `learning_rate`: The initial learning rate for Adam.* `predict_batch_size`: Batch size for prediction.* `tpu`: TPU address to connect to. Only used when training on a TPU. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to BERT Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow ModelCreate a custom question answer model based on the loaded data. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Trains the question answer model. The default epochs and the default batch size are set by the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite ModelConvert the existing model to the TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config.experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced UsageThe `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Trains the question answer model.This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc. Adjust the modelYou can adjust the model infrastructure via parameters such as `seq_len` and `query_len` in the `BertQAModelSpec` class.Adjustable parameters for the model:* `seq_len`: Length of the passage to feed into the model.* `query_len`: Length of the question to feed into the model.* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.* `trainable`: Boolean, whether the pre-trained layer is trainable.Adjustable parameters for the training pipeline:* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.* `dropout_rate`: The rate for dropout.* `learning_rate`: The initial learning rate for Adam.* `predict_batch_size`: Batch size for prediction.* `tpu`: TPU address to connect to. Only used when training on a TPU. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to BERT Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install -q tflite-model-maker-nightly
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.question_answer import DataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `DataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
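###Markdown
Here the same `version_2_with_negative=True` flag described above applies to `DataLoader.from_squad` for data already in the SQuAD2.0 format. A minimal sketch with a hypothetical local file:
###Code
# Hypothetical SQuAD2.0-format file; version_2_with_negative=True marks the SQuAD2.0 format.
squad2_train_data = DataLoader.from_squad(
    'squad2_train.json', spec, is_training=True, version_2_with_negative=True)
###Output
_____no_output_____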
###Markdown
Customize the TensorFlow ModelCreate a custom question answer model based on the loaded data. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Trains the question answer model. The default epochs and the default batch size are set by the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite ModelConvert the existing model to the TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = QuantizationConfig.for_dynamic()
config.experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced UsageThe `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQASpec` class is currently supported. There are two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Trains the question answer model.This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc. Adjust the modelYou can adjust the model infrastructure via parameters such as `seq_len` and `query_len` in the `BertQASpec` class.Adjustable parameters for the model:* `seq_len`: Length of the passage to feed into the model.* `query_len`: Length of the question to feed into the model.* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.* `trainable`: Boolean, whether the pre-trained layer is trainable.Adjustable parameters for the training pipeline:* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.* `dropout_rate`: The rate for dropout.* `learning_rate`: The initial learning rate for Adam.* `predict_batch_size`: Batch size for prediction.* `tpu`: TPU address to connect to. Only used when training on a TPU. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
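###Markdown
The other parameters from the lists above can be changed on the same spec object before retraining; a brief sketch with illustrative values only:
###Code
# Additional adjustable parameters from the lists above (illustrative values).
new_spec.query_len = 128     # allow longer questions
new_spec.doc_stride = 64     # smaller stride for the sliding window over long passages
new_spec.dropout_rate = 0.2  # training-pipeline parameter
###Output
_____no_output_____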
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to BERT Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog) As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model.spec = model_spec.get('mobilebert_qa') Gets the training data and validation data.train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model.model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result.metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory.model.export(export_dir)``` The following sections explain the code in more detail. PrerequisitesTo run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install -q tflite-model-maker
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.question_answer import DataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the Data

[TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.

To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:

* Skipping the samples for which no answer could be found in the context document;
* Getting the original answer from the context without changing its case.

Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.

If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).

Use the `DataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0; otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
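###Markdown
As an aside (not part of the original notebook), the cell below sketches how data in SQuAD2.0 format would be loaded. The file path is a hypothetical placeholder; only the `version_2_with_negative=True` flag described above is the point of the example.
###Code
# Hedged sketch: loading SQuAD2.0-format data (with unanswerable questions).
# 'my_squad2_train.json' is a placeholder path, not a file used in this tutorial.
squad2_train_data = DataLoader.from_squad(
    'my_squad2_train.json', spec, is_training=True,
    version_2_with_negative=True)
###Output
_____no_output_____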
###Markdown
Customize the TensorFlow Model

Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:

1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default number of epochs and the default batch size are set according to the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
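###Markdown
As a quick check (a sketch added here, not part of the original notebook), you can inspect the defaults mentioned above. The attribute names `default_training_epochs` and `default_batch_size` come from the description in the text; assume they may vary across library versions.
###Code
# Hedged sketch: print the training defaults stored on the model spec.
print('Default training epochs:', spec.default_training_epochs)
print('Default batch size:', spec.default_batch_size)
###Output
_____no_output_____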
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized Model

Evaluate the model on the validation data to get a dict of metrics, including the `f1` score and `exact match`. Note that the metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
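###Markdown
A small sketch (not from the original notebook): capture the dict returned by `evaluate` and print each metric. Only the fact that a dict of metrics is returned is assumed here; the exact keys depend on the SQuAD version and the library release.
###Code
# Hedged sketch: iterate over the returned metrics dict and print each entry.
metrics = model.evaluate(validation_data)
for name, value in metrics.items():
    print(f'{name}: {value}')
###Output
_____no_output_____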
###Markdown
Export to TensorFlow Lite Model

Convert the existing model to the TensorFlow Lite model format so that you can later use it in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = QuantizationConfig.for_dynamic()
config.experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
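###Markdown
As an optional sanity check (a sketch added here, not part of the original notebook), you can open the exported `model.tflite` with the standard TensorFlow Lite interpreter and list its input and output tensors.
###Code
# Hedged sketch: inspect the exported TFLite model's input/output tensors.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    print('input :', detail['name'], detail['shape'], detail['dtype'])
for detail in interpreter.get_output_details():
    print('output:', detail['name'], detail['shape'], detail['dtype'])
###Output
_____no_output_____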
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app, via the [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in the [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview), by downloading it from the left sidebar on Colab.

The allowed export formats can be one or a list of the following:

* `ExportFormat.TFLITE`
* `ExportFormat.VOCAB`
* `ExportFormat.SAVED_MODEL`

By default, it exports only the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, export only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
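###Markdown
Since the text above notes that `export_format` accepts either a single format or a list, here is a short sketch (not from the original notebook) that exports both the TFLite model and the SavedModel in one call.
###Code
# Hedged sketch: pass a list of export formats, as described above.
model.export(export_dir='.',
             export_format=[ExportFormat.TFLITE, ExportFormat.SAVED_MODEL])
###Output
_____no_output_____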
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced Usage

The `create` function is the critical part of this library. Its `model_spec` parameter defines the model specification; the `BertQASpec` class is currently supported, covering two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:

1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.

This section describes several advanced topics, including adjusting the model and tuning the training hyperparameters.

Adjust the model

You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQASpec` class.

Adjustable parameters for the model:

* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when using a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:

* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used when training on a TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
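###Markdown
To make the effect of the new spec concrete, the sketch below (not part of the original notebook) reloads the training data with `new_spec` and retrains. It assumes the data must be re-preprocessed for the longer sequence length and that `create` can simply be called again with the new spec; treat these details as assumptions rather than the library's documented workflow.
###Code
# Hedged sketch: re-preprocess the data with the adjusted spec and retrain.
new_train_data = DataLoader.from_squad(train_data_path, new_spec, is_training=True)
new_model = question_answer.create(new_train_data, model_spec=new_spec)
###Output
_____no_output_____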
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker

The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.

This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for the question answer task.

Introduction to BERT Question Answer Task

The supported task in this library is the extractive question answer task, which means that given a passage and a question, the answer is a span in the passage. The image below shows an example for question answer.

Answers are spans in the passage (image credit: SQuAD blog)

As for the model of the question answer task, the inputs should be the passage and question pair that are already preprocessed, and the outputs should be the start logits and end logits for each token in the passage. The size of the input can be set and adjusted according to the length of the passage and question.

End-to-End Overview

The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.

```python
# Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')

# Gets the training data and validation data.
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)

# Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)

# Gets the evaluation result.
metric = model.evaluate(validation_data)

# Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```

The following sections explain the code in more detail.

Prerequisites

To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
Collecting tflite-model-maker
[?25l Downloading https://files.pythonhosted.org/packages/13/bc/4c23b9cb9ef612a1f48bac5543bd531665de5eab8f8231111aac067f8c30/tflite_model_maker-0.1.2-py3-none-any.whl (104kB)
[K |███▏ | 10kB 28.4MB/s eta 0:00:01
[K |██████▎ | 20kB 1.8MB/s eta 0:00:01
[K |█████████▍ | 30kB 2.4MB/s eta 0:00:01
[K |████████████▋ | 40kB 2.7MB/s eta 0:00:01
[K |███████████████▊ | 51kB 2.1MB/s eta 0:00:01
[K |██████████████████▉ | 61kB 2.4MB/s eta 0:00:01
[K |██████████████████████ | 71kB 2.7MB/s eta 0:00:01
[K |█████████████████████████▏ | 81kB 2.9MB/s eta 0:00:01
[K |████████████████████████████▎ | 92kB 3.1MB/s eta 0:00:01
[K |███████████████████████████████▌| 102kB 3.0MB/s eta 0:00:01
[K |████████████████████████████████| 112kB 3.0MB/s
[?25hRequirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.10.0)
Collecting tf-nightly
[?25l Downloading https://files.pythonhosted.org/packages/33/d4/61c47ae889b490b9c5f07f4f61bdc057c158a1a1979c375fa019d647a19e/tf_nightly-2.4.0.dev20200914-cp36-cp36m-manylinux2010_x86_64.whl (390.1MB)
[K |████████████████████████████████| 390.2MB 43kB/s
[?25hRequirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (1.18.5)
Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (7.0.0)
Collecting tf-models-nightly
[?25l Downloading https://files.pythonhosted.org/packages/d3/e9/c4e5a451c268a5a75a27949562364f6086f6bb33b226a065a8beceefa9ba/tf_models_nightly-2.3.0.dev20200914-py2.py3-none-any.whl (993kB)
[K |████████████████████████████████| 1.0MB 57.6MB/s
[?25hCollecting flatbuffers==1.12
Downloading https://files.pythonhosted.org/packages/eb/26/712e578c5f14e26ae3314c39a1bdc4eb2ec2f4ddc89b708cf8e0a0d20423/flatbuffers-1.12-py2.py3-none-any.whl
Requirement already satisfied: tensorflow-hub>=0.8.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.9.0)
Collecting fire
[?25l Downloading https://files.pythonhosted.org/packages/34/a7/0e22e70778aca01a52b9c899d9c145c6396d7b613719cd63db97ffa13f2f/fire-0.3.1.tar.gz (81kB)
[K |████████████████████████████████| 81kB 11.5MB/s
[?25hCollecting sentencepiece
[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 50.9MB/s
[?25hCollecting tflite-support==0.1.0rc3.dev2
[?25l Downloading https://files.pythonhosted.org/packages/fa/c5/5e9ee3abd5b4ef8294432cd714407f49a66befa864905b66ee8bdc612795/tflite_support-0.1.0rc3.dev2-cp36-cp36m-manylinux2010_x86_64.whl (1.0MB)
[K |████████████████████████████████| 1.0MB 50.9MB/s
[?25hRequirement already satisfied: tensorflow-datasets>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (2.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from absl-py->tflite-model-maker) (1.15.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.3.0)
Collecting tb-nightly<3.0.0a0,>=2.4.0a0
[?25l Downloading https://files.pythonhosted.org/packages/fc/cb/4dfe0d65bffb5e9663261ff664e6f5a2d37672b31dae27a0f14721ac00d3/tb_nightly-2.4.0a20200914-py3-none-any.whl (10.1MB)
[K |████████████████████████████████| 10.1MB 51.4MB/s
[?25hRequirement already satisfied: typing-extensions>=3.7.4.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.7.4.3)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.35.1)
Collecting tf-estimator-nightly
[?25l Downloading https://files.pythonhosted.org/packages/bd/9a/3bfb9994eda11e426c809ebdf434e2ac5824a0784d980018bb53fd1620ec/tf_estimator_nightly-2.4.0.dev2020091401-py2.py3-none-any.whl (460kB)
[K |████████████████████████████████| 460kB 36.0MB/s
[?25hRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.2.0)
Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (2.10.0)
Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.2)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.12.1)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.32.0)
Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.12.4)
Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.3.3)
Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.6.3)
Requirement already satisfied: scipy>=0.19.1 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.4.1)
Collecting pyyaml>=5.1
[?25l Downloading https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz (269kB)
[K |████████████████████████████████| 276kB 59.8MB/s
[?25hCollecting tensorflow-model-optimization>=0.4.1
[?25l Downloading https://files.pythonhosted.org/packages/55/38/4fd48ea1bfcb0b6e36d949025200426fe9c3a8bfae029f0973d85518fa5a/tensorflow_model_optimization-0.5.0-py2.py3-none-any.whl (172kB)
[K |████████████████████████████████| 174kB 51.0MB/s
[?25hRequirement already satisfied: pandas>=0.22.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.0.5)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.7)
Requirement already satisfied: Cython in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.29.21)
Collecting opencv-python-headless
[?25l Downloading https://files.pythonhosted.org/packages/b6/2a/496e06fd289c01dc21b11970be1261c87ce1cc22d5340c14b516160822a7/opencv_python_headless-4.4.0.42-cp36-cp36m-manylinux2014_x86_64.whl (36.6MB)
[K |████████████████████████████████| 36.6MB 83kB/s
[?25hRequirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.5.8)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (2.0.2)
Requirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (4.1.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (3.2.2)
Collecting tf-slim>=1.1.0
[?25l Downloading https://files.pythonhosted.org/packages/02/97/b0f4a64df018ca018cc035d44f2ef08f91e2e8aa67271f6f19633a015ff7/tf_slim-1.1.0-py2.py3-none-any.whl (352kB)
[K |████████████████████████████████| 358kB 55.9MB/s
[?25hCollecting seqeval
Downloading https://files.pythonhosted.org/packages/34/91/068aca8d60ce56dd9ba4506850e876aba5e66a6f2f29aa223224b50df0de/seqeval-0.0.12.tar.gz
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (5.4.8)
Collecting py-cpuinfo>=3.3.0
[?25l Downloading https://files.pythonhosted.org/packages/f6/f5/8e6e85ce2e9f6e05040cf0d4e26f43a4718bcc4bce988b433276d4b1a5c1/py-cpuinfo-7.0.0.tar.gz (95kB)
[K |████████████████████████████████| 102kB 13.5MB/s
[?25hRequirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.7.12)
Requirement already satisfied: gin-config in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.3.0)
Requirement already satisfied: tensorflow-addons in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.8.3)
Requirement already satisfied: google-cloud-bigquery>=0.31.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.21.0)
Collecting pybind11>=2.4
[?25l Downloading https://files.pythonhosted.org/packages/89/e3/d576f6f02bc75bacbc3d42494e8f1d063c95617d86648dba243c2cb3963e/pybind11-2.5.0-py2.py3-none-any.whl (296kB)
[K |████████████████████████████████| 296kB 47.9MB/s
[?25hRequirement already satisfied: promise in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.3)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.24.0)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.23.0)
Requirement already satisfied: dill in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.3.2)
Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (20.2.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (4.41.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.16.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.0.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (50.3.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.17.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.2.2)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-nightly->tflite-model-maker) (0.1.5)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2.8.1)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (2020.6.20)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (4.0.1)
Requirement already satisfied: slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (0.0.1)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.24.3)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.4.8)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (4.6)
Requirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.17.4)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.2.8)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (0.10.0)
Requirement already satisfied: Keras>=2.2.4 in /usr/local/lib/python3.6/dist-packages (from seqeval->tf-models-nightly->tflite-model-maker) (2.4.3)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (0.0.4)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (3.0.1)
Requirement already satisfied: typeguard in /usr/local/lib/python3.6/dist-packages (from tensorflow-addons->tf-models-nightly->tflite-model-maker) (2.7.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.0.3 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.0.3)
Requirement already satisfied: google-resumable-media!=0.4.0,<0.5.0dev,>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: googleapis-common-protos<2,>=1.52.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata->tensorflow-datasets>=2.1.0->tflite-model-maker) (1.52.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (2.10)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.3.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (4.1.1)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.6/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.3)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-core<2.0dev,>=1.0.3->google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.16.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Building wheels for collected packages: fire, pyyaml, seqeval, py-cpuinfo
Building wheel for fire (setup.py) ... [?25l[?25hdone
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=f0b82e6b31e21d6db3591478a37188c727533acefe415b16b456c85ef9bef47c
Stored in directory: /root/.cache/pip/wheels/c1/61/df/768b03527bf006b546dce284eb4249b185669e65afc5fbb2ac
Building wheel for pyyaml (setup.py) ... [?25l[?25hdone
Created wheel for pyyaml: filename=PyYAML-5.3.1-cp36-cp36m-linux_x86_64.whl size=44619 sha256=cdbc63ead8369d7403f47b1adff163ebde2636c9f0c2a5ebd6413d156b2b7a9f
Stored in directory: /root/.cache/pip/wheels/a7/c1/ea/cf5bd31012e735dc1dfea3131a2d5eae7978b251083d6247bd
Building wheel for seqeval (setup.py) ... [?25l[?25hdone
Created wheel for seqeval: filename=seqeval-0.0.12-cp36-none-any.whl size=7423 sha256=3ac4a1cc3b88a9b1a1ed8217f2b8d3abb7f936e853383025888b94019d98a856
Stored in directory: /root/.cache/pip/wheels/4f/32/0a/df3b340a82583566975377d65e724895b3fad101a3fb729f68
Building wheel for py-cpuinfo (setup.py) ... [?25l[?25hdone
Created wheel for py-cpuinfo: filename=py_cpuinfo-7.0.0-cp36-none-any.whl size=20071 sha256=b5491e6fcabbf9ae464c0def53ec6ec27bbf01230ff96f4e34c6a7c44d55d5c9
Stored in directory: /root/.cache/pip/wheels/f1/93/7b/127daf0c3a5a49feb2fecd468d508067c733fba5192f726ad1
Successfully built fire pyyaml seqeval py-cpuinfo
Installing collected packages: tb-nightly, flatbuffers, tf-estimator-nightly, tf-nightly, pyyaml, tensorflow-model-optimization, opencv-python-headless, sentencepiece, tf-slim, seqeval, py-cpuinfo, tf-models-nightly, fire, pybind11, tflite-support, tflite-model-maker
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Successfully installed fire-0.3.1 flatbuffers-1.12 opencv-python-headless-4.4.0.42 py-cpuinfo-7.0.0 pybind11-2.5.0 pyyaml-5.3.1 sentencepiece-0.1.91 seqeval-0.0.12 tb-nightly-2.4.0a20200914 tensorflow-model-optimization-0.5.0 tf-estimator-nightly-2.4.0.dev2020091401 tf-models-nightly-2.3.0.dev20200914 tf-nightly-2.4.0.dev20200914 tf-slim-1.1.0 tflite-model-maker-0.1.2 tflite-support-0.1.0rc3.dev2
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the Data

[TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.

To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:

* Skipping the samples for which no answer could be found in the context document;
* Getting the original answer from the context without changing its case.

Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json
32571392/32570663 [==============================] - 1s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json
1171456/1167744 [==============================] - 0s 0us/step
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.

If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).

Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or the SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0; otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow Model

Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:

1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default number of epochs and the default batch size are set according to the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
INFO:tensorflow:Retraining the models...
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized Model

Evaluate the model on the validation data to get a dict of metrics, including the `f1` score and `exact match`. Note that the metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite Model

Convert the existing model to the TensorFlow Lite model format so that you can later use it in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
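###Markdown
As a quick check (a sketch added here, not part of the original notebook), you can confirm the effect of dynamic range quantization by looking at the size of the exported file on disk; `os` was already imported at the top of the notebook.
###Code
# Hedged sketch: report the on-disk size of the quantized TFLite model.
size_mb = os.path.getsize('model.tflite') / (1024 * 1024)
print(f'model.tflite: {size_mb:.1f} MB')
###Output
_____no_output_____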
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app, via the [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in the [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview), by downloading it from the left sidebar on Colab.

The allowed export formats can be one or a list of the following:

* `ExportFormat.TFLITE`
* `ExportFormat.VOCAB`
* `ExportFormat.SAVED_MODEL`

By default, it exports only the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, export only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
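###Markdown
Because this step can take a long time, the sketch below (not part of the original notebook) simply wraps the call with a timer so you can see how long the TFLite evaluation took.
###Code
# Hedged sketch: time the TFLite evaluation and print whatever it returns.
import time
start = time.time()
tflite_metrics = model.evaluate_tflite('model.tflite', validation_data)
print(f'Finished in {time.time() - start:.0f} s:', tflite_metrics)
###Output
_____no_output_____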
###Markdown
Advanced Usage

The `create` function is the critical part of this library. Its `model_spec` parameter defines the model specification; the `BertQAModelSpec` class is currently supported, covering two models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:

1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.

This section describes several advanced topics, including adjusting the model and tuning the training hyperparameters.

Adjust the model

You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQAModelSpec` class.

Adjustable parameters for the model:

* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when using a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:

* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used when training on a TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
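###Markdown
The other attributes listed above can be adjusted in the same way. The cell below is a sketch (not part of the original notebook); the attribute names follow the list in the text and the values are illustrative. Note that data loaded with the old spec would need to be re-preprocessed before retraining with the adjusted one.
###Code
# Hedged sketch: adjust further BertQAModelSpec attributes on the new spec.
new_spec.query_len = 96    # allow longer questions
new_spec.doc_stride = 160  # stride of the sliding window over the passage
###Output
_____no_output_____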
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker

The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.

This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for the question answer task.

Introduction to BERT Question Answer Task

The supported task in this library is the extractive question answer task, which means that given a passage and a question, the answer is a span in the passage. The image below shows an example for question answer.

Answers are spans in the passage (image credit: SQuAD blog)

As for the model of the question answer task, the inputs should be the passage and question pair that are already preprocessed, and the outputs should be the start logits and end logits for each token in the passage. The size of the input can be set and adjusted according to the length of the passage and question.

End-to-End Overview

The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.

```python
# Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')

# Gets the training data and validation data.
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)

# Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)

# Gets the evaluation result.
metric = model.evaluate(validation_data)

# Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```

The following sections explain the code in more detail.

Prerequisites

To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
Collecting tflite-model-maker
[?25l Downloading https://files.pythonhosted.org/packages/13/bc/4c23b9cb9ef612a1f48bac5543bd531665de5eab8f8231111aac067f8c30/tflite_model_maker-0.1.2-py3-none-any.whl (104kB)
[K |███▏ | 10kB 28.4MB/s eta 0:00:01
[K |██████▎ | 20kB 1.8MB/s eta 0:00:01
[K |█████████▍ | 30kB 2.4MB/s eta 0:00:01
[K |████████████▋ | 40kB 2.7MB/s eta 0:00:01
[K |███████████████▊ | 51kB 2.1MB/s eta 0:00:01
[K |██████████████████▉ | 61kB 2.4MB/s eta 0:00:01
[K |██████████████████████ | 71kB 2.7MB/s eta 0:00:01
[K |█████████████████████████▏ | 81kB 2.9MB/s eta 0:00:01
[K |████████████████████████████▎ | 92kB 3.1MB/s eta 0:00:01
[K |███████████████████████████████▌| 102kB 3.0MB/s eta 0:00:01
[K |████████████████████████████████| 112kB 3.0MB/s
[?25hRequirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.10.0)
Collecting tf-nightly
[?25l Downloading https://files.pythonhosted.org/packages/33/d4/61c47ae889b490b9c5f07f4f61bdc057c158a1a1979c375fa019d647a19e/tf_nightly-2.4.0.dev20200914-cp36-cp36m-manylinux2010_x86_64.whl (390.1MB)
[K |████████████████████████████████| 390.2MB 43kB/s
[?25hRequirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (1.18.5)
Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (7.0.0)
Collecting tf-models-nightly
[?25l Downloading https://files.pythonhosted.org/packages/d3/e9/c4e5a451c268a5a75a27949562364f6086f6bb33b226a065a8beceefa9ba/tf_models_nightly-2.3.0.dev20200914-py2.py3-none-any.whl (993kB)
[K |████████████████████████████████| 1.0MB 57.6MB/s
[?25hCollecting flatbuffers==1.12
Downloading https://files.pythonhosted.org/packages/eb/26/712e578c5f14e26ae3314c39a1bdc4eb2ec2f4ddc89b708cf8e0a0d20423/flatbuffers-1.12-py2.py3-none-any.whl
Requirement already satisfied: tensorflow-hub>=0.8.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.9.0)
Collecting fire
[?25l Downloading https://files.pythonhosted.org/packages/34/a7/0e22e70778aca01a52b9c899d9c145c6396d7b613719cd63db97ffa13f2f/fire-0.3.1.tar.gz (81kB)
[K |████████████████████████████████| 81kB 11.5MB/s
[?25hCollecting sentencepiece
[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 50.9MB/s
[?25hCollecting tflite-support==0.1.0rc3.dev2
[?25l Downloading https://files.pythonhosted.org/packages/fa/c5/5e9ee3abd5b4ef8294432cd714407f49a66befa864905b66ee8bdc612795/tflite_support-0.1.0rc3.dev2-cp36-cp36m-manylinux2010_x86_64.whl (1.0MB)
[K |████████████████████████████████| 1.0MB 50.9MB/s
[?25hRequirement already satisfied: tensorflow-datasets>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (2.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from absl-py->tflite-model-maker) (1.15.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.3.0)
Collecting tb-nightly<3.0.0a0,>=2.4.0a0
[?25l Downloading https://files.pythonhosted.org/packages/fc/cb/4dfe0d65bffb5e9663261ff664e6f5a2d37672b31dae27a0f14721ac00d3/tb_nightly-2.4.0a20200914-py3-none-any.whl (10.1MB)
[K |████████████████████████████████| 10.1MB 51.4MB/s
[?25hRequirement already satisfied: typing-extensions>=3.7.4.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.7.4.3)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.35.1)
Collecting tf-estimator-nightly
[?25l Downloading https://files.pythonhosted.org/packages/bd/9a/3bfb9994eda11e426c809ebdf434e2ac5824a0784d980018bb53fd1620ec/tf_estimator_nightly-2.4.0.dev2020091401-py2.py3-none-any.whl (460kB)
[K |████████████████████████████████| 460kB 36.0MB/s
[?25hRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.2.0)
Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (2.10.0)
Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.2)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.12.1)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.32.0)
Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.12.4)
Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.3.3)
Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.6.3)
Requirement already satisfied: scipy>=0.19.1 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.4.1)
Collecting pyyaml>=5.1
[?25l Downloading https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz (269kB)
[K |████████████████████████████████| 276kB 59.8MB/s
[?25hCollecting tensorflow-model-optimization>=0.4.1
[?25l Downloading https://files.pythonhosted.org/packages/55/38/4fd48ea1bfcb0b6e36d949025200426fe9c3a8bfae029f0973d85518fa5a/tensorflow_model_optimization-0.5.0-py2.py3-none-any.whl (172kB)
[K |████████████████████████████████| 174kB 51.0MB/s
[?25hRequirement already satisfied: pandas>=0.22.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.0.5)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.7)
Requirement already satisfied: Cython in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.29.21)
Collecting opencv-python-headless
[?25l Downloading https://files.pythonhosted.org/packages/b6/2a/496e06fd289c01dc21b11970be1261c87ce1cc22d5340c14b516160822a7/opencv_python_headless-4.4.0.42-cp36-cp36m-manylinux2014_x86_64.whl (36.6MB)
[K |████████████████████████████████| 36.6MB 83kB/s
[?25hRequirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.5.8)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (2.0.2)
Requirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (4.1.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (3.2.2)
Collecting tf-slim>=1.1.0
[?25l Downloading https://files.pythonhosted.org/packages/02/97/b0f4a64df018ca018cc035d44f2ef08f91e2e8aa67271f6f19633a015ff7/tf_slim-1.1.0-py2.py3-none-any.whl (352kB)
[K |████████████████████████████████| 358kB 55.9MB/s
[?25hCollecting seqeval
Downloading https://files.pythonhosted.org/packages/34/91/068aca8d60ce56dd9ba4506850e876aba5e66a6f2f29aa223224b50df0de/seqeval-0.0.12.tar.gz
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (5.4.8)
Collecting py-cpuinfo>=3.3.0
[?25l Downloading https://files.pythonhosted.org/packages/f6/f5/8e6e85ce2e9f6e05040cf0d4e26f43a4718bcc4bce988b433276d4b1a5c1/py-cpuinfo-7.0.0.tar.gz (95kB)
[K |████████████████████████████████| 102kB 13.5MB/s
[?25hRequirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.7.12)
Requirement already satisfied: gin-config in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.3.0)
Requirement already satisfied: tensorflow-addons in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.8.3)
Requirement already satisfied: google-cloud-bigquery>=0.31.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.21.0)
Collecting pybind11>=2.4
[?25l Downloading https://files.pythonhosted.org/packages/89/e3/d576f6f02bc75bacbc3d42494e8f1d063c95617d86648dba243c2cb3963e/pybind11-2.5.0-py2.py3-none-any.whl (296kB)
[K |████████████████████████████████| 296kB 47.9MB/s
[?25hRequirement already satisfied: promise in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.3)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.24.0)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.23.0)
Requirement already satisfied: dill in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.3.2)
Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (20.2.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (4.41.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.16.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.0.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (50.3.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.17.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.2.2)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-nightly->tflite-model-maker) (0.1.5)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2.8.1)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (2020.6.20)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (4.0.1)
Requirement already satisfied: slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (0.0.1)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.24.3)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.4.8)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (4.6)
Requirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.17.4)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.2.8)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (0.10.0)
Requirement already satisfied: Keras>=2.2.4 in /usr/local/lib/python3.6/dist-packages (from seqeval->tf-models-nightly->tflite-model-maker) (2.4.3)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (0.0.4)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (3.0.1)
Requirement already satisfied: typeguard in /usr/local/lib/python3.6/dist-packages (from tensorflow-addons->tf-models-nightly->tflite-model-maker) (2.7.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.0.3 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.0.3)
Requirement already satisfied: google-resumable-media!=0.4.0,<0.5.0dev,>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: googleapis-common-protos<2,>=1.52.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata->tensorflow-datasets>=2.1.0->tflite-model-maker) (1.52.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (2.10)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.3.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (4.1.1)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.6/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.3)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-core<2.0dev,>=1.0.3->google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.16.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Building wheels for collected packages: fire, pyyaml, seqeval, py-cpuinfo
Building wheel for fire (setup.py) ... [?25l[?25hdone
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=f0b82e6b31e21d6db3591478a37188c727533acefe415b16b456c85ef9bef47c
Stored in directory: /root/.cache/pip/wheels/c1/61/df/768b03527bf006b546dce284eb4249b185669e65afc5fbb2ac
Building wheel for pyyaml (setup.py) ... [?25l[?25hdone
Created wheel for pyyaml: filename=PyYAML-5.3.1-cp36-cp36m-linux_x86_64.whl size=44619 sha256=cdbc63ead8369d7403f47b1adff163ebde2636c9f0c2a5ebd6413d156b2b7a9f
Stored in directory: /root/.cache/pip/wheels/a7/c1/ea/cf5bd31012e735dc1dfea3131a2d5eae7978b251083d6247bd
Building wheel for seqeval (setup.py) ... [?25l[?25hdone
Created wheel for seqeval: filename=seqeval-0.0.12-cp36-none-any.whl size=7423 sha256=3ac4a1cc3b88a9b1a1ed8217f2b8d3abb7f936e853383025888b94019d98a856
Stored in directory: /root/.cache/pip/wheels/4f/32/0a/df3b340a82583566975377d65e724895b3fad101a3fb729f68
Building wheel for py-cpuinfo (setup.py) ... [?25l[?25hdone
Created wheel for py-cpuinfo: filename=py_cpuinfo-7.0.0-cp36-none-any.whl size=20071 sha256=b5491e6fcabbf9ae464c0def53ec6ec27bbf01230ff96f4e34c6a7c44d55d5c9
Stored in directory: /root/.cache/pip/wheels/f1/93/7b/127daf0c3a5a49feb2fecd468d508067c733fba5192f726ad1
Successfully built fire pyyaml seqeval py-cpuinfo
Installing collected packages: tb-nightly, flatbuffers, tf-estimator-nightly, tf-nightly, pyyaml, tensorflow-model-optimization, opencv-python-headless, sentencepiece, tf-slim, seqeval, py-cpuinfo, tf-models-nightly, fire, pybind11, tflite-support, tflite-model-maker
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Successfully installed fire-0.3.1 flatbuffers-1.12 opencv-python-headless-4.4.0.42 py-cpuinfo-7.0.0 pybind11-2.5.0 pyyaml-5.3.1 sentencepiece-0.1.91 seqeval-0.0.12 tb-nightly-2.4.0a20200914 tensorflow-model-optimization-0.5.0 tf-estimator-nightly-2.4.0.dev2020091401 tf-models-nightly-2.3.0.dev20200914 tf-nightly-2.4.0.dev20200914 tf-slim-1.1.0 tflite-model-maker-0.1.2 tflite-support-0.1.0rc3.dev2
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json
32571392/32570663 [==============================] - 1s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json
1171456/1167744 [==============================] - 0s 0us/step
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
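###Markdown
If your own dataset is already in the SQuAD2.0 format, pass `version_2_with_negative=True` when loading it. The cell below is a minimal sketch of that variant; the file name `my_squad2_train.json` is a hypothetical upload, not part of this tutorial's data.
###Code
# Hypothetical SQuAD2.0-format file uploaded via the Colab sidebar (assumption).
my_squad2_train_path = 'my_squad2_train.json'
# version_2_with_negative=True tells the loader to expect SQuAD2.0-style data,
# which may contain unanswerable questions.
my_train_data = QuestionAnswerDataLoader.from_squad(
    my_squad2_train_path, spec, is_training=True, version_2_with_negative=True)
###Output
_____no_output_____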
###Markdown
Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default epochs and the default batch size are set according to the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
INFO:tensorflow:Retraining the models...
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
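###Markdown
Since `evaluate` returns a dict, you can also capture the result and print each metric by name. A small sketch (the exact metric names depend on whether the data is SQuAD1.1 or SQuAD2.0, as noted above):
###Code
# Capture the metrics dict and print every entry it contains.
metrics = model.evaluate(validation_data)
for name, value in metrics.items():
  print(name, value)
###Output
_____no_output_____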
###Markdown
Export to TensorFlow Lite Model
Convert the existing model to the TensorFlow Lite model format so that you can later use it in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config.experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced Usage
The `create` function is the critical part of this library, in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are 2 models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc.

Adjust the model
You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQAModelSpec` class.

Adjustable parameters for the model:
* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:
* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used if using TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
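###Markdown
To actually retrain with the longer sequence length, reload the data with `new_spec` and create a new model from it, mirroring the earlier steps. A minimal sketch of that flow (retraining takes roughly as long as the original run):
###Code
# Reload the training data so it is preprocessed with the new sequence length.
long_train_data = QuestionAnswerDataLoader.from_squad(
    train_data_path, new_spec, is_training=True)
# Fine-tune a new model under the adjusted spec.
long_model = question_answer.create(long_train_data, model_spec=new_spec)
###Output
_____no_output_____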
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
BERT Question Answer with TensorFlow Lite Model Maker
View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook

The [TensorFlow Lite Model Maker library](https://www.tensorflow.org/lite/guide/model_maker) simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for the question answer task.

Introduction to BERT Question Answer Task
The supported task in this library is the extractive question answer task, which means that given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog).

As for the model of the question answer task, the inputs should be the passage and question pair that are already preprocessed, and the outputs should be the start logits and end logits for each token in the passage. The size of the input could be set and adjusted according to the length of the passage and question.

End-to-End Overview
The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.

```python
# Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')

# Gets the training data and validation data.
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)

# Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)

# Gets the evaluation result.
metric = model.evaluate(validation_data)

# Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```

The following sections explain the code in more detail.

Prerequisites
To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install -q tflite-model-maker
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.question_answer import DataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `DataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default epochs and the default batch size are set according to the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite Model
Convert the trained model to the TensorFlow Lite model format with [metadata](https://www.tensorflow.org/lite/convert/metadata) so that you can later use it in an on-device ML application. The vocab file is embedded in the metadata. The default TFLite filename is `model.tflite`.
In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster. The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
###Code
model.export(export_dir='.')
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
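###Markdown
Because `export_format` also accepts a list, several artifacts can be exported in a single call. A small sketch of that variant:
###Code
# Export both the TFLite model (with metadata) and the vocab file at once.
model.export(export_dir='.',
             export_format=[ExportFormat.TFLITE, ExportFormat.VOCAB])
###Output
_____no_output_____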
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced Usage
The `create` function is the critical part of this library, in which the `model_spec` parameter defines the model specification. The `BertQASpec` class is currently supported. There are 2 models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc.

Adjust the model
You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQASpec` class.

Adjustable parameters for the model:
* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:
* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used if using TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Question Answer with TensorFlow Lite Model Maker
View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook

The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for the question answer task.

Introduction to Question Answer Task
The supported task in this library is the extractive question answer task, which means that given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog).

As for the model of the question answer task, the inputs should be the passage and question pair that are already preprocessed, and the outputs should be the start logits and end logits for each token in the passage. The size of the input could be set and adjusted according to the length of the passage and question.

End-to-End Overview
The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.

```python
# Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')

# Gets the training data and validation data.
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)

# Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)

# Gets the evaluation result.
metric = model.evaluate(validation_data)

# Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```

The following sections explain the code in more detail.

Prerequisites
To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install tflite-model-maker
###Output
Collecting tflite-model-maker
[?25l Downloading https://files.pythonhosted.org/packages/13/bc/4c23b9cb9ef612a1f48bac5543bd531665de5eab8f8231111aac067f8c30/tflite_model_maker-0.1.2-py3-none-any.whl (104kB)
[K |███▏ | 10kB 28.4MB/s eta 0:00:01
[K |██████▎ | 20kB 1.8MB/s eta 0:00:01
[K |█████████▍ | 30kB 2.4MB/s eta 0:00:01
[K |████████████▋ | 40kB 2.7MB/s eta 0:00:01
[K |███████████████▊ | 51kB 2.1MB/s eta 0:00:01
[K |██████████████████▉ | 61kB 2.4MB/s eta 0:00:01
[K |██████████████████████ | 71kB 2.7MB/s eta 0:00:01
[K |█████████████████████████▏ | 81kB 2.9MB/s eta 0:00:01
[K |████████████████████████████▎ | 92kB 3.1MB/s eta 0:00:01
[K |███████████████████████████████▌| 102kB 3.0MB/s eta 0:00:01
[K |████████████████████████████████| 112kB 3.0MB/s
[?25hRequirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.10.0)
Collecting tf-nightly
[?25l Downloading https://files.pythonhosted.org/packages/33/d4/61c47ae889b490b9c5f07f4f61bdc057c158a1a1979c375fa019d647a19e/tf_nightly-2.4.0.dev20200914-cp36-cp36m-manylinux2010_x86_64.whl (390.1MB)
[K |████████████████████████████████| 390.2MB 43kB/s
[?25hRequirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (1.18.5)
Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (7.0.0)
Collecting tf-models-nightly
[?25l Downloading https://files.pythonhosted.org/packages/d3/e9/c4e5a451c268a5a75a27949562364f6086f6bb33b226a065a8beceefa9ba/tf_models_nightly-2.3.0.dev20200914-py2.py3-none-any.whl (993kB)
[K |████████████████████████████████| 1.0MB 57.6MB/s
[?25hCollecting flatbuffers==1.12
Downloading https://files.pythonhosted.org/packages/eb/26/712e578c5f14e26ae3314c39a1bdc4eb2ec2f4ddc89b708cf8e0a0d20423/flatbuffers-1.12-py2.py3-none-any.whl
Requirement already satisfied: tensorflow-hub>=0.8.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (0.9.0)
Collecting fire
[?25l Downloading https://files.pythonhosted.org/packages/34/a7/0e22e70778aca01a52b9c899d9c145c6396d7b613719cd63db97ffa13f2f/fire-0.3.1.tar.gz (81kB)
[K |████████████████████████████████| 81kB 11.5MB/s
[?25hCollecting sentencepiece
[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 50.9MB/s
[?25hCollecting tflite-support==0.1.0rc3.dev2
[?25l Downloading https://files.pythonhosted.org/packages/fa/c5/5e9ee3abd5b4ef8294432cd714407f49a66befa864905b66ee8bdc612795/tflite_support-0.1.0rc3.dev2-cp36-cp36m-manylinux2010_x86_64.whl (1.0MB)
[K |████████████████████████████████| 1.0MB 50.9MB/s
[?25hRequirement already satisfied: tensorflow-datasets>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from tflite-model-maker) (2.1.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from absl-py->tflite-model-maker) (1.15.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.3.0)
Collecting tb-nightly<3.0.0a0,>=2.4.0a0
[?25l Downloading https://files.pythonhosted.org/packages/fc/cb/4dfe0d65bffb5e9663261ff664e6f5a2d37672b31dae27a0f14721ac00d3/tb_nightly-2.4.0a20200914-py3-none-any.whl (10.1MB)
[K |████████████████████████████████| 10.1MB 51.4MB/s
[?25hRequirement already satisfied: typing-extensions>=3.7.4.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.7.4.3)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.35.1)
Collecting tf-estimator-nightly
[?25l Downloading https://files.pythonhosted.org/packages/bd/9a/3bfb9994eda11e426c809ebdf434e2ac5824a0784d980018bb53fd1620ec/tf_estimator_nightly-2.4.0.dev2020091401-py2.py3-none-any.whl (460kB)
[K |████████████████████████████████| 460kB 36.0MB/s
[?25hRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.2.0)
Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (2.10.0)
Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.1.2)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.12.1)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.32.0)
Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (3.12.4)
Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (0.3.3)
Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tf-nightly->tflite-model-maker) (1.6.3)
Requirement already satisfied: scipy>=0.19.1 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.4.1)
Collecting pyyaml>=5.1
[?25l Downloading https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz (269kB)
[K |████████████████████████████████| 276kB 59.8MB/s
[?25hCollecting tensorflow-model-optimization>=0.4.1
[?25l Downloading https://files.pythonhosted.org/packages/55/38/4fd48ea1bfcb0b6e36d949025200426fe9c3a8bfae029f0973d85518fa5a/tensorflow_model_optimization-0.5.0-py2.py3-none-any.whl (172kB)
[K |████████████████████████████████| 174kB 51.0MB/s
[?25hRequirement already satisfied: pandas>=0.22.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.0.5)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.7)
Requirement already satisfied: Cython in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.29.21)
Collecting opencv-python-headless
[?25l Downloading https://files.pythonhosted.org/packages/b6/2a/496e06fd289c01dc21b11970be1261c87ce1cc22d5340c14b516160822a7/opencv_python_headless-4.4.0.42-cp36-cp36m-manylinux2014_x86_64.whl (36.6MB)
[K |████████████████████████████████| 36.6MB 83kB/s
[?25hRequirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.5.8)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (2.0.2)
Requirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (4.1.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (3.2.2)
Collecting tf-slim>=1.1.0
[?25l Downloading https://files.pythonhosted.org/packages/02/97/b0f4a64df018ca018cc035d44f2ef08f91e2e8aa67271f6f19633a015ff7/tf_slim-1.1.0-py2.py3-none-any.whl (352kB)
[K |████████████████████████████████| 358kB 55.9MB/s
[?25hCollecting seqeval
Downloading https://files.pythonhosted.org/packages/34/91/068aca8d60ce56dd9ba4506850e876aba5e66a6f2f29aa223224b50df0de/seqeval-0.0.12.tar.gz
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (5.4.8)
Collecting py-cpuinfo>=3.3.0
[?25l Downloading https://files.pythonhosted.org/packages/f6/f5/8e6e85ce2e9f6e05040cf0d4e26f43a4718bcc4bce988b433276d4b1a5c1/py-cpuinfo-7.0.0.tar.gz (95kB)
[K |████████████████████████████████| 102kB 13.5MB/s
[?25hRequirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.7.12)
Requirement already satisfied: gin-config in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.3.0)
Requirement already satisfied: tensorflow-addons in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (0.8.3)
Requirement already satisfied: google-cloud-bigquery>=0.31.0 in /usr/local/lib/python3.6/dist-packages (from tf-models-nightly->tflite-model-maker) (1.21.0)
Collecting pybind11>=2.4
[?25l Downloading https://files.pythonhosted.org/packages/89/e3/d576f6f02bc75bacbc3d42494e8f1d063c95617d86648dba243c2cb3963e/pybind11-2.5.0-py2.py3-none-any.whl (296kB)
[K |████████████████████████████████| 296kB 47.9MB/s
[?25hRequirement already satisfied: promise in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.3)
Requirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.24.0)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (2.23.0)
Requirement already satisfied: dill in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.3.2)
Requirement already satisfied: attrs>=18.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (20.2.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (4.41.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from tensorflow-datasets>=2.1.0->tflite-model-maker) (0.16.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.0.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (50.3.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.17.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.2.2)
Requirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-model-optimization>=0.4.1->tf-models-nightly->tflite-model-maker) (0.1.5)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.22.0->tf-models-nightly->tflite-model-maker) (2.8.1)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (2020.6.20)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (4.0.1)
Requirement already satisfied: slugify in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (0.0.1)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.24.3)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.4.8)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (4.6)
Requirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.17.4)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->tf-models-nightly->tflite-model-maker) (0.2.8)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->tf-models-nightly->tflite-model-maker) (0.10.0)
Requirement already satisfied: Keras>=2.2.4 in /usr/local/lib/python3.6/dist-packages (from seqeval->tf-models-nightly->tflite-model-maker) (2.4.3)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (0.0.4)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.6.7->tf-models-nightly->tflite-model-maker) (3.0.1)
Requirement already satisfied: typeguard in /usr/local/lib/python3.6/dist-packages (from tensorflow-addons->tf-models-nightly->tflite-model-maker) (2.7.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.0.3 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.0.3)
Requirement already satisfied: google-resumable-media!=0.4.0,<0.5.0dev,>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (0.4.1)
Requirement already satisfied: googleapis-common-protos<2,>=1.52.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-metadata->tensorflow-datasets>=2.1.0->tflite-model-maker) (1.52.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19.0->tensorflow-datasets>=2.1.0->tflite-model-maker) (2.10)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.3.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (4.1.1)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (1.7.0)
Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.6/dist-packages (from python-slugify->kaggle>=1.3.9->tf-models-nightly->tflite-model-maker) (1.3)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from google-cloud-core<2.0dev,>=1.0.3->google-cloud-bigquery>=0.31.0->tf-models-nightly->tflite-model-maker) (1.16.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tb-nightly<3.0.0a0,>=2.4.0a0->tf-nightly->tflite-model-maker) (3.1.0)
Building wheels for collected packages: fire, pyyaml, seqeval, py-cpuinfo
Building wheel for fire (setup.py) ... [?25l[?25hdone
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=f0b82e6b31e21d6db3591478a37188c727533acefe415b16b456c85ef9bef47c
Stored in directory: /root/.cache/pip/wheels/c1/61/df/768b03527bf006b546dce284eb4249b185669e65afc5fbb2ac
Building wheel for pyyaml (setup.py) ... [?25l[?25hdone
Created wheel for pyyaml: filename=PyYAML-5.3.1-cp36-cp36m-linux_x86_64.whl size=44619 sha256=cdbc63ead8369d7403f47b1adff163ebde2636c9f0c2a5ebd6413d156b2b7a9f
Stored in directory: /root/.cache/pip/wheels/a7/c1/ea/cf5bd31012e735dc1dfea3131a2d5eae7978b251083d6247bd
Building wheel for seqeval (setup.py) ... [?25l[?25hdone
Created wheel for seqeval: filename=seqeval-0.0.12-cp36-none-any.whl size=7423 sha256=3ac4a1cc3b88a9b1a1ed8217f2b8d3abb7f936e853383025888b94019d98a856
Stored in directory: /root/.cache/pip/wheels/4f/32/0a/df3b340a82583566975377d65e724895b3fad101a3fb729f68
Building wheel for py-cpuinfo (setup.py) ... [?25l[?25hdone
Created wheel for py-cpuinfo: filename=py_cpuinfo-7.0.0-cp36-none-any.whl size=20071 sha256=b5491e6fcabbf9ae464c0def53ec6ec27bbf01230ff96f4e34c6a7c44d55d5c9
Stored in directory: /root/.cache/pip/wheels/f1/93/7b/127daf0c3a5a49feb2fecd468d508067c733fba5192f726ad1
Successfully built fire pyyaml seqeval py-cpuinfo
Installing collected packages: tb-nightly, flatbuffers, tf-estimator-nightly, tf-nightly, pyyaml, tensorflow-model-optimization, opencv-python-headless, sentencepiece, tf-slim, seqeval, py-cpuinfo, tf-models-nightly, fire, pybind11, tflite-support, tflite-model-maker
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Successfully installed fire-0.3.1 flatbuffers-1.12 opencv-python-headless-4.4.0.42 py-cpuinfo-7.0.0 pybind11-2.5.0 pyyaml-5.3.1 sentencepiece-0.1.91 seqeval-0.0.12 tb-nightly-2.4.0a20200914 tensorflow-model-optimization-0.5.0 tf-estimator-nightly-2.4.0.dev2020091401 tf-models-nightly-2.3.0.dev20200914 tf-nightly-2.4.0.dev20200914 tf-slim-1.1.0 tflite-model-maker-0.1.2 tflite-support-0.1.0rc3.dev2
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json
32571392/32570663 [==============================] - 1s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json
1171456/1167744 [==============================] - 0s 0us/step
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either the SQuAD2.0 or SQuAD1.1 format. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
###Markdown
Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model. The default epochs and the default batch size are set according to the two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
INFO:tensorflow:Retraining the models...
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite Model
Convert the existing model to the TensorFlow Lite model format so that you can later use it in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with minimal loss of performance. First, define the quantization configuration:
###Code
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config with [metadata](https://www.tensorflow.org/lite/convert/metadata). The default TFLite model filename is `model.tflite`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app using [BertQuestionAnswerer API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_question_answerer) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview) by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following:* `ExportFormat.TFLITE`* `ExportFormat.VOCAB`* `ExportFormat.SAVED_MODEL`By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
###Code
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
###Output
_____no_output_____
###Markdown
You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced Usage
The `create` function is the critical part of this library, in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are 2 models: the MobileBERT model and the BERT-Base model. The `create` function comprises the following steps:
1. Creates the model for question answer according to `model_spec`.
2. Trains the question answer model.
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, etc.

Adjust the model
You can adjust the model infrastructure, such as the parameters `seq_len` and `query_len`, in the `BertQAModelSpec` class.

Adjustable parameters for the model:
* `seq_len`: Length of the passage to feed into the model.
* `query_len`: Length of the question to feed into the model.
* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.
* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.
* `trainable`: Boolean, whether the pre-trained layer is trainable.

Adjustable parameters for the training pipeline:
* `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
* `dropout_rate`: The rate for dropout.
* `learning_rate`: The initial learning rate for Adam.
* `predict_batch_size`: Batch size for prediction.
* `tpu`: TPU address to connect to. Only used if using TPU.

For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
###Output
_____no_output_____
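###Markdown
The other parameters listed above can be adjusted the same way before reloading the data and retraining. A hedged sketch, assuming they are plain attributes of the spec exactly like `seq_len`; the chosen values are only illustrative:
###Code
# Assumed to be settable attributes of the spec, analogous to seq_len.
new_spec.query_len = 96        # allow longer questions
new_spec.doc_stride = 160      # larger sliding-window stride over the passage
new_spec.learning_rate = 3e-5  # initial learning rate for Adam
# After changing the spec, reload the data with new_spec and call
# question_answer.create(...) again to retrain.
###Output
_____no_output_____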
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Question Answer with TensorFlow Lite Model Maker
View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook

The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for the question answer task.

Introduction to Question Answer Task
The supported task in this library is the extractive question answer task, which means that given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. Answers are spans in the passage (image credit: SQuAD blog).

As for the model of the question answer task, the inputs should be the passage and question pair that are already preprocessed, and the outputs should be the start logits and end logits for each token in the passage. The size of the input could be set and adjusted according to the length of the passage and question.

End-to-End Overview
The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.

```python
# Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')

# Gets the training data and validation data.
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)

# Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)

# Gets the evaluation result.
metric = model.evaluate(validation_data)

# Exports the model to the TensorFlow Lite format in the export directory.
model.export(export_dir)
```

The following sections explain the code in more detail.

Prerequisites
To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
###Code
!pip install git+https://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_maker]
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_maker.core.data_util.text_dataloader import QuestionAnswerDataLoader
from tensorflow_examples.lite.model_maker.core.task import model_spec
from tensorflow_examples.lite.model_maker.core.task import question_answer
from tensorflow_examples.lite.model_maker.core.task.configs import QuantizationConfig
###Output
_____no_output_____
###Markdown
The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answerEach `model_spec` object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.Supported Model | Name of model_spec | Model Description--- | --- | ---[MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.[MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/).[BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 'bert_qa' | Standard BERT model that widely used in NLP tasks.In this tutorial, [MobileBERT-SQuAD](https://arxiv.org/pdf/2004.02984.pdf) is used as an example. Since the model is already retrained on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/), it could coverage faster for question answer task.
###Code
spec = model_spec.get('mobilebert_qa_squad')
###Output
_____no_output_____
###Markdown
Load Input Data Specific to an On-device ML App and Preprocess the DataThe [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.To load the data, convert the TriviaQA dataset to the [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/) format by running the [converter Python script](https://github.com/mandarjoshi90/triviaqamiscellaneous) with `--sample_size=8000` and a set of `web` data. Modify the conversion code a little bit by:* Skipping the samples that couldn't find any answer in the context document;* Getting the original answer in the context without uppercase or lowercase.Download the archived version of the already converted dataset.
###Code
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
###Output
_____no_output_____
###Markdown
You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.If you prefer not to upload your data to the cloud, you can also run the library offline by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). Use the `QuestionAnswerDataLoader.from_squad` method to load and preprocess the [SQuAD format](https://rajpurkar.github.io/SQuAD-explorer/) data according to a specific `model_spec`. You can use either SQuAD2.0 or SQuAD1.1 formats. Setting the parameter `version_2_with_negative` to `True` means the format is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, `version_2_with_negative` is `False`.
###Code
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
###Output
_____no_output_____
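###Markdown
If your own dataset were in the SQuAD2.0 format instead (which allows unanswerable questions), only the flag described above would change; the file name below is a hypothetical placeholder, not part of this tutorial.
###Code
# Sketch (assumption): loading a hypothetical SQuAD2.0-formatted file.
# my_train_data = QuestionAnswerDataLoader.from_squad(
#     'my_squad2_train.json', spec, is_training=True, version_2_with_negative=True)
###Output
_____no_output_____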
###Markdown
Customize the TensorFlow ModelCreate a custom question answer model based on the loaded data. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Train the question answer model. The default epochs and the default batch size are set according to two variables `default_training_epochs` and `default_batch_size` in the `model_spec` object.
###Code
model = question_answer.create(train_data, model_spec=spec)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Evaluate the Customized ModelEvaluate the model on the validation data and get a dict of metrics including `f1` score and `exact match` etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
###Code
model.evaluate(validation_data)
###Output
_____no_output_____
###Markdown
Export to TensorFlow Lite ModelConvert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration:
###Code
config = QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
###Output
_____no_output_____
###Markdown
Export the quantized TFLite model according to the quantization config and save the vocabulary to a vocab file. The default TFLite model filename is `model.tflite`, and the default vocab filename is `vocab`.
###Code
model.export(export_dir='.', quantization_config=config)
###Output
_____no_output_____
###Markdown
You can use the TensorFlow Lite model file and vocab file in the [bert_qa](https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android) reference app by downloading it from the left sidebar on Colab. You can also evaluate the tflite model with the `evaluate_tflite` method. This step is expected to take a long time.
###Code
model.evaluate_tflite('model.tflite', validation_data)
###Output
_____no_output_____
###Markdown
Advanced UsageThe `create` function is the critical part of this library in which the `model_spec` parameter defines the model specification. The `BertQAModelSpec` class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The `create` function comprises the following steps:1. Creates the model for question answer according to `model_spec`.2. Train the question answer model.This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc. Adjust the modelYou can adjust the model infrastructure like parameters `seq_len` and `query_len` in the `BertQAModelSpec` class.Adjustable parameters for model:* `seq_len`: Length of the passage to feed into the model.* `query_len`: Length of the question to feed into the model.* `doc_stride`: The stride when doing a sliding window approach to take chunks of the documents.* `initializer_range`: The stdev of the truncated_normal_initializer for initializing all weight matrices.* `trainable`: Boolean, whether pre-trained layer is trainable.Adjustable parameters for training pipeline:* `model_dir`: The location of the model checkpoint files. If not set, temporary directory will be used.* `dropout_rate`: The rate for dropout.* `learning_rate`: The initial learning rate for Adam.* `predict_batch_size`: Batch size for prediction.* `tpu`: TPU address to connect to. Only used if using tpu. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new `model_spec`.
###Code
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
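# Sketch (assumptions): other attributes such as `query_len` or `dropout_rate` are set the
# same way, and the data would then need to be reloaded and the model retrained with the
# adjusted spec for the change to take effect.
# new_train_data = QuestionAnswerDataLoader.from_squad(train_data_path, new_spec, is_training=True)
# new_model = question_answer.create(new_train_data, model_spec=new_spec)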
###Output
_____no_output_____ |
Classification/LightGBM.ipynb | ###Markdown
Load the Wisconsin breast cancer dataset and apply LightGBM
###Code
from lightgbm import LGBMClassifier
import pandas as pd
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
dataset = load_breast_cancer()
ftr = dataset.data
target = dataset.target
# split into train / test sets
X_train, X_test, y_train, y_test = train_test_split(ftr, target, test_size=0.2, random_state = 216)
# LGBM (the keyword argument is `n_estimators`, not `n_estimator`)
lgbm_wrapper = LGBMClassifier(n_estimators=400)
evals = [(X_test, y_test)] # in practice, a separate validation set should be used for evaluation
lgbm_wrapper.fit(X_train, y_train, early_stopping_rounds=100, eval_metric="logloss",
eval_set=evals, verbose=True)
preds = lgbm_wrapper.predict(X_test)
pred_proba = lgbm_wrapper.predict_proba(X_test)[:,1]
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import precision_score, recall_score
from sklearn.metrics import f1_score, roc_auc_score
def get_clf_eval(y_test, pred=None, pred_proba=None):
confusion = confusion_matrix(y_test, pred)
accuracy = accuracy_score(y_test, pred)
precision = precision_score(y_test, pred)
recall = recall_score(y_test, pred)
f1 = f1_score(y_test, pred)
# ROC-AUC
roc_auc = roc_auc_score(y_test, pred_proba)
    print('confusion matrix')
    print(confusion)
    print(f"accuracy: {accuracy:.4f}, precision: {precision:.4f}, recall: {recall:.4f},\
    F1: {f1:.4f}, AUC: {roc_auc:.4f}")
get_clf_eval(y_test, preds, pred_proba)
from lightgbm import plot_importance
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(figsize=(10,12))
plot_importance(lgbm_wrapper, ax=ax)
###Output
_____no_output_____ |
samples/Using-TFRecorder-with-Google-Cloud-Dataflow.ipynb | ###Markdown
Using Google Cloud DataFlow with TFRecorderThis notebook demonstrates how to use TFRecorder with Google Cloud DataFlow to scale up to processing a dataset of any size. Notebook Setup1. Install TFRecorder by running `python setup.py` from the repository root.2. Create a new GCS bucket with the command `gsutil mb gs://your/bucket/name/` and set the BUCKET constant to that name.3. Copy the test images from the TFRutil repo to the new GCS bucket with the command `gsutil cp -r ./tfrutil/test_data/images gs://<BUCKET_NAME>/images`
###Code
import pandas as pd
import tfrecorder
import os
!pip download tfrecorder --no-deps
!cp tfrecorder* /tmp
BUCKET="" # ADD YOUR BUCKET HERE, E.G. "GS://MYBUCKET/"
PROJECT="" # ADD YOUR PROJECT NAME HERE
REGION="" # ADD A COMPUTE REGION HERE
OUTPUT_PATH = "results/"
TFRECORDER_WHEEL = "/tmp/tfrecorder-0.1.1-py3-none-any.whl" #UPDATE VERSION AS NEEDED
df = pd.read_csv("data.csv")
###Output
_____no_output_____
###Markdown
Update image_uri The image_uri column is currently pointing to the local file locations for each test image. We will change this path to the new GCS location below.
###Code
df['image_uri'] = df.image_uri.str.replace("../tfrecorder/", BUCKET)
df
df.tensorflow.to_tfr(output_dir=BUCKET + OUTPUT_PATH,
runner="DataflowRunner",
project=PROJECT,
region=REGION,
tfrecorder_wheel=TFRECORDER_WHEEL)
###Output
_____no_output_____
###Markdown
Using Google Cloud DataFlow with TFRecorderThis notebook demonstrates how to use TFRecorder with Google Cloud DataFlow to scale up to processing a dataset of any size. Notebook Setup1. Install TFRecorder by running `python setup.py` from the repository root.2. Create a new GCS bucket with the command `gsutil mb gs://your/bucket/name/` and set the BUCKET constant to that name.3. Copy the test images from the TFRutil repo to the new GCS bucket with the command `gsutil cp -r ./tfrutil/test_data/images gs://<BUCKET_NAME>/images`
###Code
import pandas as pd
import tfrecorder
import os
!pip download tfrecorder --no-deps
!cp tfrecorder* /tmp
BUCKET="gs://tfrecorder-output/" # ADD YOUR BUCKET HERE, E.G. "GS://MYBUCKET/"
PROJECT="jared-playground" # ADD YOUR PROJECT NAME HERE
REGION="us-central1" # ADD A COMPUTE REGION HERE
OUTPUT_PATH = "results/"
TFRECORDER_WHEEL = "/home/jupyter/tensorflow-recorder/samples/tfrecorder-2.0-py3-none-any.whl" #UPDATE VERSION AS NEEDED
df = pd.read_csv("/home/jupyter/tensorflow-recorder/tfrecorder/test_data/data.csv")
df['image_uri'][0]
###Output
_____no_output_____
###Markdown
Update image_uri The image_uri column is currently pointing to the local file locations for each test image. We will change this path to the new GCS location below.
###Code
df['image_uri'] = df.image_uri.str.replace("tfrecorder/", BUCKET)
df['image_uri'][0]
df.tensorflow.to_tfr(output_dir=BUCKET + OUTPUT_PATH,
runner="DataflowRunner",
project=PROJECT,
region=REGION,
tfrecorder_wheel=TFRECORDER_WHEEL)
###Output
_____no_output_____
###Markdown
Using Google Cloud DataFlow with TFRUtilThis notebook demonstrates how to use TFRUtil with Google Cloud DataFlow to scale up to processing a dataset of any size. Notebook Setup1. Install TFRUtil by running `python setup.py` from the repository root.2. Create a new GCS bucket with the command `gsutil mb gs://your/bucket/name` and set the BUCKET constant to that name.3. Copy the test images from the TFRutil repo to the new GCS bucket with the command `gsutil cp -r ./tfrutil/test_data/images gs://<BUCKET_NAME>/images`
###Code
import pandas as pd
import tfrutil
BUCKET="" # ADD YOUR BUCKET HERE
PROJECT="" # ADD YOUR PROJECT NAME HERE
REGION="" # ADD A COMPUTE REGION HERE
TFRUTIL_PATH = "" # ADD THE LOCAL PATH TO YOUR CLONE OF THE TFRUTIL REPO HERE
OUTPUT_PATH = "/results/"
df = pd.read_csv("data.csv")
df
###Output
_____no_output_____
###Markdown
Update image_uri The image_uri column is currently pointing to the local file locations for each test image. We will change this path to the new GCS location below.
###Code
df['image_uri'] = BUCKET + df.image_uri.str.slice(start=20)
df
df.tensorflow.to_tfr(output_dir=BUCKET + OUTPUT_PATH,
                     runner="DataflowRunner",
project=PROJECT,
region=REGION,
tfrutil_path=TFRUTIL_PATH)
###Output
_____no_output_____ |
_build/jupyter_execute/rise/03a-matematica-discreta-rise.ipynb | ###Markdown
Fundamentos de Matemática Discreta com Python Matemática Discreta- Área da Matemática que lida com objetos discretos, a saber, conjuntos, sequencias, listas, coleções ou quaisquer entidades *contáveis*. - Exemplo, $\mathbb{R}$ é incontável, ou não enumerável- Vários exemplos de contáveis: - O conjunto das vogais da língua portuguesa; - O conjunto dos times de futebol brasileiros da série A em 2020; - O conjunto de nomes das estações do ano; - O conjunto das personagens do quarteto do filme *Os Pinguins de Madagascar* e; - O conjunto dos números pares positivos menores ou iguais a dez. - Conjuntos denotados por *extensão*: quando listamos seus elementos- $\{ a, e, i, o, u \}$- $\{ \text{Atlético-PR}, \ldots, \text{Bahia}, \text{Botafogo}, \ldots, \text{Coritiba}, \ldots, \text{Fortaleza}, \ldots, \text{Internacional}, \ldots, \text{São Paulo}, \text{Sport}, \text{Vasco} \}$- $\{ \text{Primavera}, \text{Verão}, \text{Outono}, \text{Inverno}\}$- $\{ \text{Capitão}, \text{Kowalski}, \text{Recruta}, \text{Rico}\}$- $\{ 2, 4, 6, 8,10\}$ - Denotados por *compreensão*: quando usamos uma propriedade que distingue seus elementos. - $\{ c \in \mathbb{A} \, ; \, c \text{ é vogal} \}$- $\{ t \in \mathbb{T} \, ; \, t \text{ é da Série A} \}$- $\{ x \, ; \, x \text{ é uma estação do ano} \}$- $\{ p \, ; \, p \text{ é personagem do quarteto principal do filme Os Pinguins de Madagascar} \}$- $\{ e \, ; \, e \text{ é estação do ano} \}$- $\{ n \in \mathbb{Z} \, | \, n = 2k \wedge 2 \leq n \leq 10 \wedge k \in \mathbb{Z} \}$ Por livre conveniência: - $\mathbb{A}$ é o conjunto de todas as letras de nosso alfabeto- $\mathbb{T}$ é o conjunto de todos os times de futebol do Brasil. Estruturas de dados para objetos discretosAs principais que aprenderemos: - `list`: estrutura cujo conteúdo é modificável e o tamanho variável. Listas são caracterizadas por *mutabilidade* e *variabilidade*. Objetos `list` são definidos por um par de colchetes e vírgulas que separam seus elementos: `[., ., ... ,.]`.- `tuple`: estrutura cujo conteúdo não é modificável e o tamanho fixado. Tuplas são caracterizadas por *imutabilidade* e *invariabilidade*. Objetos `tuple` são definidos por um par de colchetes e vírgulas que separam seus elementos: `(., ., ... ,.)`. - `dict`: estruturas contendo uma coleção de pares do tipo *chave-valor*. Dicionários são caracterizados por *arrays associativos* (*tabelas hash*). Objetos `dict` são definidos por um par de chaves e agrupamentos do tipo `'chave':valor` (*key:value*), separados por vírgula: `{'chave1':valor1, 'chave2':valor2, ... ,'chaven':valorn}`. As chaves (*keys*) podem ser do tipo `int`, `float`, `str`, ou `tuple` ao passo que os valores podem ser de tipos arbitrários.- `set`: estruturas similares a `dict`, porém não possuem chaves e contêm objetos únicos. Conjuntos são caracterizadas por *unicidade* de elementos. Objetos `set` são definidos por um par de chaves e vírgulas que separam seus elementos: `{., ., ... ,.}`. ListasEstruturas `list` formam uma coleção de objetos arbitrários e podem ser criadas de modo sequenciado com operadores de pertencimento ou por expressões geradoras, visto que são estruturas iteráveis.
###Code
vogais = ['a','e','i','o','u'] # elementos são 'str'
vogais
times = ['Bahia', 'Sport', 'Fortaleza', 'Flamengo']
times
pares10 = [2,4,6,8,10]
pares10
mix = ['Bahia',24,6.54,[1,2]] # vários objetos na lista
mix
###Output
_____no_output_____
###Markdown
Listas por geração**Exemplo**: crie uma lista dos primeiros 100 inteiros não-negativos.
###Code
os_100 = range(100) # range é uma função geradora
print(list(os_100)) # casting com 'list'
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
###Markdown
**Exemplo**: crie o conjunto $\{ x \in \mathbb{Z} \, ; \, -20 \leq x < 10 \}$
###Code
print(list(range(-20,10))) # print é usado para imprimir column-wise
###Output
[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
**Exemplo**: crie o conjunto $\{ x \in \mathbb{Z} \, ; \, -20 \leq x \leq 10 \}$
###Code
print(list(range(-20,11))) # para incluir 10, 11 deve ser o limite. Por quê?
###Output
[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
###Markdown
Adicionando e removendo elementosHá vários métodos aplicáveis para adicionar e remover elementos em listas. Adição por apensamentoAdiciona elementos por concatenação no final da lista.
###Code
times.append('Botafogo')
times
times.append('Fluminense')
times
###Output
_____no_output_____
###Markdown
Adição por extensão Para incluir elementos através de um objeto iterável, sequenciável, usamos `extend`.
###Code
falta = ['Vasco', 'Atlético-MG']
times.extend(falta) # usa outra lista pra estender a lista
times
###Output
_____no_output_____
###Markdown
Iteração e indexação- *Iterar* sobre uma lista é "passear" por seus elementos- Em Python, a indexação de listas vai de `0` a `n - 1`, onde `n` é o tamanho da lista. Por exemplo:$\text{posição} : \{p=0, p=1, \ldots, p={n-1}\}$$\text{elementos na lista} : [x_1, x_2, \ldots, x_{n}]$- Mesma idéia aplicável a qualquer coleção, sequencia ou objeto iterável. Remoção por índiceSuponha que tivéssemos criado a lista:
###Code
pares = [0,2,5,6] # 5 não é par
pares
###Output
_____no_output_____
###Markdown
Como 5 não é par, não deveria estar na lista. Para excluírmos um elemento em uma posição específica, usamos `pop` passando o *índice* onde o elemento está.
###Code
pares.pop(2) # o ímpar 5 está na posição 2 e NÃO 3!
pares
###Output
_____no_output_____
###Markdown
Adição por índiceNesta lista, podemos pensar em incluir 4 entre 2 e 6. Para isto, usamos `insert(posicao,valor)`, para `valor` na `posicao` desejada.
###Code
pares.insert(2,4) # 4 é inserido na posição de 6, que é deslocado
pares
###Output
_____no_output_____
###Markdown
Apagar conteúdo da listaPodemos apagar o conteúdo inteiro da lista com `clear`.
###Code
times.clear()
times # lista está vazia
###Output
_____no_output_____
###Markdown
Podemos contar o número de elementos da lista com `len`.
###Code
len(times) # verifica que a lista está vazia
type([]) # a lista é vazia, mas continua sendo lista
###Output
_____no_output_____
###Markdown
Outros métodos de lista Conte repetições de elementos na lista com `count`.
###Code
numeros = [1,1,2,3,1,2,4,5,6,3,4,4,5,5]
print( numeros.count(1), numeros.count(3), numeros.count(7) )
###Output
3 2 0
###Markdown
Localize a posição de um elemento com `index`.
###Code
numeros.index(5) # retorna a posição da primeira aparição
###Output
_____no_output_____
###Markdown
Remova a primeira aparição do elemento com `remove`.
###Code
numeros.remove(1) # perde apenas o primeiro
numeros
###Output
_____no_output_____
###Markdown
Faça uma reflexão ("flip") *in-place* (sem criar nova lista) da lista com `reverse`.
###Code
numeros.reverse()
numeros
###Output
_____no_output_____
###Markdown
Ordene a lista de maneira *in-place* (sem criar nova lista) com `sort`.
###Code
numeros.sort()
numeros
###Output
_____no_output_____
###Markdown
Concatenação de listasListas são concatenadas ("somadas") com `+`. Caso já possua listas definidas, use `extend`.
###Code
['Flamengo', 'Botafogo'] + ['Fluminense']
['Flamengo', 'Botafogo'] + 'Fluminense' # erro: 'Fluminense' não é list
times_nordeste = ['Fortaleza','Sport']
times_sul = ['Coritiba','Atlético-PR']
times_nordeste + times_sul
times_nordeste.extend(times_sul) # mesma coisa
times_nordeste
###Output
_____no_output_____
###Markdown
Fatiamento de listas O fatiamento ("slicing") permite que selecionemos partes da lista através do modelo `start:stop`, em que `start` é um índice incluído na iteração, e `stop` não.
###Code
letras = ['a','b','c','d','e','f','g']
letras[0:2]
letras[1:4]
letras[5:6]
letras[0:7] # toda a lista
###Output
_____no_output_____
###Markdown
Omissão de `start` e `stop`
###Code
letras[:3] # até 3, exclusive
letras[:5] # até 5, exclusive
letras[4:] # de 4 em diante
letras[6:] # de 6 em diante
###Output
_____no_output_____
###Markdown
Modo reverso
###Code
letras[-1] # último índice
letras[-2:-1] # do penúltimo ao último, exclusive
letras[-3:-1]
letras[-4:-2]
letras[-7:-1] # toda a lista
letras[-5:]
letras[:-3]
###Output
_____no_output_____
###Markdown
Elementos alternados com `step`Podemos usar um dois pontos duplo (`::`) para dar um "passo" de alternância.
###Code
letras[::2] # salta 2-1 intermediários
letras[::3] # salta 3-1 intermediários
letras[::7] # salto de igual tamanho
letras[::8] # salto além do tamanho
###Output
_____no_output_____
###Markdown
Mutabilidade de listasPodemos alterar o conteúdo de elementos diretamente por indexação.
###Code
from sympy.abc import x,y
ops = [x+y,x-y,x*y,x/y]
ops2 = ops.copy() # cópia de ops
ops
ops[0] = x-y
ops
ops[2] = x/y
ops
ops[1], ops[3] = x + y, x*y # mutação por desempacotamento
ops
ops[1:3] = [False, False, True] # mutação por fatiamento
ops
ops = ops2 # recuperando ops
ops
ops2 is ops
ops3 = [] # lista vazia
ops3
ops2 = ops + ops3 # concatenação cria uma lista nova
ops2
ops2 is ops # agora, ops2 não é ops
print(id(ops), id(ops2)) # imprime local na memória de ambas
ops2 == ops # todos os elementos são iguais
###Output
_____no_output_____
###Markdown
O teste de identidade é `False`, mas o teste de igualdade é `True`. **Exemplo:** Escreva uma função que calcule a área, perímetro, comprimento da diagonal, raio, perímetro e área do círculo inscrito, e armazene os resultados em uma lista.
###Code
# usaremos matemática simbólica
from sympy import symbols
from math import pi
# símbolos
B, H = symbols('B H',positive=True)
def propriedades_retangulo(B,H):
'''
A função assume que a base B
é maior do que a altura H. Senão,
as propriedades do círculo inscrito
não serão determinadas.
'''
d = (B**2 + H**2)**(1/2) # comprimento da diagonal
r = H/2 # raio do círculo inscrito
return [B*H, 2*(B+H), d, r, 2*pi*r, pi*(r)**2]
# lista de objetos símbolos
propriedades_retangulo(B,H)
# substituindo valores
B, H = 4.0, 2.5
propriedades_retangulo(B,H)
###Output
_____no_output_____
###Markdown
Formatação de stringsO *template* a seguir usa a função `format` para substituição de valores indexados.```pythontempl = '{0} {1} ... {n}'.format(arg0,arg1,...,argn)```**Nota:** Para ajuda plena sobre formatação, consultar: ```pythonhelp('FORMATTING')```
###Code
# considere R: retângulo; C: círculo inscrito
res = propriedades_retangulo(B,H) # resultado
props = ['Área de R',
'Perímetro de R',
'Diagonal de R',
'Raio de C',
'Perímetro de C',
'Área de C'
] # propriedades
# template
templ = '{0:s} = {1:.2f}\n\
{2:s} = {3:.3f}\n\
{4:s} = {5:.4f}\n\
{6:s} = {7:.5f}\n\
{8:s} = {9:.6f}\n\
{10:s} = {11:.7f}'.format(props[0],res[0],\
props[1],res[1],\
props[2],res[2],\
props[3],res[3],\
props[4],res[4],\
props[5],res[5])
# impressão formatada
print(templ)
###Output
Área de R = 10.00
Perímetro de R = 13.000
Diagonal de R = 4.7170
Raio de C = 1.25000
Perímetro de C = 7.853982
Área de C = 4.9087385
###Markdown
Como interpretar o que fizemos? - `{0:s}` formata o primeiro argumento de `format`, o qual é `props[0]`, como `str` (`s`).- `{1:.2f}` formata o segundo argumento de `format`, o qual é `res[0]`, como `float` (`f`) com duas casas decimais (`.2`).- `{3:.3f}` formata o quarto argumento de `format`, o qual é `res[1]`, como `float` (`f`) com três casas decimais (`.3`).A partir daí, percebe-se que um template `{X:.Yf}` diz para formatar o argumento `X` como `float` com `Y` casas decimais, ao passo que o template `{X:s}` diz para formatar o argumento `X` como `str`. Além disso, temos:- `\n`, que significa "newline", isto é, uma quebra da linha.- `\`, que é um *caracter de escape* para continuidade da instrução na linha seguinte. No exemplo em tela, o *template* criado é do tipo *multi-line*. **Nota:** a contrabarra em `\n` também é um caracter de escape e não um caracter *literal*. Isto é, para imprimir uma contrabarra literalmente, é necessário fazer `\\`. Vejamos exemplos de literais a seguir. Exemplos de impressão de caracteres literais
###Code
print('\\') # imprime contrabarra literal
print('\\\\') # imprime duas contrabarras literais
print('\'') # imprime plica
print('\"') # imprime aspas
###Output
\
\\
'
"
###Markdown
f-stringsTemos uma maneira bastante interessante de criar templates usando f-strings, que foi introduzida a partir da versão Python 3.6. Com f-strings a substituição é imediata.
###Code
print(f'{props[0]} = {res[0]}') # estilo f-string
###Output
Área de R = 10.0
###Markdown
Estilos de formataçãoVeja um comparativo de estilos:
###Code
print('%s = %f ' % (props[0], res[0])) # Python 2
print('{} = {}'.format(props[0], res[0])) # Python 3
print('{0:s} = {1:.4f}'.format(props[0], res[0])) # Python 3 formatado
###Output
Área de R = 10.000000
Área de R = 10.0
Área de R = 10.0000
###Markdown
**Exemplo:** Considere o conjunto: V = $\{ c \in \mathbb{A} \, ; \, c \text{ é vogal} \}.$ Crie a concatenação de todos os elementos com f-string.
###Code
V = ['a','e','i','o','u']
V
f'{V[0]}{V[1]}{V[2]}{V[3]}{V[4]}' # pouco Pythônico
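# Alternativa mais concisa (esboço): concatenar os elementos com str.join
''.join(V)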
###Output
_____no_output_____
###Markdown
Veremos à frente meios mais elegantes de fazer coisas similares. Controle de fluxo: laço `for`Em Python, podemos realizar iterar por uma coleção ou iterador usando *laços*. Introduziremos aqui o laço `for`. Em Python, o bloco padrão para este laço é dado por: ```pythonfor valor in sequencia: faça algo com valor```Acima, `valor` é um iterador.
###Code
for v in vogais: # itera sobre lista inteira
print(v)
for v in vogais[0:3]: # itera parcialmente
print(v + 'a')
for v in vogais[-2:]:
print(f'{v*10}')
###Output
oooooooooo
uuuuuuuuuu
###Markdown
Compreensão de listaUsando `for`, a criação de listas torna-se bastante facilitada. **Exemplo:** crie a lista dos primeiros 10 quadrados perfeitos.
###Code
Q = [q*q for q in range(1,11)]
Q
###Output
_____no_output_____
###Markdown
A operação acima equivale a:
###Code
Q2 = []
for q in range(1,11):
Q2.append(q*q)
Q2
###Output
_____no_output_____
###Markdown
**Exemplo:** crie a PA: $a_n = 3 + 6(n-1), \, 1 \leq n \leq 10$
###Code
PA = [3 + 6*(n-1) for n in range(1,11) ]
PA
###Output
_____no_output_____
###Markdown
**Exemplo:** se $X = \{1,2,3\}$ e $Y=\{4,5,6\}$, crie a "soma" $X + Y$ elemento a elemento.
###Code
X = [1,2,3]
Y = [4,5,6]
XsY = [ X[i] + Y[i] for i in range(len(X)) ]
XsY
###Output
_____no_output_____
###Markdown
**Exemplo:** se $X = \{1,2,3\}$ e $Y=\{4,5,6\}$, crie o "produto" $X * Y$ elemento a elemento.
###Code
XpY = [ X[i]*Y[i] for i in range(len(X)) ]
XpY
from sympy import lambdify
from sympy.abc import x
f = lambdify(x, 'x**2')   # célula corrigida: f(v) = v**2
for i, v in enumerate(XpY):
    print(i, f(v))
###Output
_____no_output_____
###Markdown
TuplasTuplas são são sequencias imutáveis de tamanho fixo. Em Matemática, uma tupla é uma sequência ordenada de elementos. Em geral, o termo $n-$upla ("ênupla") é usado para se referir a uma tupla com $n$ elementos.
###Code
par = 1,2; par
type(par)
trio = (1,2,3); trio
quad = (1,2,3,4); quad
nome = 'Nome'; tuple(nome) # casting
###Output
_____no_output_____
###Markdown
Tuplas são acessíveis por indexação.
###Code
quad[2]
quad[1:4]
quad[3] = 5 # tuplas não são mutáveis
###Output
_____no_output_____
###Markdown
Se na tupla houver uma lista, a lista é modificável.
###Code
super_trio = tuple([1,[2,3],4]) # casting
super_trio
super_trio[1].extend([4,5])
super_trio
###Output
_____no_output_____
###Markdown
Tuplas também são concatenáveis com `+`.
###Code
(2,3) + (4,3)
('a',[1,2],(1,1)) # repetição
###Output
_____no_output_____
###Markdown
Desempacotamento de tuplas
###Code
a,b,c,d = (1,2,3,4)
for i in [a,b,c,d]:
print(i) # valor das variáveis
a,b = (1,2)
a,b = b,a # troca de valores
a,b
###Output
_____no_output_____
###Markdown
`enumerate`Podemos controlar índice e valor ao iterar em uma sequencia.
###Code
X = [1,2,3] # lista / sequencia
for i,x in enumerate(X): # (i,x) é uma tupla (índice,valor)
print(f'{i} : {x}')
###Output
0 : 1
1 : 2
2 : 3
###Markdown
**Exemplo:** Construa o produto cartesiano $$A \times B = \{(a,b) \in \mathbb{Z} \times \mathbb{Z} \, ; \, -4 \leq a \leq 4 \wedge 3 \leq b \leq 7\}$$
###Code
AB = [(a,b) for a in range(-4,5) for b in range(3,8)]
print(AB)
###Output
[(-4, 3), (-4, 4), (-4, 5), (-4, 6), (-4, 7), (-3, 3), (-3, 4), (-3, 5), (-3, 6), (-3, 7), (-2, 3), (-2, 4), (-2, 5), (-2, 6), (-2, 7), (-1, 3), (-1, 4), (-1, 5), (-1, 6), (-1, 7), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (1, 3), (1, 4), (1, 5), (1, 6), (1, 7), (2, 3), (2, 4), (2, 5), (2, 6), (2, 7), (3, 3), (3, 4), (3, 5), (3, 6), (3, 7), (4, 3), (4, 4), (4, 5), (4, 6), (4, 7)]
###Markdown
DicionáriosDicionários, ou especificamente, objetos `dict`, possuem extrema versatilidade e são muito poderosos. Criamos um `dict` por diversas formas. A mais simples é usar chaves e pares explícitos.
###Code
d = {} # dict vazio
d
type(d)
###Output
_____no_output_____
###Markdown
Os pares chave-valor incorporam quaisquer tipos de dados.
###Code
d = {'par': [0,2,4,6,8], 'ímpar': [1,3,5,7,9], 'nome':'Meu dict', 'teste': True}
d
###Output
_____no_output_____
###Markdown
Acesso a conteúdo Para acessar o conteúdo de uma chave, indexamos pelo seu nome.
###Code
d['par']
d['nome']
###Output
_____no_output_____
###Markdown
**Exemplo:** construindo soma e multiplicação especial.
###Code
# dict
op = {'X' :[1,2,3], 'delta' : 0.1}
# função
def sp(op):
s = [x + op['delta'] for x in op['X']]
p = [x * op['delta'] for x in op['X']]
return (s,p) # retorna tupla
soma, prod = sp(op) # desempacota
for i,s in enumerate(soma):
print(f'pos({i}) | Soma = {s} | Prod = {prod[i]}')
###Output
pos(0) | Soma = 1.1 | Prod = 0.1
pos(1) | Soma = 2.1 | Prod = 0.2
pos(2) | Soma = 3.1 | Prod = 0.30000000000000004
###Markdown
Inserção de conteúdo
###Code
# apensa variáveis
op[1] = 3
op['novo'] = (3,4,1)
op
###Output
_____no_output_____
###Markdown
Alteração de conteúdo
###Code
op['novo'] = [2,1,4] # sobrescreve
op
###Output
_____no_output_____
###Markdown
Deleção de conteúdo com `del` e `pop`
###Code
del op[1] # deleta chave
op
novo = op.pop('novo') # retorna e simultaneamente deleta
novo
op
###Output
_____no_output_____
###Markdown
Listagem de chaves e valoresUsamos os métodos `keys()` e `values()` para listar chaves e valores.
###Code
arit = {'soma': '+', 'subtr': '-', 'mult': '*', 'div': '/'} # dict
k = list(arit.keys())
print(k)
val = list(arit.values())
print(val)
for v in range(len(arit)):
print(f'A operação \'{k[v]}\' de "arit" usa o símbolo \'{val[v]}\'.')
###Output
['soma', 'subtr', 'mult', 'div']
['+', '-', '*', '/']
A operação 'soma' de "arit" usa o símbolo '+'.
A operação 'subtr' de "arit" usa o símbolo '-'.
A operação 'mult' de "arit" usa o símbolo '*'.
A operação 'div' de "arit" usa o símbolo '/'.
###Markdown
Combinando dicionáriosUsamos `update` para combinar dicionários. Este método possui um resultado similar a `extend`, usado em listas.
###Code
pot = {'pot': '**'}
arit.update(pot)
arit
###Output
_____no_output_____
###Markdown
Dicionários a partir de sequenciasPodemos criar dicionários a partir de sequencias existentes usando `zip`.
###Code
arit = ['soma', 'subtr', 'mult', 'div', 'pot']
ops = ['+', '-', '*', '/', '**']
dict_novo = {}
for chave,valor in zip(arit,ops):
dict_novo[chave] = valor
dict_novo
###Output
_____no_output_____
###Markdown
Visto que um `dict` é composto de várias tuplas de 2, podemos criar um de maneira ainda mais simples.
###Code
dict_novo = dict(zip(arit,ops))
dict_novo
###Output
_____no_output_____
###Markdown
*Hashability*Dissemos acima que os valores de um `dict` podem ser qualquer objeto Python. Porém, as chaves estão limitadas por uma propriedade chamada *hashability*. Um objeto *hashable* em geral é imutável. Para saber se um objeto pode ser usado como chave de um `dict`, use a função `hash`. Caso retorne erro, a possibilidade de *hashing* é descartada.
###Code
# todos aqui são imutáveis, portanto hashable
hash('s'), hash(2), hash(2.1), hash((1,2))
# não hashable
hash([1,2]), hash(((1,2),[3,4]))  # ambos geram TypeError: unhashable type: 'list'
###Output
_____no_output_____
###Markdown
Para usar `list` como chave, podemos convertê-las em `tuple`.
###Code
d = {}; d[tuple([1,2])] = 'hasheando lista em tupla'; d
###Output
_____no_output_____
###Markdown
Compreensão de dicionárioPodemos usar `for` para criar dicionários de maneira esperta do mesmo modo que as compreensões de lista com a distinção de incluir pares chaves/valor.
###Code
{chave:valor for chave,valor in enumerate(arit)} # chave:valor
{valor:chave for chave,valor in enumerate(arit)} # valor:chave
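# Esboço adicional: objetos `set`, citados na introdução, guardam elementos únicos;
# abaixo, duplicatas são descartadas e usamos união (|) e interseção (&).
A = {1, 2, 2, 3}   # vira {1, 2, 3}
B = {3, 4, 5}
A | B, A & B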
###Output
_____no_output_____ |
Past/DSS/Classification/09_0.Entropy.ipynb | ###Markdown
Entropy1. The random variable Y is a discrete random variable1. K is the number of classes that Y can take1. $P(y)$ is the probability mass function$$H[Y] = -\sum_{k=1}^K p(y_k) \log_2 p(y_k)$$If Y is continuous$$H[Y] = -\int p(y) \log_2 p(y) \; dy$$When $p(y) = 0$, use the following limit- L'Hôpital's rule$$\lim_{p\rightarrow 0} \; p\log_2{p} = 0$$ Properties of entropyIf the probability distribution is deterministic (that is, one particular value occurs with probability 1), the entropy takes its minimum value of 0.The maximum value of the entropy depends on the number of classes of the discrete random variable.- If there are $2^K$ classes and every class has the same probability$$H = -\frac{2^K}{2^K}\log_2\dfrac{1}{2^K} = K$$ Entropy and information contentEntropy = the amount of information a random variable can carry = the variety of information obtained by observing sample values of the random variable1. If the entropy is 0,1. the random variable is deterministic1. its sample values do not vary1. so observing a sample value gives no additional information>1. If the entropy is large,1. the effective number of values the samples can take increases1. so a sample value carries a lot of information When sample data are givenWhen actual data are given rather than a theoretical probability density (mass) function, the probability mass function is estimated and the entropy is computed from it.For example, if there are 80 data points in total, 40 of them with Y = 0 and 40 with Y = 1, the entropy is 1
###Code
import numpy as np  # needed for np.log2
- 1 / 2 * np.log2(1 / 2) - 1 / 2 * np.log2(1 / 2)
###Output
_____no_output_____
###Markdown
If there are 60 data points in total, 20 of them with Y = 0 and 40 with Y = 1, the entropy is about 0.92.$$P(y=0) = \dfrac{20}{60} = \dfrac{1}{3}$$$$P(y=1) = \dfrac{40}{60} = \dfrac{2}{3}$$$$H[Y] = -\dfrac{1}{3}\log_2\left(\dfrac{1}{3}\right) -\dfrac{2}{3}\log_2\left(\dfrac{2}{3}\right) = 0.92$$
###Code
-1 / 3 * np.log2(1 / 3) - 2 / 3 * np.log2(2 / 3)
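# A small generic helper (sketch) implementing the entropy formula above for any
# estimated probability mass function.
def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # use the limit p*log2(p) -> 0 at p = 0
    return -np.sum(p * np.log2(p))
entropy([0.5, 0.5]), entropy([1/3, 2/3])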
###Output
_____no_output_____ |
Lab3/Lab3-Stats.ipynb | ###Markdown
Sampling and Distributions
###Code
# The %... is an iPython thing, and is not part of the Python language.
# In this case we're just telling the plotting library to draw things on
# the notebook, instead of on a separate window.
%matplotlib inline
# See all the "as ..." contructs? They're just aliasing the package names.
# That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot().
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
import time
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
###Output
_____no_output_____
###Markdown
Expectations and VarianceThe **expectation value** of a quantity with respect to the a distribution is the weighted sum of the quantity where the weights are probabilties from the distribution. For example, for the random variable $X$:$$E_p[X] = \sum_x x\,p(x).$$$E_p[X]$ if often just called the expectation value of the distribution. This definition is analogous to the one for the arithmetic mean of a dataset: the only difference is that we want to give more weight to more probable values.The variance of a distribution is defined analogous to that of a dataset:$$V_p[X] = E_p[(X-E_p[X])^2]$$.For the Bernoulli distribution $p(x)=p=constant$, and you are summing it over ones as opposed to 0's, so the mean is just p. The variance is $(1-p)^2\times p +(-p)^2\times (1-p) = p(1-p)(1-p+p) = p(1-p)$.In general, we can find this mean that by obtaining a large bunch of samples from the distribution and find their arithmetic mean. The justification for this is the Law of large numbers, which we'll come to soon. However the intuition is obvious: for a large number of samples, the frequencies will tract probabilities well, so high probability samples with roughly the same value will re-occur, and a simple arithmetic sun will capture the curves of the distribution. The Law of Large NumbersLets keep increasing the length of the sequence of coin flips n, and compute a running average $S_n$ of the coin-flip random variables,$$S_n = \frac{1}{n} \sum_{i=1}^{n} x_i .$$We plot this running mean, and notice that it converges to the mean of the distribution from which the random variables are plucked, ie the Bernoulli distribution with p=0.5.
###Code
from scipy.stats.distributions import bernoulli
def throw_a_coin(n):
brv = bernoulli(0.5)
return brv.rvs(size=n)
random_flips = throw_a_coin(10000)
running_means = np.zeros(10000)
sequence_lengths = np.arange(1,10001,1)
for i in sequence_lengths:
running_means[i-1] = np.mean(random_flips[:i])
plt.plot(sequence_lengths, running_means);
plt.xscale('log')
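# Quick check (sketch): the sample mean and variance of the flips should be close to the
# Bernoulli values E[X] = p = 0.5 and Var[X] = p(1-p) = 0.25 discussed above.
print(np.mean(random_flips), np.var(random_flips))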
###Output
_____no_output_____
###Markdown
This is an example of a very important theorem in statistics, the law of large numbers, which says this:**Let $x_1,x_2,...,x_n$ be a sequence of independent, identically-distributed (IID) random variables. Suppose that $X$ has the finite mean $\mu$. Then the average of the first n of them:**$$S_n = \frac{1}{n} \sum_{i=1}^{n} x_i ,$$**converges to the mean of the variables $\mu$ as $n \to \infty$:**$$ S_n \to \mu \, as \, n \to \infty. $$The law of large numbers is what makes the **frequentist** interpretation of probability possible. For consider any event $E$ from a probability distribution with random variable Y, and consider the indicator function $I_E$ such that:\begin{eqnarray*}I_E(y) = 1 \,&& if \, y \in E\\I_E(y) = 0 \,&& otherwise\end{eqnarray*}The variable $Z=I_E(Y)$ is now Bernoulli random variable with parameter and thus p = P(E). Now if we take a long sequence from $Y$ and thus $Z$, then the frequency of successes (where success means being in E) will converge by the law of large numbers to the true probability p. Having now established something about long sequences of random variables, lets turn to samples from the population of random numbers. Samples from a population of coin flipsLets redo the experiment with coin flips that we started in the previous lab. We'll establish some terminology at first. What we did there was to do a large set of replications M, in each of which we did many coin flips N. We'll call the result of each coin flip an observation, and a single replication a sample of observations. Thus the number of samples is M, and the sample size is N. These samples have been chosen from a population of size $n >> N$.We show the mean over the observations, or sample mean, for a sample size of 10, with 20 replications. There are thus 20 means.
###Code
def make_throws(number_of_samples, sample_size):
start=np.zeros((number_of_samples, sample_size), dtype=int)
for i in range(number_of_samples):
start[i,:]=throw_a_coin(sample_size)
return np.mean(start, axis=1)
make_throws(number_of_samples=20, sample_size=10)
###Output
_____no_output_____
###Markdown
Let us now do 200 replications for each sample size from 1 to 1000 flips, and store the 200 means for each sample size in `sample_means`.
###Code
sample_sizes=np.arange(1,1001,1)
sample_means = [make_throws(number_of_samples=200, sample_size=i) for i in sample_sizes]
###Output
_____no_output_____
###Markdown
Lets formalize what we are up to. Lets call the N random variables in the $m^{th}$ sample $x_{m1},x_{m2},...,x_{mN}$ and lets define the sample mean$$\bar{x_m}(N) = \frac{1}{N}\, \sum_{i=1}^{N} x_{mi} $$Now imagine the size of the sample becoming large, asymptoting to the size of an infinite or very large population (ie the sample becomes the population). Then you would expect the sample mean to approach the mean of the population distribution. This is just a restatement of the law of large numbers.Of course, if you drew many different samples of a size N (which is not infinite), the sample means $\bar{x_1}$, $\bar{x_2}$, etc would all be a bit different from each other. But the law of large numbers intuitively indicates that as the sample size gets very large and becomes an infinite population size, these slightly differeing means would all come together and converge to the population (or distribution) mean.To see this lets define, instead, the mean or expectation of the sample means over the set of samples or replications, at a sample size N:$$E_{\{R\}}(\bar{x}) = \frac{1}{M} \,\sum_{m=1}^{M} \bar{x_m}(N) ,$$where $\{R\}$ is the set of M replications, and calculate and plot this quantity.
###Code
mean_of_sample_means = [np.mean(means) for means in sample_means]
plt.plot(sample_sizes, mean_of_sample_means);
plt.ylim([0.480,0.520]);
###Output
_____no_output_____
###Markdown
Not surprisingly, the mean of the sample means converges to the distribution mean as the sample size N gets very large. The notion of a Sampling DistributionIn data science, we are always interested in understanding the world from incomplete data, in other words from a sample or a few samples of a population at large. Our experience with the world tells us that even if we are able to repeat an experiment or process, we will get more or less different answers the next time. If all of the answers were very different each time, we would never be able to make any predictions.But some kind of answers differ only a little, especially as we get to larger sample sizes. So the important question then becomes one of the distribution of these quantities from sample to sample, also known as a **sampling distribution**. Since, in the real world, we see only one sample, this distribution helps us do **inference**, or figure the uncertainty of the estimates of quantities we are interested in. If we can somehow cook up samples just somewhat different from the one we were given, we can calculate quantities of interest, such as the mean on each one of these samples. By seeing how these means vary from one sample to the other, we can say how typical the mean in the sample we were given is, and whats the uncertainty range of this quantity. This is why the mean of the sample means is an interesting quantity; it characterizes the **sampling distribution of the mean**, or the distribution of sample means.We can see this mathematically by writing the mean or expectation value of the sample means thus:$$E_{\{R\}}(N\,\bar{x}) = E_{\{R\}}(x_1 + x_2 + ... + x_N) = E_{\{R\}}(x_1) + E_{\{R\}}(x_2) + ... + E_{\{R\}}(x_N)$$Now in the limit of a very large number of replications, each of the expectations in the right hand side can be replaced by the population mean using the law of large numbers! Thus:\begin{eqnarray*}E_{\{R\}}(N\,\bar{x}) &=& N\, \mu\\E(\bar{x}) &=& \mu\end{eqnarray*}which tells us that in the limit of a large number of replications the expectation value of the sampling means converges to the population mean. This limit gives us the true sampling distribution, as opposed to what we might estimate from our finite set of replicates. The sampling distribution as a function of sample sizeWe can see what the estimated sampling distribution of the mean looks like at different sample sizes.
###Code
sample_means_at_size_10=sample_means[9]
sample_means_at_size_100=sample_means[99]
sample_means_at_size_1000=sample_means[999]
plt.hist(sample_means_at_size_10, bins=np.arange(0,1,0.01), alpha=0.5);
plt.hist(sample_means_at_size_100, bins=np.arange(0,1,0.01), alpha=0.4);
plt.hist(sample_means_at_size_1000, bins=np.arange(0,1,0.01), alpha=0.3);
###Output
_____no_output_____
###Markdown
The distribution is much tighter at large sample sizes; note that you can get very low and very high means at small sample sizes. Indeed there are means as small as 0.1 at a sample size of 10, and as small as 0.3 at a sample size of 100. Lets plot the distribution of the mean as a function of sample size.
###Code
for i in sample_sizes:
if i %50 ==0 and i < 1000:
plt.scatter([i]*200, sample_means[i], alpha=0.03);
plt.xlim([0,1000])
plt.ylim([0.25,0.75]);
###Output
_____no_output_____
###Markdown
The kidney cancer case: higher variability at the extremesThe diagram above has a tell-tale triangular shape with high and low means, and thus much larger variability at lower sample sizes.Consider the example of kidney cancers in various US counties from the lecture. Imagine that we have a statistical model or story for the occurence of kidney cancer. Let us think of each county as a sample in the population of kidney cancers, with the observations the per year occurence of cancer in that county. Then the low-population counties represent small size samples. The cancer rate in that county then is the sample mean of the cancer rates over multiple years in that county.Let us plot the incidence of kidney cancer against the size of the county:(diagram taken from http://faculty.cord.edu/andersod/MostDangerousEquation.pdf , a very worth reading aticle)We can see the entire pattern of low and high cancer rates in some parts of the country can entirely be explained from the smallness of the sample sizes: in a county of 1000 people, one cancer is a rate too high, for example. At the left end of the graph the cancer rate varies from 20 per 100,000 to 0. And the problem, as can be seen from the graph is onviously more acute at the upper end for the above reason. On the right side of the graph, there is very little variation, with all counties at about 5 cases per 100,000 of population.We'd obviously like to characterize mathematically the variability in the distribution of sample means as a function of the sample size. The variation of the sample meanLet the underlying distribution from which we have drawn our samples have, additionally to a well defined mean $\mu$, a well defined variance $\sigma^2$. ^[The Cauchy distribution, as you know, is a well defined exception with ill defined mean and variance].Then, as before:$$V_{\{R\}}(N\,\bar{x}) = V_{\{R\}}(x_1 + x_2 + ... + x_N) = V_{\{R\}}(x_1) + V_{\{R\}}(x_2) + ... + V_{\{R\}}(x_N)$$Now in the limit of a very large number of replications, each of the variances in the right hand side can be replaced by the population variance using the law of large numbers! Thus:\begin{eqnarray*}V_{\{R\}}(N\,\bar{x}) &=& N\, \sigma^2\\V(\bar{x}) &=& \frac{\sigma^2}{N}\end{eqnarray*}This simple formula is called **De-Moivre's** formula, and explains the tell-tale triangular plots we saw above, with lots of variation at low sample sizes turning into a tight distribution at large sample size(N).The square root of $V$, or the standard deviation of the sampling distribution of the mean (in other words, the distribution of sample means) is also called the **Standard Error**.We can obtain the standard deviation of the sampling distribution of the mean at different sample sizes and plot it against the sample size, to confirm the $1/\sqrt(N)$ behaviour.
###Code
sample_means_1000_replicates = [make_throws(number_of_samples=1000, sample_size=i) for i in sample_sizes]
std_of_sample_means_1000 = [np.std(means) for means in sample_means_1000_replicates]
plt.plot(np.log10(sample_sizes), np.log10(std_of_sample_means_1000));
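# De Moivre check (sketch): for fair-coin flips sigma = 0.5, so the standard error at
# N = 100 should be close to 0.5/np.sqrt(100) = 0.05.
print(std_of_sample_means_1000[99], 0.5/np.sqrt(100))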
###Output
_____no_output_____
###Markdown
Let us plot again the distribution of sample means at a large sample size, $N=1000$. What distribution is this?
###Code
plt.hist(sample_means_at_size_1000, bins=np.arange(0.4,0.6,0.002));
###Output
_____no_output_____
###Markdown
Lets step back and try and think about what this all means. As an example, say I have a weight-watchers' study of 1000 people, whose average weight is 150 lbs with standard deviation of 30lbs. If I was to randomly choose many samples of 100 people each, the mean weights of those samples would cluster around 150lbs with a standard error of 30/$\sqrt{100}$ = 3lbs. Now if i gave you a different sample of 100 people with an average weight of 170lbs, this weight would be more than 6 standard errors beyond the population mean, ^[this example is motivated by the crazy bus example in Charles Whelan's excellent Naked Statistics Book] and would thus be very unlikely to be from the weight watchers group. The Gaussian DistributionWe saw in the last section that the sampling distribution of the mean itself has a mean $\mu$ and variance $\frac{\sigma^2}{N}$. This distribution is called the **Gaussian** or **Normal Distribution**, and is probably the most important distribution in all of statistics.The probability density of the normal distribution is given as:$$ N(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2s^2} } .$$ The expected value of the Gaussian distribution is $E[X]=\mu$ and the variance is $Var[X]=s^2$.
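###Code
# Sketch of the weight-watchers argument above: a sample mean of 170 lbs is about
# (170 - 150) / (30 / sqrt(100)) ~ 6.7 standard errors above the population mean.
(170 - 150) / (30 / np.sqrt(100))
###Output
_____no_output_____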
###Code
norm = sp.stats.norm
x = np.linspace(-5,5, num=200)
fig = plt.figure(figsize=(12,6))
colors = sns.color_palette("deep", 3)  # define the palette (was previously undefined)
for mu, sigma, c in zip([0.5]*3, [0.2, 0.5, 0.8], colors):
plt.plot(x, norm.pdf(x, mu, sigma), lw=2,
c=c, label = r"$\mu = {0:.1f}, \sigma={1:.1f}$".format(mu, sigma))
plt.fill_between(x, norm.pdf(x, mu, sigma), color=c, alpha = .4)
plt.xlim([-5,5])
plt.legend(loc=0)
plt.ylabel("PDF at $x$")
plt.xlabel("$x$")
###Output
_____no_output_____
###Markdown
The Central Limit TheoremThe reason for the distribution's importance is the Central Limit Theorem(CLT). The theorem is stated as thus, very similar to the law of large numbers:**Let $x_1,x_2,...,x_n$ be a sequence of independent, identically-distributed (IID) random variables from a random variable $X$. Suppose that $X$ has the finite mean $\mu$ AND finite variance $\sigma^2$. Then the average of the first n of them:**$$S_n = \frac{1}{n} \sum_{i=1}^{n} x_i ,$$**converges to a Gaussian Random Variable with mean $\mu$ and variance $\sigma^2/n$ as $n \to \infty$:**$$ S_n \sim N(\mu,\frac{\sigma^2}{n}) \, as \, n \to \infty. $$In other words:$$s^2 = \frac{\sigma^2}{N}.$$This is true, *regardless* of the shape of $X$, which could be binomial, poisson, or any other distribution. Strictly speaking, under some conditions called Lyapunov conditions, the variables $x_i$ dont have to be identically distributed, as long as $\mu$ is the mean of the means and $\sigma^2$ is the sum of the individual variances. This has major consequences, for the importance of this theorem.Many random variables can be thought of as having come from the sum of a large number of small and independent effects. For example human height or weight can be thought of as the sum as a large number of genetic and environmental factors, which add to increase or decrease height or weight respectively. Or think of a measurement of a height. There are lots of ways things could go wrong: frayed tapes, stretched tapes, smudged marks, bad lining up of the eye, etc. These are all independent and have no systematic error in one direction or the other.Then the sum of these factors, as long as there are a large number of them, will be distributed as a gaussian.[At this point you are probably wondering: what does this have to do with the sampling distribution of the mean? We shall come to that, but in the meanwhile, lets consider some other key applications of the CLT.]As a rule of thumb, the CLT starts holding at $N \sim 30$. An application to elections: Binomial distribution in the large n, large k limitFor example, consider the binomial distribution Binomial(n,k, p) in the limit of large n. The number of successes k in n trials can be ragarded as the sum of n IID Bernoulli variables with values 1 or 0. Obviously this is applicable to a large sequence of coin tosses, or to the binomial sampling issue that we encountered earlier in the case of the polling. Using the CLT we can replace the binomial distribution at large n by a gaussian where k is now a continuous variable, and whose mean is the mean of the binomial $np$ and whose variance is $np(1-p)$, since$$S_n \sim N(p, \frac{p(1-p)}{n}).$$The accuracy of this approximation depends on the variance. A large variance makes for a broad distribution spanning many discrete k, thus justifying the transition from a discrete to a continuous distribution.This approximation is used a lot in studying elections. For example, suppose I told you that I'd polled 1000 people in Ohio and found that 600 would vote Democratic, and 400 republican. Imagine that this 1000 is a "sample" drawn from the voting "population" of Ohio. Assume then that these are 1000 independent bernoulli trials with p=600/1000 = 0.6. Then we can say that, from the CLT, the mean of the sampling distribution of the mean of the bernoulli or equivalently the binomial is 0.6, with a variance of $0.6*0.4/1000 = 0.00024$. Thus the standard deviation is 0.015 for a mean of 0.6, or 1.5% on a mean of 60% voting Democratic. 
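###Code
# Sketch of the Ohio poll calculation above: p = 0.6 and n = 1000 give a standard error
# of sqrt(p*(1-p)/n), i.e. roughly 0.015 or 1.5%.
p, n = 0.6, 1000
np.sqrt(p * (1 - p) / n)
###Output
_____no_output_____
###Markdown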
This 1.5% is part of what pollsters quote as the margin of error for a candidate winning; they often include other factors such as errors in polling methodology. If one has results from multiple pollsters, one can treat them as independent samples from the voting population. Then the average from these samples will approach the average in the population, with the sample means distributed normally around it. What does this all mean? The sample mean, or mean of the random variables $x_{mi}$ in the sample $m$, has a sampling distribution with mean $\mu$ and variance $\frac{\sigma^2}{N}$, as shown before. Now for large sample sizes we can go further and use the CLT to say that this distribution is the normal distribution,$$S_N \sim N(\mu, \frac{\sigma^2}{N}).$$The preciseness of saying that we have a Gaussian is a huge gain in our expository power. For example, for the case of the weight-watchers program above, a separation of 20 lbs is more than 3 standard errors away, which corresponds to being far out in the tail of a Gaussian distribution. Because we can now quantify the area under the curve, we can say that 99.7\% of the sample means lie within 9 lbs of 150. Thus you can easily reject the possibility that the new sample is from the weight-watchers program, with 99.7\% confidence. Indeed, the CLT allows us to take the reduction in variance we get from large samples and make statements in different cases that are quite strong: 1. if we know a lot about the population, and randomly sampled 100 points from it, the sample mean would be, with 99.7\% confidence, within $0.3\sigma$ of the population mean. Thus, if $\sigma$ is small, the sample mean is quite representative of the population mean. 2. The reverse: if we have a well-sampled 100 data points, we can make strong statements about the population as a whole. This is indeed how election polling and other sampling works. 3. we can infer, as we just did, whether a sample is consistent with a population. 4. by the same token, you can compare two samples and infer whether they are from the same population. The sampling distribution of the Variance. At this point you might be curious about what the sampling distribution of the variance looks like, and what we can learn from it about the variance of the entire sample. We can do this just like we did for the means. We'll stick with a high number of replicates and plot the mean of the sample variances, as well as the (truish) sampling distribution of the variances at a sample size of 100.
###Code
def make_throws_var(number_of_samples, sample_size):
    """Return the variance of each of `number_of_samples` samples of coin throws of size `sample_size`."""
    start = np.zeros((number_of_samples, sample_size), dtype=int)
    for i in range(number_of_samples):
        start[i, :] = throw_a_coin(sample_size)
    return np.var(start, axis=1)

# sampling distribution of the variance: 1000 replicates at each sample size
sample_vars_1000_replicates = [make_throws_var(number_of_samples=1000, sample_size=i) for i in sample_sizes]
mean_of_sample_vars_1000 = [np.mean(v) for v in sample_vars_1000_replicates]
plt.plot(sample_sizes, mean_of_sample_vars_1000);
plt.xscale("log");
###Output
_____no_output_____
###Markdown
The "mean sample variance" asymptotes to the true variance of 0.25 by a sample size of 100. How well does the sample variance estimate the true variance? Notice that the histogram below ends at 0.25, rather than having ANY frequency at 0.25. What gives? If $V_m$ denotes the variance of a sample, $$ N\,V_m = \sum_{i=1}^{N} (x_{mi} - \bar{x_m})^2 = \sum_{i=1}^{N}(x_{mi} - \mu)^2 - N\,(\bar{x_m} - \mu)^2. $$Then$$E_{\{R\}}(N\,V_m) = E_{\{R\}}(\sum_{i=1}^{N}(x_{mi} - \mu)^2) - E_{\{R\}}(N\,(\bar{x_m} - \mu)^2)$$In the asymptotic limit of a very large number of replicates, we can then write$$E(N\,V) = N\,\sigma^2 - \sigma^2, $$and thus we have$$E(V) = \frac{N-1}{N} \,\sigma^2.$$In other words, the expected value of the sample variance is LESS than the actual variance. This should not be surprising: consider, for example, a sample of size 1 from the population, which has zero variance! More generally, whenever you sample a population, you tend to pick the more likely members of the population, and so the variance in the sample is less than the variance in the population. An interesting application of this idea, as Shalizi points out in http://www.stat.cmu.edu/~cshalizi/ADAfaEPoV/, is that the loss of variability due to sampling of genes is indeed the origin of genetic drift. More prosaically, the graph of expected sample variance against sample size above asymptotes to 0.25 because $\frac{N-1}{N}$ is very close to 1 at large N. Put another way, you ought to correct your sample variances by a factor of $\frac{N}{N-1}$ to estimate the population variance, which works well since the sampling distribution of the sample variance is rather tight, as seen below.
###Code
plt.hist(sample_vars_1000_replicates[99], bins=np.arange(0.2,0.26,0.001), alpha=0.2, normed=True);
###Output
_____no_output_____
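###Markdown
We can also check the $\frac{N}{N-1}$ correction directly. The sketch below assumes that the `throw_a_coin` helper defined earlier in this notebook returns an array of 0/1 outcomes; it compares the uncorrected and corrected (`ddof=1`) sample variances, averaged over many replicates, against the true variance of 0.25.
###Code
np.random.seed(0)
N = 10  # a small sample size, where the bias is most visible
replicates = np.array([throw_a_coin(N) for _ in range(10000)])

uncorrected = np.var(replicates, axis=1)         # divides by N
corrected = np.var(replicates, axis=1, ddof=1)   # divides by N - 1

print("mean uncorrected variance: %.4f" % uncorrected.mean())  # ~0.25 * (N-1)/N = 0.225
print("mean corrected variance  : %.4f" % corrected.mean())    # ~0.25
###Output
_____no_output_____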
###Markdown
An application: Gallup Party Affiliation Poll. Earlier we had used the Predictwise probabilities from October 12th to create a predictive model for the elections. This time we will try to **estimate** our own win probabilities to plug into our predictive model. We will start with a simple forecast model. We will try to predict the outcome of the election based on the estimated proportion of people in each state who identify with one political party or the other. Gallup measures the political leaning of each state, based on asking random people which party they identify or affiliate with. [Here's the data](http://www.gallup.com/poll/156437/heavily-democratic-states-concentrated-east.aspx2) they collected from January-June of 2012:
###Code
gallup_2012=pd.read_csv("g12.csv").set_index('State')
gallup_2012["Unknown"] = 100 - gallup_2012.Democrat - gallup_2012.Republican
gallup_2012.head()
###Output
_____no_output_____
###Markdown
Each row lists a state, the percent of surveyed individuals who identify as Democrat/Republican, the percent whose identification is unknown or who haven't made an affiliation yet, the margin between Democrats and Republicans (`Dem_Adv`: the percentage identifying as Democrats minus the percentage identifying as Republicans), and the number `N` of people surveyed. The most obvious source of error in the Gallup data is the finite sample size -- Gallup did not poll *everybody* in America, and thus the party affiliations are subject to sampling errors. How much uncertainty does this introduce? Let's estimate the sampling error using what we learnt in the last section.
###Code
# standard error (in percent) of the Democrat proportion: sqrt(p*(1-p)/(N-1))
gallup_2012["SE_percentage"]=100.0*np.sqrt((gallup_2012.Democrat/100.)*((100. - gallup_2012.Democrat)/100.)/(gallup_2012.N -1))
gallup_2012.head()
###Output
_____no_output_____
###Markdown
On their [webpage](http://www.gallup.com/poll/156437/heavily-democratic-states-concentrated-east.aspx2) discussing these data, Gallup notes that the sampling error for the states is between 3 and 6%, with it being 3% for most states. This is more than what we find, so let's go with what Gallup says. We now use Gallup's estimate of 3% to build a Gallup model with some uncertainty. We will, using the CLT, assume that the sampling distribution of the Obama win margin is a Gaussian with mean equal to the measured Democratic advantage (`Dem_Adv`) and standard error equal to the sampling error of 3\%. We'll build the model in the function `uncertain_gallup_model`, and return a forecast where the probability of an Obama victory is given by the probability that a sample from the `Dem_Adv` Gaussian is positive. To do this we simply need to find the area under the curve of a Gaussian that is on the positive side of the x-axis. The probability that a sample from a Gaussian with mean $\mu$ and standard deviation $\sigma$ exceeds a threshold $z$ can be found from the Cumulative Distribution Function of the Gaussian:$$CDF(z) = \frac{1}{2}\left(1 + \mathrm{erf}\left(\frac{z - \mu}{\sqrt{2 \sigma^2}}\right)\right), $$the probability of exceeding $z$ being $1 - CDF(z)$.
###Code
from scipy.special import erf
def uncertain_gallup_model(gallup):
    sigma = 3  # Gallup's quoted sampling error, in percentage points
    # probability that a draw from N(Dem_Adv, sigma) is positive, i.e. that Obama wins the state
    prob = .5 * (1 + erf(gallup.Dem_Adv / np.sqrt(2 * sigma**2)))
return pd.DataFrame(dict(Obama=prob), index=gallup.index)
model = uncertain_gallup_model(gallup_2012)
model = model.join(predictwise.Votes)
prediction = simulate_election(model, 10000)
plot_simulation(prediction)
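# Sanity check on the erf formula above (a sketch; scipy.stats is assumed to be available):
# the probability that a draw from N(Dem_Adv, sigma=3) is positive can equally be computed
# with the Gaussian survival function, and should agree with uncertain_gallup_model.
from scipy.stats import norm
prob_check = norm.sf(0, loc=gallup_2012.Dem_Adv, scale=3)
print(np.allclose(prob_check, uncertain_gallup_model(gallup_2012).Obama))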
###Output
_____no_output_____ |
Logistic Regression/.ipynb_checkpoints/Baseline Analysis-checkpoint.ipynb | ###Markdown
Looks like we're missing two data points from Embarked, some data from Age, and a lot from Cabin. Solution: 1. Drop the two rows with missing Embarked. 2. Drop Cabin. 3. Try to use linear regression to predict the age for the null ones.
###Code
data = data[~data.Embarked.isnull()]
data.drop('Cabin',axis=1,inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
I'm gonna make dummy variables for Pclass, which is the fare class, and for Sex. Embarked will get the same treatment as well, as it identifies the port at which the passenger embarked. I'll drop the first dummy column for each variable because it can be inferred from the others.
###Code
pclass = pd.get_dummies(data.Pclass,drop_first=True)
sex = pd.get_dummies(data.Sex,drop_first=True)
embarked = pd.get_dummies(data.Embarked,drop_first=True)
data = pd.concat([data,pclass,sex,embarked],axis=1)
del pclass
del sex
del embarked
data.head()
###Output
_____no_output_____
###Markdown
Sex, Pclass and Embarked are now redundant, so I'll drop them. I'm also gonna drop Name and Ticket as they need further feature engineering to be useful. PassengerId gives no useful information, so that'll be dropped as well.
###Code
data.drop(['Sex','Pclass','Embarked','Name','Ticket','PassengerId'],axis=1,inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Now I'll split the data with non-null age values into a train and test set and try to learn the relationship between age and everything but Survived
###Code
survived = data.Survived
data.drop('Survived',inplace=True,axis=1)
data_good = data[~data.Age.isnull()]
data_good.head()
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from regressors import stats
X_train, X_test, y_train, y_test = train_test_split(data_good.drop('Age',axis=1), data_good.Age, test_size=0.2, random_state=101)
X_train
lm = LinearRegression()
lm.fit(X_train,y_train)
coeff_df = pd.DataFrame(lm.coef_,data_good.drop('Age',axis=1).columns,columns=['Coefficient'])
coeff_df
from sklearn import metrics
predictions = lm.predict(X_test)
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
###Output
MAE: 9.826888993417393
MSE: 152.42313683007296
RMSE: 12.345976544205524
###Markdown
So the linear regression has a MAE of 9.8 years. I'm gonna fill the empty age values with the predictions from the linear regression model
###Code
data_notgood = data[data.Age.isnull()]
data_notgood.head()
lm.predict(data_notgood.drop('Age',axis=1))
###Output
_____no_output_____
###Markdown
It appears that the model is predicting negative values which are obviously wrong. This happens because the model does not respect the 0 bound. To address this, I'm gonna retrain the model using the natural log of the age and predict that.
###Code
lm = LinearRegression()
lm.fit(X_train,np.log(y_train))
coeff_df = pd.DataFrame(lm.coef_,data_good.drop('Age',axis=1).columns,columns=['Coefficient'])
coeff_df
from sklearn import metrics
predictions = np.exp(lm.predict(X_test))
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
data_notgood['Age'] = np.exp(lm.predict(data_notgood.drop('Age',axis=1)))
data.head()
data = pd.concat([data_good,data_notgood],axis=0)
data.sort_index(inplace=True)
data.head()
fig,ax = plt.subplots()
fig.set_size_inches(10,15)
sns.heatmap(data.isnull(),cbar=False,cmap='magma')
###Output
_____no_output_____
###Markdown
Now all null values have either been dropped or replaced by an "educated" guess, we can continue with training the logistic regression
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import classification_report
lrm = LogisticRegression(solver='lbfgs', max_iter=1000)
# NOTE: the next line overwrites the logistic regression baseline with a random forest;
# comment it out to evaluate the logistic regression instead
lrm = RandomForestClassifier(n_estimators=100)
X_train, X_test, y_train, y_test = train_test_split(data, survived, test_size=0.2, random_state=101)
lrm.fit(X_train,y_train)
lrm.score(X_test,y_test)
predictions = lrm.predict(X_test)
print(classification_report(y_test,predictions))
data
###Output
_____no_output_____
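###Markdown
`cross_val_score` was imported above but never used; a quick k-fold check gives a more robust estimate of the baseline than a single train/test split. This is only a sketch: it reuses the `lrm` estimator and the full `data`/`survived` frames defined above.
###Code
cv_scores = cross_val_score(lrm, data, survived, cv=5, scoring='accuracy')
print(cv_scores)
print('Mean CV accuracy:', cv_scores.mean())
###Output
_____no_output_____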
###Markdown
Now that we have a baseline in place, let's try to predict from the test dataset. For this, the same preprocessing steps will need to be applied to the test set. Let's examine this dataset as well.
###Code
data_test = pd.read_csv('test.csv')
passenger_id = data_test.PassengerId
data_test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId 418 non-null int64
Pclass 418 non-null int64
Name 418 non-null object
Sex 418 non-null object
Age 332 non-null float64
SibSp 418 non-null int64
Parch 418 non-null int64
Ticket 418 non-null object
Fare 417 non-null float64
Cabin 91 non-null object
Embarked 418 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB
###Markdown
It seems there is one missing value for the Fare. Let's see if we can quickly fill that in by analyzing the fare's relationship to the Pclass.
###Code
sns.set_style('darkgrid')
sns.boxplot(x="Pclass",y="Fare",data=data_test,hue='Sex',showfliers=False)
###Output
_____no_output_____
###Markdown
So it seems that sex and cabin class together are a good identifier for the fare. I'll just replace the missing fare with the mean fare for that class/sex combination.
###Code
data_test[data_test.Fare.isnull()]
data_test['Fare'].fillna(data_test[(data_test.Pclass == 3) & (data_test.Sex == "male")].Fare.mean(),inplace=True)
data_test.drop('Cabin',axis=1,inplace=True)
pclass = pd.get_dummies(data_test.Pclass,drop_first=True)
sex = pd.get_dummies(data_test.Sex,drop_first=True)
embarked = pd.get_dummies(data_test.Embarked,drop_first=True)
data_test = pd.concat([data_test,pclass,sex,embarked],axis=1)
data_test.drop(['Sex','Pclass','Embarked','Name','Ticket','PassengerId'],axis=1,inplace=True)
data_test.head()
data_test_good = data_test[~data_test.Age.isnull()]
data_test_notgood = data_test[data_test.Age.isnull()]
data_test_notgood['Age'] = np.exp(lm.predict(data_test_notgood.drop('Age',axis=1)))
data_test = pd.concat([data_test_good,data_test_notgood],axis=0)
data_test.sort_index(inplace=True)
data_test.head()
predictions_test = lrm.predict(data_test)
out = pd.DataFrame(data = {"PassengerId": passenger_id, "Survived": predictions_test})
out.to_csv('submission.csv',index=False)
###Output
_____no_output_____ |
face_detect/MLPClassifier.ipynb | ###Markdown
First, let's unpack the data set from ex4data1.mat; the data is available on the coursera site for the machine learning class https://www.coursera.org/learn/machine-learning taught by Andrew Ng (lecture 4). There are also a number of clones that have this data file.
###Code
# imports needed below
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle

data = pd.read_csv('fer2013/fer2013.csv')
data = shuffle(data)  # shuffle the rows before splitting into features and labels
X = data['pixels']
y = data['emotion']
X = pd.Series([np.array(x.split()).astype(int) for x in X])
# convert one column as list of ints into dataframe where each item in array is a column
X = pd.DataFrame(np.matrix(X.tolist()))
df = pd.DataFrame(y)
df.loc[:,'f'] = pd.Series(-1, index=df.index)
df.groupby('emotion').count()
# This function plots the given sample set of images as a grid with labels
# if labels are available.
def plot_sample(S,w=48,h=48,labels=None):
m = len(S);
# Compute number of items to display
display_rows = int(np.floor(np.sqrt(m)));
display_cols = int(np.ceil(m / display_rows));
fig = plt.figure()
S = S.as_matrix()
for i in range(0,m):
arr = S[i,:]
arr = arr.reshape((w,h))
ax = fig.add_subplot(display_rows,display_cols , i+1)
ax.imshow(arr, aspect='auto', cmap=plt.get_cmap('gray'))
if labels is not None:
ax.text(0,0, '{}'.format(labels[i]), bbox={'facecolor':'white', 'alpha':0.8,'pad':2})
ax.axis('off')
plt.show()
print ('0=Angry', '1=Disgust', '2=Fear', '3=Happy', '4=Sad', '5=Surprise', '6=Neutral')
samples = X.sample(16)
plot_sample(samples,48,48,y[samples.index].as_matrix())
###Output
_____no_output_____
###Markdown
Now, let's use a neural network with one hidden layer. The input layer has `X_train.shape[1]` units (the 48x48 = 2304 pixels of each image), and the hidden layer size is set below via `neural_network = (100,)`.
###Code
from sklearn.neural_network import MLPClassifier
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize
# CALC AUC_ROC, binarizing each lable
y_b = pd.DataFrame(label_binarize(y, classes=[0,1,2,3,4,5,6]))
n_classes = y_b.shape[1]
# since the data we have is one big array, we want to split it into training
# and testing sets, the split is 70% goes to training and 30% of data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y_b, test_size=0.3)
neural_network =(100,)
clfs ={}
for a in [1,0.1,1e-2,1e-3,1e-4,1e-5]:
# for this excersize we are using MLPClassifier with lbfgs optimizer (the family of quasi-Newton methods). In my simple
# experiments it produces good quality outcome
clf = MLPClassifier( alpha=a, hidden_layer_sizes=neural_network, random_state=1)
clf.fit(X_train, y_train)
# So after the classifier is trained, lets see what it predicts on the test data
prediction = clf.predict(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test.as_matrix()[:,i], prediction[:,i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.as_matrix().ravel(), prediction.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
print ("ROC_AUC (micro) score is {:.04f} with alpha {}".format(roc_auc["micro"], a))
clfs[a] = clf
samples = X_test.sample(16)
p = clfs.get(0.001).predict(samples)
plot_sample(samples,48,48,[x.argmax(axis=0) for x in p])
p=y_test.loc[samples.index].as_matrix()
plot_sample(samples,48,48,[x.argmax(axis=0) for x in p])
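# Beyond the micro-averaged score, the per-class ROC AUC shows which emotions are easiest to
# separate. Sketch only: it reuses the roc_auc dictionary left over from the last alpha in the
# loop above, so the numbers refer to that model alone.
emotions = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']
for i in range(n_classes):
    print("{:>8s}: ROC AUC = {:.4f}".format(emotions[i], roc_auc[i]))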
###Output
_____no_output_____ |
notebooks/DemoNeuralNetworks.ipynb | ###Markdown
Neural networks. Neural networks inside [hep_ml](github.com/arogozhnikov/hep_ml) are very simple, but flexible. They use the [theano](http://deeplearning.net/software/theano/) library. **hep_ml.nnet** also provides tools to optimize any continuous expression as a decision function (see below). Downloading the dataset: downloading the dataset from UCI and splitting it into train and test.
###Code
!cd toy_datasets; wget -O ../data/MiniBooNE_PID.txt -nc https://archive.ics.uci.edu/ml/machine-learning-databases/00199/MiniBooNE_PID.txt
import numpy, pandas
from sklearn.cross_validation import train_test_split
from sklearn.metrics import roc_auc_score
data = pandas.read_csv('../data/MiniBooNE_PID.txt', sep='\s*', skiprows=[0], header=None, engine='python')
labels = pandas.read_csv('../data/MiniBooNE_PID.txt', sep=' ', nrows=1, header=None)
labels = [1] * labels[1].values[0] + [0] * labels[2].values[0]
data.columns = ['feature_{}'.format(key) for key in data.columns]
train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.5, random_state=42)
###Output
_____no_output_____
###Markdown
Example of training a network: training a multilayer perceptron with one hidden layer of 5 neurons.
###Code
from hep_ml.nnet import MLPClassifier
from sklearn.metrics import roc_auc_score
clf = MLPClassifier(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
proba = clf.predict_proba(test_data)
print 'Test quality:', roc_auc_score(test_labels, proba[:, 1])
proba = clf.predict_proba(train_data)
print 'Train quality:', roc_auc_score(train_labels, proba[:, 1])
###Output
Train quality: 0.971777980506
###Markdown
Creating your own neural network. To create your own neural network, one should provide an activation function and define the parameters of the network. You are not limited to any particular structure in this function; **hep_ml.nnet** will consider it as a black box for optimization. The simplest way is to override the `prepare` method of `AbstractNeuralNetworkClassifier`.
###Code
from hep_ml.nnet import AbstractNeuralNetworkClassifier
from theano import tensor as T
class SimpleNeuralNetwork(AbstractNeuralNetworkClassifier):
def prepare(self):
# getting number of layers in input, hidden, output layers
# note that we support only one hidden layer here
n1, n2, n3 = self.layers_
# creating parameters of neural network
W1 = self._create_matrix_parameter('W1', n1, n2)
W2 = self._create_matrix_parameter('W2', n2, n3)
# defining activation function
def activation(input):
first = T.nnet.sigmoid(T.dot(input, W1))
return T.dot(first, W2)
return activation
clf = SimpleNeuralNetwork(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
print 'Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1])
###Output
Test quality: 0.967173363583
###Markdown
Using a specific neural network: this NN has one hidden layer, but it is quite strange.
###Code
from hep_ml.nnet import PairwiseNeuralNetwork
clf = PairwiseNeuralNetwork(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
print 'Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1])
###Output
Test quality: 0.972384121561
###Markdown
Creating very specific expression estimators. We can use **hep_ml.nnet** to optimize any expression as a black box. For simplicity, let's assume we have only three variables: $\text{var}_1, \text{var}_2, \text{var}_3.$ And from some physical intuition we are sure that this is a good expression to discriminate signal and background:$$\text{output} = c_1 \text{var}_1 + c_2 \log[ \exp(\text{var}_2 + \text{var}_3) + \exp(c_3)] + c_4 \dfrac{\text{var}_3}{\text{var}_2} + c_5 $$**Note**: I have written some random expression here; in practice it comes from physical intuition (or from looking at the data).
###Code
class CustomNeuralNetwork(AbstractNeuralNetworkClassifier):
def prepare(self):
# getting number of layers in input, hidden, output layers
# note that we support only one hidden layer here
n1, n2, n3 = self.layers_
# checking that we have three variables in input + constant
assert n1 == 3 + 1
# creating parameters
c1, c2, c3, c4, c5 = self._create_scalar_parameters('c1', 'c2', 'c3', 'c4', 'c5')
# defining activation function
def activation(input):
v1, v2, v3 = input[:, 0], input[:, 1], input[:, 2]
return c1 * v1 + c2 * T.log(T.exp(v2 + v3) + T.exp(c3)) + c4 * v3 / v2 + c5
return activation
###Output
_____no_output_____
###Markdown
Writing a custom pretransformer: a very simple `scikit-learn` transformer which will transform each feature to be uniform on the range [0, 1].
###Code
from sklearn.base import BaseEstimator, TransformerMixin
from rep.utils import Flattener
class Uniformer(BaseEstimator, TransformerMixin):
# leaving only 3 features and flattening each variable
def fit(self, X, y=None):
self.transformers = []
X = numpy.array(X, dtype=float)
for column in range(X.shape[1]):
self.transformers.append(Flattener(X[:, column]))
return self
def transform(self, X):
X = numpy.array(X, dtype=float)
assert X.shape[1] == len(self.transformers)
for column, trans in enumerate(self.transformers):
X[:, column] = trans(X[:, column])
return X
# selecting three features to train:
train_features = train_data.columns[:3]
clf = CustomNeuralNetwork(layers=[5], epochs=1000, scaler=Uniformer())
clf.fit(train_data[train_features], train_labels)
print 'Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data[train_features])[:, 1])
###Output
Test quality: 0.914575996678
###Markdown
Ensembling neural networks: let's run the AdaBoost algorithm over the neural network.
###Code
from sklearn.ensemble import AdaBoostClassifier
base_nnet = MLPClassifier(layers=[5], scaler=Uniformer())
clf = AdaBoostClassifier(base_estimator=base_nnet, n_estimators=10)
clf.fit(train_data, train_labels)
print 'Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1])
###Output
Test quality: 0.977154302238
###Markdown
Neural networks. Neural networks inside [hep_ml](github.com/arogozhnikov/hep_ml) are very simple, but flexible. They use the [theano](http://deeplearning.net/software/theano/) library. **hep_ml.nnet** also provides tools to optimize any continuous expression as a decision function (there is an example below). Downloading a dataset: downloading the dataset from UCI and splitting it into train and test.
###Code
!cd toy_datasets; wget -O ../data/MiniBooNE_PID.txt -nc https://archive.ics.uci.edu/ml/machine-learning-databases/00199/MiniBooNE_PID.txt
import numpy, pandas
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
data = pandas.read_csv('../data/MiniBooNE_PID.txt', sep='\s\s*', skiprows=[0], header=None, engine='python')
labels = pandas.read_csv('../data/MiniBooNE_PID.txt', sep=' ', nrows=1, header=None)
labels = [1] * labels[1].values[0] + [0] * labels[2].values[0]
data.columns = ['feature_{}'.format(key) for key in data.columns]
train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.5, test_size=0.5, random_state=42)
###Output
_____no_output_____
###Markdown
Example of training a network: training a multilayer perceptron with one hidden layer of 5 neurons. In most cases, we simply use `MLPClassifier` with one or two hidden layers.
###Code
from hep_ml.nnet import MLPClassifier
from sklearn.metrics import roc_auc_score
clf = MLPClassifier(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
proba = clf.predict_proba(test_data)
print('Test quality:', roc_auc_score(test_labels, proba[:, 1]))
proba = clf.predict_proba(train_data)
print('Train quality:', roc_auc_score(train_labels, proba[:, 1]))
###Output
Train quality: 0.9712880497811155
###Markdown
Creating your own neural network. To create your own neural network, one should provide an activation function and define the parameters of the network. You are not limited to any particular structure in this function; **hep_ml.nnet** will consider it as a black box for optimization. The simplest way is to override the `prepare` method of `AbstractNeuralNetworkClassifier`.
###Code
from hep_ml.nnet import AbstractNeuralNetworkClassifier
from theano import tensor as T
class SimpleNeuralNetwork(AbstractNeuralNetworkClassifier):
def prepare(self):
# getting number of layers in input, hidden, output layers
# note that we support only one hidden layer here
n1, n2, n3 = self.layers_
# creating parameters of neural network
W1 = self._create_matrix_parameter('W1', n1, n2)
W2 = self._create_matrix_parameter('W2', n2, n3)
# defining activation function
def activation(input):
first = T.nnet.sigmoid(T.dot(input, W1))
return T.dot(first, W2)
return activation
clf = SimpleNeuralNetwork(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
print('Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1]))
###Output
Test quality: 0.9667676784980118
###Markdown
Example of a very specific neural network: this NN has one hidden layer, but the layer is quite strange, as it accounts for pairwise correlations between inputs.
###Code
from hep_ml.nnet import PairwiseNeuralNetwork
clf = PairwiseNeuralNetwork(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
print('Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1]))
###Output
Test quality: 0.9713998457401084
###Markdown
Fitting very specific expressions as estimators. One can use **hep_ml.nnet** to optimize any expression as a black box. For simplicity, let's assume we have only three variables: $\text{var}_1, \text{var}_2, \text{var}_3.$ And from some physical intuition we are sure that this is a good expression to discriminate signal and background:$$\text{output} = c_1 \text{var}_1 + c_2 \log \left[ \exp(\text{var}_2 + \text{var}_3) + \exp(c_3) \right] + c_4 \dfrac{\text{var}_3}{\text{var}_2} + c_5 $$**Note**: I have written some random expression here; in practice it comes from physical intuition (or from looking at the data).
###Code
class CustomNeuralNetwork(AbstractNeuralNetworkClassifier):
def prepare(self):
# getting number of layers in input, hidden, output layers
# note that we support only one hidden layer here
n1, n2, n3 = self.layers_
# checking that we have three variables in input + constant
assert n1 == 3 + 1
# creating parameters
c1 = self._create_scalar_parameter('c1')
c2 = self._create_scalar_parameter('c2')
c3 = self._create_scalar_parameter('c3')
c4 = self._create_scalar_parameter('c4')
c5 = self._create_scalar_parameter('c5')
# defining activation function
def activation(input):
v1, v2, v3 = input[:, 0], input[:, 1], input[:, 2]
return c1 * v1 + c2 * T.log(T.exp(v2 + v3) + T.exp(c3)) + c4 * v3 / v2 + c5
return activation
###Output
_____no_output_____
###Markdown
Writing a custom pretransformer. Below we define a very simple `scikit-learn` transformer which will transform each feature to be uniform on the range [0, 1].
###Code
from sklearn.base import BaseEstimator, TransformerMixin
from rep.utils import Flattener
class Uniformer(BaseEstimator, TransformerMixin):
# leaving only 3 features and flattening each variable
def fit(self, X, y=None):
self.transformers = []
X = numpy.array(X, dtype=float)
for column in range(X.shape[1]):
self.transformers.append(Flattener(X[:, column]))
return self
def transform(self, X):
X = numpy.array(X, dtype=float)
assert X.shape[1] == len(self.transformers)
for column, trans in enumerate(self.transformers):
X[:, column] = trans(X[:, column])
return X
# selecting three features to train:
train_features = train_data.columns[:3]
clf = CustomNeuralNetwork(layers=[5], epochs=1000, scaler=Uniformer())
clf.fit(train_data[train_features], train_labels)
print('Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data[train_features])[:, 1]))
###Output
Test quality: 0.9145783471715777
###Markdown
Ensembling of neural networks: let's run the AdaBoost algorithm over the neural network. Boosting of networks is rarely seen in practice due to the high cost and minor positive effect (but it is not senseless).
###Code
from sklearn.ensemble import AdaBoostClassifier
base_nnet = MLPClassifier(layers=[5], scaler=Uniformer())
clf = AdaBoostClassifier(base_estimator=base_nnet, n_estimators=10)
clf.fit(train_data, train_labels)
print('Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1]))
###Output
Test quality: 0.9771387726319765
|
Maxwell_Kwarteng_Project.ipynb | ###Markdown
###Code
!pip install pandas_datareader==0.7.0
# Import pandas datareader
import pandas_datareader
pandas_datareader.__version__
import pandas as pd
from pandas_datareader import data
# Set the start and end date
start_date = '2013-01-01'
end_date = '2019-10-01'
# Set the ticker
ticker = ['AMZN', 'GOOGL', 'FB', 'AAPL']
# Get the data (note: this rebinds the name `data`, shadowing the pandas_datareader module imported above)
data = data.get_data_yahoo(ticker, start_date, end_date)
data.head()
import matplotlib.pyplot as plt
%matplotlib inline
data['Adj Close'].plot()
plt.show()
# Plot the adjusted close price
data['Adj Close'].plot(figsize=(12, 8))
# Define the label for the title of the figure
plt.title("Adjusted Close Price", fontsize=16)
# Define the labels for x-axis and y-axis
plt.ylabel('Price', fontsize=15)
plt.xlabel('Year', fontsize=15)
# Plot the grid lines
plt.grid(which="major", color='k', linestyle='-.', linewidth=0.5)
# Show the plot
plt.show()
print(data.shape)
###Output
_____no_output_____ |
Height vs Free Throw Percentage.ipynb | ###Markdown
Import Dependencies
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Load Data
###Code
df = pd.read_csv('nbaPlayers.csv')
df
###Output
_____no_output_____
###Markdown
Preprocess Data
###Code
# Get number of years played as new column
df['Years Played'] = df.apply(lambda row: row.To - row.From, axis=1)
# Convert height from Feet to CM
def convertHeight(row):
height_in_feet = row['Ht']
height_in_cm = ((float(height_in_feet[0]) * 12) + float(height_in_feet[2:])) * 2.54
return height_in_cm
df['Height(cm)'] = df.apply(lambda row: convertHeight(row), axis=1)
df
# Get active players
active_players_df = df[df['To'] == 2021]
active_players_df
# Get active players with at least 1 year of experience
experienced_players = active_players_df[active_players_df['Years Played'] >= 1]
experienced_players
# Convert filtered players into a csv file
experienced_players.to_csv('filteredPlayers.csv')
# Remove usernames of players
def remove_slugs(row):
name = row['Player']
name = name.split("|")[0]
return name
experienced_players['Player'] = experienced_players.apply(lambda row: remove_slugs(row), axis=1)
experienced_players
###Output
_____no_output_____
###Markdown
Working with basketball-reference-scraper API
###Code
from basketball_reference_scraper.players import get_stats
###Output
_____no_output_____
###Markdown
Add Career Free Throw Pct to experienced_players Dataframe The API used to collect the data was not able to get the Free Throw Percentage of all active and experienced players. Players that were left out were stored in a list. These players' Free Throw Percentage will be retrieved from the website (basketball-reference.com) and added manually.
###Code
experienced_players.insert(10, 'FT Pct', 0)
players_left = []
def get_avg_ft_pct(row):
player = row['Player']
ft_avg = 0
try:
player_stats = get_stats(player)
ft_avg = player_stats['FT%'].mean()
except:
players_left.append(player)
ft_avg = 'N/A'
return ft_avg
experienced_players['FT Pct'] = experienced_players.apply(lambda row: get_avg_ft_pct(row), axis=1)
experienced_players
len(players_left)
players_left
###Output
_____no_output_____
###Markdown
Drop unneccessary columns
###Code
columns_to_drop = ['From', 'To', 'Pos', 'Ht', 'Wt', 'Birth Date', 'Colleges']
required_data = experienced_players.drop(columns=columns_to_drop)
required_data
###Output
_____no_output_____
###Markdown
Generate graph
###Code
# To add the free throw percentage data for the remaining players, the required data is saved into a CSV file and updated manually
# required_data.to_csv('requiredData.csv')
# After updating the CSV file containing the required data, the modified CSV file is loaded to generate a graph and calculate correlation
data_df = pd.read_csv('requiredData.csv')
ax1 = data_df.plot.scatter(x='Height(cm)', y='FT Pct')
###Output
_____no_output_____
###Markdown
Calculating Pearson Product-Moment Correlation Coefficient
###Code
# Get Covariance of both variables
height_and_ft_df = data_df.drop(columns=['Unnamed: 0', 'Player', 'Years Played'])
height_and_ft_df.cov()
# We know that the covariance of the two variables is -0.315670
covariance_x_y = -0.315670
# Get standard deviation of each variable
height_and_ft_df.std()
standard_deviation_x = 8.737559
standard_deviation_y = 0.103652
# Plug in calculated values into correlation formula
coefficient = covariance_x_y / (standard_deviation_x * standard_deviation_y)
coefficient
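# Cross-check (a sketch): pandas can compute the Pearson correlation directly from the same two
# columns; it should agree with the hand calculation above up to rounding of the hard-coded
# covariance and standard deviations.
height_and_ft_df.corr()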
###Output
_____no_output_____ |
notebooks/Figure 5 - UMAP d60 vs d35.ipynb | ###Markdown
Load UMAP model and analysis dataframe
###Code
# imports assumed for this notebook; `randomly_sample` (used below) is taken to be a helper from this project
import os
import joblib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm

working_dir = '/data/datasets/organoid_phenotyping/analysis/d35_vs_d60/'
umap = joblib.load(os.path.join(working_dir, 'model_d35_d60.umap'))
analysis = pd.read_csv(os.path.join(working_dir, 'analysis.csv'), index_col=0)
analysis
###Output
_____no_output_____
###Markdown
Get all profiles and labels
###Code
n = 5000
np.random.seed(1)
dfs = []
for org in tqdm(analysis.index, total=len(analysis)):
folder = analysis['type'].loc[org]
org_dir = os.path.join(working_dir, folder, org)
profiles = np.load(os.path.join(org_dir, 'dataset/cyto_profiles.npy'))
# sample_idx = np.load(os.path.join(org_dir, 'dataset/cyto_sample_index.npy'))
labels = np.load(os.path.join(org_dir, 'cyto_labels.npy'))
profiles_sample, labels_sample = randomly_sample(n, profiles, labels)
x = umap.transform(profiles_sample.reshape((len(profiles_sample), -1)))
df = pd.DataFrame({'x': x[:, 0],
'y': x[:, 1],
'label': labels_sample,
'organoid': len(x) * [org],
'type': len(x) * [folder]})
dfs.append(df)
df = pd.concat(dfs)
df['type'].unique()
n_orgs = 4
plt.figure(figsize=(6, 18))
plt.subplot(3, 1, 1)
sns.scatterplot(x='x', y='y', hue='type', data=df, edgecolor=None, s=2, alpha=0.3, palette=['b', 'r'])
plt.title('d35 vs d60')
plt.xlabel('UMAP 1')
plt.ylabel('UMAP 2')
plt.subplot(3, 1, 2)
sns.scatterplot(x='x', y='y', hue='organoid', data=df.where(df['type'] == 'Lancaster_d35').dropna().iloc[:n_orgs*n],
edgecolor=None, s=3, alpha=1, legend=None)
plt.title('d35')
plt.xlabel('UMAP 1')
plt.ylabel('UMAP 2')
plt.subplot(3, 1, 3)
sns.scatterplot(x='x', y='y', hue='organoid', data=df.where(df['type'] == 'Lancaster_d60').dropna().iloc[2*n:(n_orgs+2)*n],
edgecolor=None, s=3, alpha=1, legend=None)
plt.title('d60')
plt.xlabel('UMAP 1')
plt.ylabel('UMAP 2')
plt.tight_layout()
sns.despine()
plt.savefig(os.path.join(working_dir, 'umap_d35_vs_d60.pdf'), bbox_inches='tight')
plt.show()
plt.figure(figsize=(6, 6))
sns.scatterplot(x='x', y='y', hue='label', data=df)
plt.tight_layout()
sns.despine()
# plt.savefig(os.path.join(working_dir, 'umap_d35_vs_d60_clusters.pdf'), bbox_inches='tight')
plt.show()
df.where(df['label'] == 0).dropna()
###Output
_____no_output_____ |
Regridding Transform.ipynb | ###Markdown
Automatic regridding of ECMWF data to match CESM. The purpose of this notebook is to automate regridding of two datasets based on the one with the lowest resolution.
###Code
import xarray as xr
import pandas as pd
import numpy as np
from spharm import Spharmt, regrid
###Output
_____no_output_____
###Markdown
Below is a helper function used for the regridding.
###Code
def regrid_field(field, lat, lon, lat_new, lon_new):
    """Regrid a (time, lat, lon) field from the old grid to the new grid using spherical harmonics."""
    nlat_old, nlon_old = np.size(lat), np.size(lon)
    nlat_new, nlon_new = np.size(lat_new), np.size(lon_new)
    spec_old = Spharmt(nlon_old, nlat_old, gridtype='regular', legfunc='computed')
    spec_new = Spharmt(nlon_new, nlat_new, gridtype='regular', legfunc='computed')
    field_new = []
    # regrid each time slice separately
    for field_old in field:
        regridded_field = regrid(spec_old, spec_new, field_old, ntrunc=None, smooth=None)
        field_new.append(regridded_field)
    field_new = np.array(field_new)
    return field_new
###Output
_____no_output_____
###Markdown
Prepare for MIC
###Code
input_file_CESM : 'CWLFilePathInput' = "CESM_12month.nc"
input_file_ECMWF : 'CWLFilePathInput' = "ECMWF_12month.nc"
output_file : 'CWLFilePathOutput' = "ECMWF_regridded.nc"
###Output
_____no_output_____
###Markdown
Load the data
###Code
model = ['ECMWF','CESM']
ds_ecmwf=xr.open_dataset(input_file_ECMWF)
ds_cesm=xr.open_dataset(input_file_CESM)
###Output
_____no_output_____
###Markdown
ECMWF stores latitude in decreasing order. Reverse it for comparison with CESM.
###Code
#Inverse the latitude information in that file
ds_ecmwf = ds_ecmwf.reindex(latitude=list(reversed(ds_ecmwf.latitude)))
###Output
_____no_output_____
###Markdown
Regridding. The next two cells perform the actual regridding operations for the two variables of interest.
###Code
TS_regrid = regrid_field(ds_ecmwf['t2m'].values,
ds_ecmwf['latitude'].values,
ds_ecmwf['longitude'].values,
ds_cesm['lat'].values,
ds_cesm['lon'].values)
tp_regrid = regrid_field(ds_ecmwf['tp'].values,
ds_ecmwf['latitude'].values,
ds_ecmwf['longitude'].values,
ds_cesm['lat'].values,
ds_cesm['lon'].values)
###Output
_____no_output_____
###Markdown
Repack into a netcdf
###Code
time = ds_ecmwf['time']
latitude = ds_cesm['lat']
longitude = ds_cesm['lon']
t2m = xr.DataArray(TS_regrid,coords=[time,latitude,longitude],dims=["time","latitude","longitude"])
t2m.attrs=ds_ecmwf['t2m'].attrs
tp = xr.DataArray(tp_regrid,coords=[time,latitude,longitude],dims=["time","latitude","longitude"])
tp.attrs=ds_ecmwf['tp'].attrs
# Pack into a new dataset
ds = t2m.to_dataset(name='t2m')
ds['tp'] = tp
ds.attrs = ds_ecmwf.attrs
ds.attrs['description'] = "This dataset was regridded to CESM grid"
#Inverse the latitude to go back to ECMWF standard
ds=ds.sortby('latitude', ascending=False)
ds.to_netcdf(output_file)
###Output
_____no_output_____
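###Markdown
A quick way to confirm the transform worked is to reopen the file that was just written and check that the regridded variables now sit on the CESM latitude/longitude grid. This is a small sketch reusing the `output_file` and `ds_cesm` objects defined above.
###Code
check = xr.open_dataset(output_file)
print(check['t2m'].shape, check['tp'].shape)
# the spatial dimensions should match the CESM grid
assert check['t2m'].shape[1:] == (ds_cesm['lat'].size, ds_cesm['lon'].size)
check.close()
###Output
_____no_output_____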
###Markdown
Automatic regridding of ECMWF data to match CESM. The purpose of this notebook is to automate regridding of two datasets based on the one with the lowest resolution.
###Code
import xarray as xr
import pandas as pd
import numpy as np
from spharm import Spharmt, regrid
###Output
_____no_output_____
###Markdown
Below is a helper function used for the regridding.
###Code
def regrid_field(field, lat, lon, lat_new, lon_new):
nlat_old, nlon_old = np.size(lat), np.size(lon)
nlat_new, nlon_new = np.size(lat_new), np.size(lon_new)
spec_old = Spharmt(nlon_old, nlat_old, gridtype='regular', legfunc='computed')
spec_new = Spharmt(nlon_new, nlat_new, gridtype='regular', legfunc='computed')
field_new = []
for field_old in field:
regridded_field = regrid(spec_old, spec_new, field_old, ntrunc=None, smooth=None)
field_new.append(regridded_field)
field_new = np.array(field_new)
return field_new
###Output
_____no_output_____
###Markdown
Load the data
###Code
model = ['ECMWF','CESM']
ds_ecmwf=xr.open_dataset('ECMWF_12month.nc')
ds_cesm=xr.open_dataset('CESM_12month.nc')
###Output
_____no_output_____
###Markdown
ECMWF stores latitude in decreasing order. Reverse it for comparison with CESM.
###Code
#Inverse the latitude information in that file
ds_ecmwf = ds_ecmwf.reindex(latitude=list(reversed(ds_ecmwf.latitude)))
###Output
_____no_output_____
###Markdown
Regridding. The next two cells perform the actual regridding operations for the two variables of interest.
###Code
TS_regrid = regrid_field(ds_ecmwf['t2m'].values,
ds_ecmwf['latitude'].values,
ds_ecmwf['longitude'].values,
ds_cesm['lat'].values,
ds_cesm['lon'].values)
tp_regrid = regrid_field(ds_ecmwf['tp'].values,
ds_ecmwf['latitude'].values,
ds_ecmwf['longitude'].values,
ds_cesm['lat'].values,
ds_cesm['lon'].values)
###Output
_____no_output_____
###Markdown
Repack into a netcdf
###Code
time = ds_ecmwf['time']
latitude = ds_cesm['lat']
longitude = ds_cesm['lon']
t2m = xr.DataArray(TS_regrid,coords=[time,latitude,longitude],dims=["time","latitude","longitude"])
t2m.attrs=ds_ecmwf['t2m'].attrs
tp = xr.DataArray(tp_regrid,coords=[time,latitude,longitude],dims=["time","latitude","longitude"])
tp.attrs=ds_ecmwf['tp'].attrs
# Pack into a new dataset
ds = t2m.to_dataset(name='t2m')
ds['tp'] = tp
ds.attrs = ds_ecmwf.attrs
ds.attrs['description'] = "This dataset was regridded to CESM grid"
#Inverse the latitude to go back to ECMWF standard
ds=ds.sortby('latitude', ascending=False)
ds.to_netcdf('ECMWF_regridded_12month.nc')
###Output
_____no_output_____ |