## Portfolio Exercise: Starbucks
<br>
<img src="https://opj.ca/wp-content/uploads/2018/02/New-Starbucks-Logo-1200x969.jpg" width="200" height="200">
<br>
<br>
#### Background Information
The dataset provided in this portfolio exercise was originally used as a take-home assignment given to Starbucks job candidates. The data consists of about 120,000 data points split in a 2:1 ratio between training and test files. In the experiment simulated by the data, an advertising promotion was tested to see whether it would bring more customers to purchase a specific product priced at $10. Since it costs the company $0.15 to send out each promotion, it is best to limit the promotion to those who are most receptive to it. Each data point includes one column indicating whether or not an individual was sent a promotion for the product, and one column indicating whether or not that individual eventually purchased the product. Each individual also has seven additional features associated with them, provided abstractly as V1-V7.
#### Optimization Strategy
Your task is to use the training data to understand what patterns in V1-V7 indicate that a promotion should be provided to a user. Specifically, your goal is to maximize the following metrics:
* **Incremental Response Rate (IRR)**
IRR depicts how many more customers purchased the product with the promotion than would have without it. Mathematically, it is the ratio of the number of purchasers in the promotion group to the total number of customers in the promotion group (_treatment_), minus the ratio of the number of purchasers in the non-promotion group to the total number of customers in the non-promotion group (_control_).
$$ IRR = \frac{purch_{treat}}{cust_{treat}} - \frac{purch_{ctrl}}{cust_{ctrl}} $$
* **Net Incremental Revenue (NIR)**
NIR depicts how much is made (or lost) by sending out the promotion. Mathematically, this is 10 times the total number of purchasers that received the promotion minus 0.15 times the number of promotions sent out, minus 10 times the number of purchasers who were not given the promotion.
$$ NIR = (10\cdot purch_{treat} - 0.15 \cdot cust_{treat}) - 10 \cdot purch_{ctrl}$$
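As a concrete reference point, both metrics can be computed directly from the training file. The sketch below assumes the data has been read into a DataFrame `train_data` (as done further below) with a `Promotion` column holding 'Yes'/'No' and a binary `purchase` column; these column names are an assumption, so adjust them if your copy of the data differs.
```
# Minimal sketch of IRR and NIR over the whole training set
# (assumes 'Promotion' in {'Yes', 'No'} and 'purchase' in {0, 1})
treat = train_data[train_data['Promotion'] == 'Yes']
ctrl = train_data[train_data['Promotion'] == 'No']

irr = treat['purchase'].mean() - ctrl['purchase'].mean()
nir = 10 * treat['purchase'].sum() - 0.15 * len(treat) - 10 * ctrl['purchase'].sum()
print(irr, nir)
```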
For a full description of what Starbucks provides to candidates see the [instructions available here](https://drive.google.com/open?id=18klca9Sef1Rs6q8DW4l7o349r8B70qXM).
Below you can find the training data provided. Explore the data and different optimization strategies.
#### How To Test Your Strategy?
When you feel like you have an optimization strategy, complete the `promotion_strategy` function to pass to the `test_results` function.
From past data, we know there are four possible outcomes:
Table of actual promotion vs. predicted promotion customers:
<table>
<tr><th></th><th colspan = '2'>Actual</th></tr>
<tr><th>Predicted</th><th>Yes</th><th>No</th></tr>
<tr><th>Yes</th><td>I</td><td>II</td></tr>
<tr><th>No</th><td>III</td><td>IV</td></tr>
</table>
The metrics are only being compared for the individuals we predict should obtain the promotion – that is, quadrants I and II. Since the first set of individuals that receive the promotion (in the training set) receive it randomly, we can expect that quadrants I and II will have approximately equivalent participants.
Comparing quadrant I to II then gives an idea of how well your promotion strategy will work in the future.
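As a rough sketch of that comparison (not the actual `test_results` implementation), assume a validation DataFrame `valid_df` with the same `Promotion` and `purchase` columns plus your strategy's output stored in a hypothetical `pred` column:
```
# Quadrant I vs. II comparison, restricted to the people we would promote
sel = valid_df[valid_df['pred'] == 'Yes']     # predicted 'Yes'
quad_I = sel[sel['Promotion'] == 'Yes']       # quadrant I: actually promoted
quad_II = sel[sel['Promotion'] == 'No']       # quadrant II: not promoted
irr = quad_I['purchase'].mean() - quad_II['purchase'].mean()
nir = 10 * quad_I['purchase'].sum() - 0.15 * len(quad_I) - 10 * quad_II['purchase'].sum()
```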
Get started by reading in the data below. See how each variable or combination of variables along with a promotion influences the chance of purchasing. When you feel like you have a strategy for who should receive a promotion, test your strategy against the test dataset used in the final `test_results` function.
```
# load in packages
from itertools import combinations
from test_results import test_results, score
import numpy as np
import pandas as pd
import scipy as sp
import sklearn as sk
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
# load in the data
train_data = pd.read_csv('./training.csv')
train_data.head()
# Cells for you to work and document as necessary -
# definitely feel free to add more cells as you need
def promotion_strategy(df):
'''
INPUT
df - a dataframe with *only* the columns V1 - V7 (same as train_data)
OUTPUT
promotion_df - np.array with the values
'Yes' or 'No' related to whether or not an
individual should recieve a promotion
should be the length of df.shape[0]
Ex:
INPUT: df
V1 V2 V3 V4 V5 V6 V7
2 30 -1.1 1 1 3 2
3 32 -0.6 2 3 2 2
2 30 0.13 1 1 4 2
OUTPUT: promotion
array(['Yes', 'Yes', 'No'])
indicating the first two users would receive the promotion and
the last should not.
'''
return promotion
# This will test your results, and provide you back some information
# on how well your promotion_strategy will work in practice
test_results(promotion_strategy)
```
---
[Index](Index.ipynb) - [Next](Widget List.ipynb)
# Simple Widget Introduction
## What are widgets?
Widgets are eventful python objects that have a representation in the browser, often as a control like a slider, textbox, etc.
## What can they be used for?
You can use widgets to build **interactive GUIs** for your notebooks.
You can also use widgets to **synchronize stateful and stateless information** between Python and JavaScript.
## Using widgets
To use the widget framework, you need to import `ipywidgets`.
```
import ipywidgets as widgets
```
### repr
Widgets have their own display `repr` which allows them to be displayed using IPython's display framework. Constructing and returning an `IntSlider` automatically displays the widget (as seen below). Widgets are displayed inside the output area below the code cell. Clearing cell output will also remove the widget.
```
widgets.IntSlider()
```
### display()
You can also explicitly display the widget using `display(...)`.
```
from IPython.display import display
w = widgets.IntSlider()
display(w)
```
### Multiple display() calls
If you display the same widget twice, the displayed instances in the front-end will remain in sync with each other. Try dragging the slider below and watch the slider above.
```
display(w)
```
## Why does displaying the same widget twice work?
Widgets are represented in the back-end by a single object. Each time a widget is displayed, a new representation of that same object is created in the front-end. These representations are called views.

### Closing widgets
You can close a widget by calling its `close()` method.
```
display(w)
w.close()
```
## Widget properties
All of the IPython widgets share a similar naming scheme. To read the value of a widget, you can query its `value` property.
```
w = widgets.IntSlider()
display(w)
w.value
```
Similarly, to set a widget's value, you can set its `value` property.
```
w.value = 100
```
### Keys
In addition to `value`, most widgets share `keys`, `description`, and `disabled`. To see the entire list of synchronized, stateful properties of any specific widget, you can query the `keys` property.
```
w.keys
```
### Shorthand for setting the initial values of widget properties
While creating a widget, you can set some or all of the initial values of that widget by defining them as keyword arguments in the widget's constructor (as seen below).
```
widgets.Text(value='Hello World!', disabled=True)
```
## Linking two similar widgets
If you need to display the same value two different ways, you'll have to use two different widgets. Instead of attempting to manually synchronize the values of the two widgets, you can use the `link` or `jslink` function to link two properties together (the difference between these is discussed in [Widget Events](Widget Events.ipynb)). Below, the values of two widgets are linked together.
```
a = widgets.FloatText()
b = widgets.FloatSlider()
display(a,b)
mylink = widgets.jslink((a, 'value'), (b, 'value'))
```
### Unlinking widgets
Unlinking the widgets is simple. All you have to do is call `.unlink` on the link object. Try changing one of the widgets above after unlinking to see that they can be independently changed.
```
# mylink.unlink()
```
[Index](Index.ipynb) - [Next](Widget List.ipynb)

---
```
import numpy as np
from pandas import Series, DataFrame
import pandas as pd
from sklearn import preprocessing, tree
from sklearn.metrics import accuracy_score
# from sklearn.model_selection import train_test_split, KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import KFold  # note: removed in newer scikit-learn; use sklearn.model_selection.KFold (different API) there
df=pd.read_json('../01_Preprocessing/First.json').sort_index()
df.head(2)
def mydist(x, y):
return np.sum((x-y)**2)
def jaccard(a, b):
intersection = float(len(set(a) & set(b)))
union = float(len(set(a) | set(b)))
return 1.0 - (intersection/union)
# http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html
dist=['braycurtis','canberra','chebyshev','cityblock','correlation','cosine','euclidean','dice','hamming','jaccard','kulsinski','matching','rogerstanimoto','russellrao','sokalsneath','yule']
algorithm=['ball_tree', 'kd_tree', 'brute']
len(dist)
```
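Note that the two custom metrics defined above (`mydist` and `jaccard`) are not included in the `dist` list below. If you want to try them, scikit-learn's `KNeighborsClassifier` also accepts a callable metric; a small sketch (using the brute-force algorithm, which is the safest choice for arbitrary callables):
```
# Sketch: plugging the custom squared-distance function into k-NN
knn_custom = KNeighborsClassifier(n_neighbors=3, weights='distance',
                                  algorithm='brute', metric=mydist)
```
Python callables are considerably slower than the built-in metric strings, so this is mainly useful for experimentation.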
## On country (only MS)
```
df.fund= df.fund=='TRUE'
df.gre= df.gre=='TRUE'
df.highLevelBachUni= df.highLevelBachUni=='TRUE'
df.highLevelMasterUni= df.highLevelMasterUni=='TRUE'
df.uniRank.fillna(294,inplace=True)
df.columns
oldDf=df.copy()
df=df[['countryCoded','degreeCoded','engCoded', 'fieldGroup','fund','gpaBachelors','gre', 'highLevelBachUni', 'paper','uniRank']]
df=df[df.degreeCoded==0]
del df['degreeCoded']
bestAvg=[]
for alg in algorithm:
for dis in dist:
k_fold = KFold(n=len(df), n_folds=5)
scores = []
try:
clf = KNeighborsClassifier(n_neighbors=3, weights='distance',algorithm=alg, metric=dis)
except Exception as err:
# print(alg,dis,'err')
continue
for train_indices, test_indices in k_fold:
xtr = df.iloc[train_indices,(df.columns != 'countryCoded')]
ytr = df.iloc[train_indices]['countryCoded']
xte = df.iloc[test_indices, (df.columns != 'countryCoded')]
yte = df.iloc[test_indices]['countryCoded']
clf.fit(xtr, ytr)
ypred = clf.predict(xte)
acc=accuracy_score(list(yte),list(ypred))
scores.append(acc*100)
print(alg,dis,np.average(scores))
bestAvg.append(np.average(scores))
print('>>>>>>>Best: ',np.max(bestAvg))
```
## On Fund (only MS)
```
bestAvg=[]
for alg in algorithm:
for dis in dist:
k_fold = KFold(n=len(df), n_folds=5)
scores = []
try:
clf = KNeighborsClassifier(n_neighbors=3, weights='distance',algorithm=alg, metric=dis)
except Exception as err:
continue
for train_indices, test_indices in k_fold:
xtr = df.iloc[train_indices, (df.columns != 'fund')]
ytr = df.iloc[train_indices]['fund']
xte = df.iloc[test_indices, (df.columns != 'fund')]
yte = df.iloc[test_indices]['fund']
clf.fit(xtr, ytr)
ypred = clf.predict(xte)
acc=accuracy_score(list(yte),list(ypred))
score=acc*100
scores.append(score)
if (len(bestAvg)>1) :
if(score > np.max(bestAvg)) :
bestClf=clf
bestAvg.append(np.average(scores))
print (alg,dis,np.average(scores))
print('>>>>>>>Best: ',np.max(bestAvg))
```
### Best : ('kd_tree', 'cityblock', 77.692144892144896)
```
me=[0,2,0,2.5,False,False,1.5,400]
n=bestClf.kneighbors([me])
n
for i in n[1]:
print(xtr.iloc[i])
```
---
```
import warnings
import collections
import os
import pandas as pd # manage data
import pickle as pk # load and save python objects
import numpy as np # matrix operations
import matplotlib.pyplot as plt
import unidecode # Deal with codifications
import regex # use regular expresions
from email.header import Header, decode_header # e-mails helper functions
from nltk.tokenize import word_tokenize # Natural Language Toolkit
from selectolax.parser import HTMLParser # Optimized html library
from tqdm import tqdm # For loops decorator
warnings.filterwarnings('ignore')
%matplotlib inline
# Helper functions
def get_text_from_html(html):
'''
Extracted from https://rushter.com/blog/python-fast-html-parser/ to eliminate html tags from email body
Parameters
html: html file
Return
text: html text content
'''
tree = HTMLParser(html)
if tree.body is None:
return html
for tag in tree.css('script'):
tag.decompose()
for tag in tree.css('style'):
tag.decompose()
text = unidecode.unidecode(tree.body.text(separator=' '))
return text
def clean_mail_subject(mail_header):
'''
Clean mail subject
Parameters
mail_header: email.Header object or string or None.
Return
decoded_header: string containing mail subject
'''
if type(mail_header) == Header:
decoded_header = decode_header(mail_header.encode())[0][0].decode('utf-8')
else:
decoded_header = mail_header
if decoded_header[:5] == 'Fwd: ':
decoded_header = decoded_header[5:]
elif decoded_header[:4] == 'Re: ':
decoded_header = decoded_header[4:]
    decoded_header = regex.sub(r"[^a-zA-Z?.!,¿]+", " ", decoded_header)  # the regex module is imported above; re is not
return decoded_header
def clean_mail_body(str_):
'''Clean mail body'''
str_ = str_.replace('\t', '')
new_str = regex.split(r'(\bEl \d{1,2} de [a-z]+ de 2\d{3},)', str_)[0]
new_str = regex.split(r'(\bOn \d{1,2} de [a-z]+. de 2\d{3},)', new_str)[0]
if len(new_str) > 0:
return new_str
else:
return str_
def filter_firm(str_):
'''Clean mail firm'''
new_str = regex.split(r'(Adela C. Santillana Figueroa)|(Claudia Alarcon Burga)|(Miguel Koch Zavaleta)|(Rocio Villavicencio Ripas)|(Maria Alejandra Alba S.)|(Fiorella.)|(Fiorella Romero Cardenas)|(Directora de Servicios Academicos y Registro)|(Asistente Administrativ[a|o])|(Servicios Academicos y Registro)|(FORMAMOS LIDERES RESPONSABLES PARA EL MUNDO)|(up.edu.pe)|(Jr. Sanchez Cerro 2141 Jesus Maria, Lima 11)|(T. 511-219-0100 Ext. [0-9]{4})|([a-zA-z0-9-.][email protected])|(Pensemos en el AMBIENTE antes de imprimir este mensaje)', str_)[0]
if len(new_str) > 0:
return new_str
else:
return str_
# # Define output dir
# outDir = 'output/'
# actualDir = 'data_cleaning_nlp'
# print()
# if not(actualDir in os.listdir(outDir)):
# os.mkdir(os.path.join(outDir, actualDir))
# print('output dir created')
# else:
# print('output dir already created')
# print()
ROOT = "~/Documents/TF_chatbot"
input_file = "../../../text_data/mails.txt"
with open(input_file, "r") as input_f:
    for line in input_f:  # each iteration already yields one line (str); str has no readline()
        s = line
# for mail in mails:
# for item in mail:
# output.write(str(item) + '\t')
# output.write('\n')
s
# Load complete email data
mails = pk.load(open("../../../text_data/mails.txt", 'rb'))
df = pd.DataFrame(mails, columns=['id','subject','date','sender','recipient','body'])
df.info()
print()
print(df.isna().sum())
print()
df['date'] = pd.to_datetime(df['date'], infer_datetime_format=True) # transform dates to datetime format
df['date'].describe()
```
#### Data period: 6.5 academic terms (+ 3 term-0 cycles)
```
df['date'].hist(bins=51, figsize=(10,5))
plt.xlim(df['date'].min(), df['date'].max())
plt.title('Histograma de la Fecha de Envío del Mensaje')
plt.ylabel('Número de Mensajes')
plt.xlabel('Año-Mes')
plt.show()
#plt.savefig('hist_fecha.svg', format='svg')
df['month'] = df['date'].dt.month
df['dayofweek'] = df['date'].dt.dayofweek
# Plot for month and day of week variables
day_value_counts = (df['dayofweek'].value_counts()/df.shape[0])*100
month_value_counts = (df['month'].value_counts()/df.shape[0])*100
monthnames_ES = ['Enero','Febrero','Marzo','Abril','Mayo','Junio','Julio','Agosto','Septiembre','Octubre','Noviembre','Diciembre']
daynames_ES = ['Lunes','Martes','Miércoles','Jueves','Viernes','Sábado','Domingo']
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(10,10))
month_value_counts.plot(ax=ax1, rot=0, kind='bar', title='Mensajes Enviados según el Mes (%)', color='b')
ax1.set_xticklabels(monthnames_ES)
ax1.set_ylabel('% de Mensajes')
ax1.set_xlabel('Mes')
day_value_counts.plot(ax=ax2, rot=0, kind='bar', title='Mensajes Enviados según el Día de la Semana (%)', color='b')
ax2.set_xticklabels(daynames_ES)
ax2.set_ylabel('% de Mensajes')
ax2.set_xlabel('Día de la Semana')
plt.tight_layout()
plt.show()
#fig.savefig('grafico_barras_dia_mes.svg', format='svg')
#fig.savefig('grafico_barras_dia_mes.png', format='png')
%%time
df['body'] = df['body'].apply(get_text_from_html) # strip html tags from email bodies using the helper defined above
# Extract sender and recipient email only
df['sender_email'] = df.sender.str.extract("([a-zA-z0-9-.]+@[a-zA-z0-9-.]+)")[0].str.lower()
df['recipient_email'] = df.recipient.str.extract("([a-zA-z0-9-.]+@[a-zA-z0-9-.]+)")[0].str.lower()
print()
print(df.isna().sum())
print()
# eliminate 'no reply' and 'automatic' msgs
df_noreply = df[~df.sender.str.contains('[email protected]').fillna(False)]
df_noautom = df_noreply[~df_noreply.subject.str.contains('Respuesta automática').fillna(False)]
# Separate msgs by type of sender
send_by_alumns = df_noautom[df_noautom.sender.str.contains('@alum.up.edu.pe').fillna(False)]
send_by_no_alumns = df_noautom[~df_noautom.sender.str.contains('@alum.up.edu.pe').fillna(False)]
send_by_internals = df_noautom[df_noautom.sender.str.contains('@up.edu.pe').fillna(False)]
print('# msgs send by alumns:', len(send_by_alumns))
print('# of alumns that send msgs:', len(send_by_alumns.sender_email.unique()))
len(send_by_internals)
# Clean mails subject
send_by_internals['subject'] = send_by_internals['subject'].apply(filterResponses)
```
## Email pairing algorithm
1. Extract the messages sent by each student and the messages sent to each student by internal users, respectively.
2. Extract the subject of each message from step 1. If a message's subject equals the subject of the previous message, increment the counter of messages with the same subject.
3. Using the same-subject counter, look up the subject extracted in step 2 among the emails sent by internal users to that student.
4. Build a list with the subject, the data of the email sent by the student, and the reply it received.
```
# Separate mails sended to each alumn
dfs = [send_by_internals[send_by_internals.recipient_email == alumn] for alumn in send_by_alumns.sender_email.unique()]
unique_alumns = send_by_alumns.sender_email.unique()
n = len(unique_alumns)
# Count causes to not being able to process a text
resp_date_bigger_than_input_date = 0
responses_with_same_subject_lower_than_counter = 0
subject_equal_none = 0
n_obs_less_than_0 = 0
repited_id = 0
for i, alumn in tqdm(enumerate(unique_alumns), total=n):
if len(dfs[i]) > 0:
temp_ = send_by_alumns[send_by_alumns.sender_email == alumn]
indexes = temp_.index
counter_subject = 0
subject_pre = 'initial_value'
for index in indexes:
subject = filterResponses(temp_.subject[index])
if subject != None:
if subject_pre == subject:
counter_subject += 1
else:
counter_subject = 0
subject_pre = subject
if len(dfs[i][dfs[i]['subject'] == subject]) > counter_subject:
input_date = temp_.loc[index, 'date']
resp_date = dfs[i]['date'][dfs[i]['subject'] == subject].iloc[counter_subject]
if input_date < resp_date:
input_id, sender, recipient, input_body = temp_.loc[index, ['id','sender','recipient','body']]
resp_id, resp_body = dfs[i][['id','body']][dfs[i]['subject'] == subject].iloc[counter_subject]
pair = np.array([[subject, sender, recipient, input_id, input_date, input_body, resp_id, resp_date, resp_body]],dtype=object)
if i == 0:
pairs = np.array(pair)
elif all([not(pair[0,3] in pairs[:,3]), not(pair[0,6] in pairs[:,6])]):
pairs = np.append(pairs, pair, axis=0)
else:
repited_id += 1
else:
resp_date_bigger_than_input_date += 1
else:
responses_with_same_subject_lower_than_counter += 1
else:
subject_equal_none += 1
else:
n_obs_less_than_0 += 1
```
# Format data
```
total_unpaired_mails = repited_id+resp_date_bigger_than_input_date+responses_with_same_subject_lower_than_counter+subject_equal_none+n_obs_less_than_0
print()
print('Filtros del algoritmo de emparejamiento')
print('resp_date_bigger_than_input_date:',resp_date_bigger_than_input_date)
print('subject_equal_none:',subject_equal_none)
print('repited_id:', repited_id)
print('no hay motivo pero no lo empareje:',len(send_by_alumns) - total_unpaired_mails - len(pairs) )
print('-'*50)
print('motivos de sar:')
print('el ultimo mensaje de la cadena del asunto no tuvo respuesta:',responses_with_same_subject_lower_than_counter)
print('no le respondieron ni el primer mensaje:',n_obs_less_than_0)
print('-'*50)
print('# of mails in total:', len(mails))
print('# msgs send by alumns:', len(send_by_alumns))
print('# of paired emails:', len(pairs))
print('% de paired mails:', round((len(pairs)/len(send_by_alumns))*100,2),'%')
print('total of unpaired mails: ', total_unpaired_mails)
print('% de unpaired mails:', round((total_unpaired_mails/len(send_by_alumns))*100,2),'%')
print()
# Load paired mails in a DataFrame
columns_names = ['subject', 'sender', 'recipient', 'input_id', 'input_date', 'input_body', 'resp_id', 'resp_date', 'resp_body']
paired_mails = pd.DataFrame(data=pairs, columns=columns_names)
paired_mails['input_date'] = pd.to_datetime(paired_mails['input_date'], infer_datetime_format=True)
paired_mails['resp_date'] = pd.to_datetime(paired_mails['resp_date'], infer_datetime_format=True)
paired_mails['input_month'] = paired_mails['input_date'].dt.month
paired_mails['input_dayofweek'] = paired_mails['input_date'].dt.dayofweek
# Plot for month and day of week variables
day_value_counts = (paired_mails['input_dayofweek'].value_counts()/df.shape[0])*100
month_value_counts = (paired_mails['input_month'].value_counts()/df.shape[0])*100
monthnames_ES = ['Enero','Febrero','Marzo','Abril','Mayo','Junio','Julio','Agosto','Septiembre','Octubre','Noviembre','Diciembre']
daynames_ES = ['Lunes','Martes','Miércoles','Jueves','Viernes','Sábado','Domingo']
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(5,5))
month_value_counts.plot(ax=ax1, rot=45, kind='bar', title='Mensajes Enviados según el Mes (%)', color='b')
ax1.set_xticklabels(monthnames_ES)
ax1.set_ylabel('% de Mensajes')
ax1.set_xlabel('Mes')
day_value_counts.plot(ax=ax2, rot=45, kind='bar', title='Mensajes Enviados según el Día de la Semana (%)', color='b')
ax2.set_xticklabels(daynames_ES)
ax2.set_ylabel('% de Mensajes')
ax2.set_xlabel('Día de la Semana')
plt.tight_layout()
plt.show()
#fig.savefig('grafico_barras_dia_mes.svg', format='svg')
#fig.savefig('grafico_barras_dia_mes.png', format='png')
paired_mails['input_date'].hist(bins=51, figsize=(10*1.5,5*1.5), color='blue')
plt.xlim(df['date'].min(), df['date'].max())
plt.title('Histograma de la Fecha de Envío del Mensaje de Alumnos',fontsize=20)
plt.ylabel('Número de Mensajes',fontsize=15)
plt.xlabel('Año-Mes',fontsize=15)
plt.yticks(fontsize=12.5)
plt.xticks(fontsize=12.5)
plt.savefig('hist_fecha_inputs.svg', dpi=300, format='svg')
plt.show()
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15*1.25,5*1.25))
for historyDir in historyDirs:
params = historyDir.replace('.pk','').split('_')[-4:]
try:
        history = pk.load(open(historyDir, 'rb'))  # pickle was imported as pk above
ax1.plot(range(len(history['loss'])), history['loss'], linewidth=5)
ax1.grid(True)
ax1.set_ylabel('Entropía Cruzada (Error)',fontsize=20)
ax1.set_xlabel('Época',fontsize=20)
ax1.set_title('Entrenamiento',fontsize=20)
ax1.set_xlim(-0.5, 100)
ax1.set
ax2.plot(range(len(history['val_loss'])), history['val_loss'], linewidth=5)
ax2.grid(True)
ax2.set_xlabel('Época',fontsize=20)
ax2.set_title('Validación',fontsize=20)
plt.suptitle('Curvas de Error',fontsize=25)
ax2.set_xlim(-0.5, 100)
except:
pass
fig.savefig('curvas_error.svg', dpi=300, format='svg')
paired_mails['resp_date'].hist(bins=51, figsize=(10,5), color='blue')
plt.xlim(df['date'].min(), df['date'].max())
plt.title('Histograma de la Fecha de Envío del Mensaje hacia Alumnos')
plt.ylabel('Número de Mensajes')
plt.xlabel('Año-Mes')
plt.show()
#fig.savefig('hist_fecha_resps.svg', format='svg')
# Create features to detect possible errors
paired_mails['resp_time'] = paired_mails['resp_date'] - paired_mails['input_date']
paired_mails['input_body_len'] = paired_mails['input_body'].apply(len)
paired_mails['resp_body_len'] = paired_mails['resp_body'].apply(len)
# Calculate input message lengths
input_len_stats = paired_mails['input_body_len'].describe([0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99]).round()
print()
print(input_len_stats)
print()
# Calculate response message lengths
resp_len_stats = paired_mails['resp_body_len'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99]).round()
print()
print(resp_len_stats)
print()
# Response time analysis
resp_time_stats = paired_mails['resp_time'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99])
print()
print(resp_time_stats)
print()
# Filter errors using response time
paired_mails = paired_mails[paired_mails['resp_time'] <= paired_mails['resp_time'].sort_values().iloc[-65]]
# Filter errors using message body lengths
paired_mails = paired_mails[paired_mails['input_body_len'] <= paired_mails['input_body_len'].sort_values().iloc[-3]]
# no errors caught using resp_body_len
# Response time analysis
resp_time_stats = paired_mails['resp_time'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99])
print()
print(resp_time_stats)
print()
paired_mails['input_body'] = paired_mails['input_body'].apply(filterMail)
paired_mails['resp_body'] = paired_mails['resp_body'].apply(filterMail)
paired_mails['resp_body'] = paired_mails['resp_body'].apply(filter_firm)  # firm-stripping helper defined above
sentence_pairs = paired_mails[['input_body','resp_body']]
sentence_pairs.to_csv('output/data_cleaning_nlp/q_and_a.txt', sep='\t', index=False, header=False)
paired_mails['input_body'] = paired_mails['input_body'].apply(lambda x: regex.sub(pattern='[`<@!*>-]', repl='', string=x))
paired_mails['resp_body'] = paired_mails['resp_body'].apply(lambda x: regex.sub(pattern='[`<@!*>-]', repl='', string=x))
paired_mails.to_csv('output/data_cleaning_nlp/paired_emails.csv', encoding='utf-8', index=False)
```
## NLP
```
## Tokenization using NLTK
# Define input (x) and target (y) sequences variables
x = [word_tokenize(msg, language='spanish') for msg in paired_mails['input_body'].values]
y = [word_tokenize(msg, language='spanish') for msg in paired_mails['resp_body'].values]
# Variables to store lenghts
hist_len_inp = []
hist_len_out = []
maxlen_inp = 0
maxlen_out = 0
# Define word counter
word_freqs_inp = collections.Counter()
word_freqs_out = collections.Counter()
num_recs = 0
for inp, out in zip(x, y):
# Get input and target sequence lenght
hist_len_inp.append(len(inp))
hist_len_out.append(len(out))
# Calculate max sequence lenght
if len(inp) > maxlen_inp: maxlen_inp = len(inp)
if len(out) > maxlen_out: maxlen_out = len(out)
# Count unique words
for words in inp:
word_freqs_inp[words] += 1
for words in out:
word_freqs_out[words] += 1
num_recs += 1
print()
print("maxlen input:", maxlen_inp)
print("maxlen output:", maxlen_out)
print("features (words) - input:", len(word_freqs_inp))
print("features (words) - output:", len(word_freqs_out))
print("number of records:", num_recs)
print()
plt.hist(hist_len_inp, bins =100)
plt.xlim((0,850))
plt.xticks(range(0,800,100))
plt.title('input_len')
plt.show()
plt.hist(hist_len_out, bins=100)
plt.xlim((0,850))
plt.xticks(range(0,800,100))
plt.title('output_len')
plt.show()
pk.dump(word_freqs_inp, open('output/data_cleaning_nlp/word_freqs_input.pk', 'wb'))
pk.dump(word_freqs_out, open('output/data_cleaning_nlp/word_freqs_output.pk', 'wb'))
pk.dump(x, open('output/data_cleaning_nlp/input_data.pk', 'wb'))
pk.dump(y, open('output/data_cleaning_nlp/target_data.pk', 'wb'))
```
---
```
from sklearn import *
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import cross_validation  # removed in newer scikit-learn; model_selection (imported below) is its replacement
from sklearn import tree
from sklearn import neighbors
from sklearn import svm
from sklearn import ensemble
from sklearn import cluster
from sklearn import model_selection
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns #for graphics and figure styling
import pandas as pd
data = pd.read_csv('adult.data.txt', sep=", ", encoding='latin1', header=None)
data.columns = ['Age', 'Status', 'Weight', 'Degree', 'Education', 'Married', 'Occupation', 'Relationship', 'Race', 'Sex', 'Gain', 'Loss', 'Hours', 'Country', 'Income']
data.head()
data.info()
from sklearn.preprocessing import LabelEncoder
data = data.apply(LabelEncoder().fit_transform)
dataIncomeColumn = data.Income
dataIncomeColumn.head()
data= data.drop('Income', axis=1)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(data)
data
standardized_data = scaler.transform(data)
data_Test = pd.read_csv('adult.test.txt', sep=", ", encoding='latin1', header=None)
data_Test.columns = ['Age', 'Status', 'Weight', 'Degree', 'Education', 'Married', 'Occupation', 'Relationship', 'Race', 'Sex', 'Gain', 'Loss', 'Hours', 'Country', 'Income']
enc = LabelEncoder()
data_Test = data_Test.apply(LabelEncoder().fit_transform)
data_TestIncomeColumn = data_Test.Income
data_Test=data_Test.drop('Income', axis=1)
data_Test
data_TestIncomeColumn.head()
standardized_test_data = scaler.transform(data_Test)
standardized_test_data
a=0
b=0
for col in data:
for i in data[col].isnull():
if i:
a+=1
b+=1
print('Missing data in',col,'is',a/b*100,'%')
a=0
b=0
##check for missing data
##so now, we have standardized_data and standardized_test_data that we can run our models on
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
censusIDM = RandomForestClassifier(max_depth=3, random_state=0)
from sklearn.feature_selection import RFE
rfe = RFE(censusIDM, n_features_to_select=6)
rfe.fit(standardized_data, dataIncomeColumn)
rfe.ranking_
predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#standardized_data for the training
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
#good=(predictOutput==dataIncomeColumn).sum();good - for training error#
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
good/(good+bad)*100
goodTest/(goodTest+badTest)*100
#Using the Random Forest Classifier on our Data, with depth 3.
censusIDM = RandomForestClassifier(max_depth=3, random_state=0)
frfe = RFE(censusIDM, n_features_to_select=3)
frfe.fit(standardized_data, dataIncomeColumn)
print(frfe.ranking_)
predict_TestOutput=frfe.predict(standardized_test_data)
predictOutput=frfe.predict(standardized_data)
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
#Using the Random Forest Classifier on our Data, with depth 7.
censusIDM = RandomForestClassifier(max_depth=7, random_state=0)
frfe = RFE(censusIDM, n_features_to_select=3)
frfe.fit(standardized_data, dataIncomeColumn)
print(frfe.ranking_)
predict_TestOutput=frfe.predict(standardized_test_data)
predictOutput=frfe.predict(standardized_data)
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
#Testing the Logistic Regression model on a number of different features to select, to see if the accuracy changes significantly or not
from sklearn.linear_model import LinearRegression
beerIDM = linear_model.LogisticRegression()
rfe2 = RFE(beerIDM, n_features_to_select=4)
rfe2.fit(standardized_data, dataIncomeColumn)
print(rfe2.ranking_)
predict_TestOutput=rfe2.predict(standardized_test_data)
predictOutput=rfe2.predict(standardized_data)
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
good/(good+bad)
n=50
precision=[0]*n
for i in range(1,n+1):
censusIDM = RandomForestClassifier(max_depth=i, random_state=0)
rfe = RFE(censusIDM, n_features_to_select=4)
rfe.fit(standardized_data, dataIncomeColumn)
predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();
good=(predictOutput==dataIncomeColumn).sum();
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();
bad=(predictOutput!=dataIncomeColumn).sum();
precision[i-1]=good/(good+bad);
fig=plt.figure(figsize=[20,10])
plt.plot(range(1,n+1),precision)
plt.xlabel('Depth', fontsize=20)
plt.ylabel('Precision', fontsize=20)
plt.title('RandomForestClassifier', fontsize=20)
fig.savefig('RandomForest2.pdf',dpi=200)
#Linear Model Lasso currently not working.
from sklearn import linear_model
clf = linear_model.Lasso(alpha=0.1)
rfe = RFE(clf, n_features_to_select=4)
rfe.fit(standardized_data, dataIncomeColumn)
print(rfe.ranking_)
predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
#Running the Perceptron Model on our data
from sklearn.linear_model import Perceptron
clf = linear_model.Perceptron()
rfe = RFE(clf, n_features_to_select=4)
rfe.fit(standardized_data, dataIncomeColumn)
print(rfe.ranking_)
predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
standardized_data2 = pd.DataFrame(standardized_data)
standardized_test_data2 = pd.DataFrame(standardized_test_data)
standardizedFrames = [standardized_data2, standardized_test_data2]
standardizedResult = pd.concat(standardizedFrames)
dataIncomeColumn2 = pd.DataFrame(dataIncomeColumn)
data_TestIncomeColumn2 = pd.DataFrame(data_TestIncomeColumn)
combinedIncomeColumn = [dataIncomeColumn2, data_TestIncomeColumn2]
combinedResult = pd.concat(combinedIncomeColumn)
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
#where L is in the loop
rng = np.random.RandomState(42)
yy = []
heldout = [0.95, 0.90, .85, .8, 0.75, .7, .65, 0.6, .55, 0.5, 0.45, 0.4, 0.35, .3, .25, .2, .15, .1, .05, 0.01]
xx = 1. - np.array(heldout)
rounds = 20
for i in heldout:
yy_ = []
for r in range(rounds):
#clf = SGDClassifier()
clf = SVR(kernel="linear")
standardized_dataL, standardized_test_dataL, dataIncomeColumnL, data_TestIncomeColumnL = \
train_test_split(standardizedResult, combinedResult, test_size=i, random_state=rng)
clf.fit(standardized_dataL, dataIncomeColumnL)
y_pred = clf.predict(standardized_test_dataL)
yy_.append(1 - sum(y_pred == data_TestIncomeColumnL.Income)/len(y_pred))
yy.append(np.mean(yy_))
plt.plot(xx, yy, label='Linear Regression')
plt.legend(loc="upper right")
plt.xlabel("Proportion train")
plt.ylabel("Test Error Rate")
plt.show()
fig=plt.figure(figsize=[20,10])
plt.plot(xx, yy, label='Linear Regression')
plt.legend(loc="upper right")
plt.xlabel("Proportion train")
plt.ylabel("Test Error Rate")
plt.show()
fig.savefig('test2png.pdf', dpi=100)
xx,yy
#k-nearest neighbors (NearestNeighbors is unsupervised and has no predict(),
#so the classifier variant is used for the accuracy comparison below)
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=2, algorithm='ball_tree').fit(standardized_data, dataIncomeColumn)
predict_TestOutput=clf.predict(standardized_test_data)
predictOutput=clf.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(verbose=0, random_state=0)
mlp.fit(standardized_data, dataIncomeColumn)
predict_TestOutput=mlp.predict(standardized_test_data)
predictOutput=mlp.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
#SVM
from sklearn.svm import SVR
clf = SVR(kernel="linear")
rfe4 = RFE(clf, n_features_to_select=5)
rfe4.fit(standardized_data, dataIncomeColumn)
predict_TestOutput=rfe.predict(standardized_test_data)
predictOutput=rfe.predict(standardized_data)
#Predictive Accuracy
goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)
good=(predictOutput==dataIncomeColumn).sum();print(good)
badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)
bad=(predictOutput!=dataIncomeColumn).sum();print(bad)
#Running The Random Forest OOB Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
#Running The Extra Trees OOB Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 1
max_estimators = 25
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
xss=[0]*3
yss=[0]*3
i=0
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
xss[i]=xs
yss[i]=ys
i=i+1
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
#Running The Extra Trees OOB Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("ExtraTreesClassifier, max_features='sqrt'",
ExtraTreesClassifier(warm_start=True, max_features='sqrt',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features='log2'",
ExtraTreesClassifier(warm_start=True, max_features='log2',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features=None",
ExtraTreesClassifier(warm_start=True, max_features=None,
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
xss=[0]*3
yss=[0]*3
i=0
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
xss[i]=xs
yss[i]=ys
i=i+1
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
plt.plot(xss[0],yss[0],'v');
plt.plot(xss[2],yss[2],'o');
plt.plot(xss[1],yss[1],'-')
yss=np.asarray(yss)
xss=np.asarray(xss)
help(plt.plot)
#Running The Random Forest OOB Error Rate Chart
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
y_pred = clf.predict(standardized_test_data)
test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))
error_rate[label].append((i, test_errorCLF))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
#Running The Extra Trees Test Error Plot
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("ExtraTreesClassifier, max_features='sqrt'",
ExtraTreesClassifier(warm_start=True, max_features='sqrt',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTreesClassifier, max_features='log2'",
ExtraTreesClassifier(warm_start=True, max_features='log2',
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE)),
("ExtraTrees, max_features=None",
ExtraTreesClassifier(warm_start=True, max_features=None,
oob_score=True, bootstrap=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 5
max_estimators = 300
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
y_pred = clf.predict(standardized_test_data)
test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))
error_rate[label].append((i, test_errorCLF))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("Test Error Rate")
plt.legend(loc="upper right")
plt.show()
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, max_features='sqrt',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 20
max_estimators = 30
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(standardized_data, dataIncomeColumn)
# Record the OOB error for each `n_estimators=i` setting.
y_pred = clf.predict(standardized_test_data)
test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))
error_rate[label].append((i, test_errorCLF))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("Test error rate")
plt.legend(loc="upper right")
plt.show()
```
---
# Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, web application servers, and six graphical user interface toolkits.
Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code.
Library documentation: <a>http://matplotlib.org/</a>
```
# needed to display the graphs
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 5, 10)
y = x ** 2
fig = plt.figure()
# left, bottom, width, height (range 0 to 1)
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x, y, 'r')
axes.set_xlabel('x')
axes.set_ylabel('y')
axes.set_title('title');
fig = plt.figure()
axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # main axes
axes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes
# main figure
axes1.plot(x, y, 'r')
axes1.set_xlabel('x')
axes1.set_ylabel('y')
axes1.set_title('title')
# insert
axes2.plot(y, x, 'g')
axes2.set_xlabel('y')
axes2.set_ylabel('x')
axes2.set_title('insert title');
fig, axes = plt.subplots(nrows=1, ncols=2)
for ax in axes:
ax.plot(x, y, 'r')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('title')
fig.tight_layout()
# example with a legend and latex symbols
fig, ax = plt.subplots()
ax.plot(x, x**2, label=r"$y = \alpha^2$")
ax.plot(x, x**3, label=r"$y = \alpha^3$")
ax.legend(loc=2) # upper left corner
ax.set_xlabel(r'$\alpha$', fontsize=18)
ax.set_ylabel(r'$y$', fontsize=18)
ax.set_title('title');
# line customization
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(x, x+1, color="blue", linewidth=0.25)
ax.plot(x, x+2, color="blue", linewidth=0.50)
ax.plot(x, x+3, color="blue", linewidth=1.00)
ax.plot(x, x+4, color="blue", linewidth=2.00)
# possible linestyle options: '-', '--', '-.', ':', 'steps'
ax.plot(x, x+5, color="red", lw=2, linestyle='-')
ax.plot(x, x+6, color="red", lw=2, ls='-.')
ax.plot(x, x+7, color="red", lw=2, ls=':')
# custom dash
line, = ax.plot(x, x+8, color="black", lw=1.50)
line.set_dashes([5, 10, 15, 10]) # format: line length, space length, ...
# possible marker symbols: marker = '+', 'o', '*', 's', ',', '.',
# '1', '2', '3', '4', ...
# note: '*' is not a valid linestyle, so a dashed line is used with each marker
ax.plot(x, x+ 9, color="green", lw=2, ls='--', marker='+')
ax.plot(x, x+10, color="green", lw=2, ls='--', marker='o')
ax.plot(x, x+11, color="green", lw=2, ls='--', marker='s')
ax.plot(x, x+12, color="green", lw=2, ls='--', marker='1')
# marker size and color
ax.plot(x, x+13, color="purple", lw=1, ls='-', marker='o', markersize=2)
ax.plot(x, x+14, color="purple", lw=1, ls='-', marker='o', markersize=4)
ax.plot(x, x+15, color="purple", lw=1, ls='-', marker='o', markersize=8,
markerfacecolor="red")
ax.plot(x, x+16, color="purple", lw=1, ls='-', marker='s', markersize=8,
markerfacecolor="yellow", markeredgewidth=2, markeredgecolor="blue");
# axis controls
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].plot(x, x**2, x, x**3)
axes[0].set_title("default axes ranges")
axes[1].plot(x, x**2, x, x**3)
axes[1].axis('tight')
axes[1].set_title("tight axes")
axes[2].plot(x, x**2, x, x**3)
axes[2].set_ylim([0, 60])
axes[2].set_xlim([2, 5])
axes[2].set_title("custom axes range");
# scaling
fig, axes = plt.subplots(1, 2, figsize=(10,4))
axes[0].plot(x, x**2, x, np.exp(x))
axes[0].set_title("Normal scale")
axes[1].plot(x, x**2, x, np.exp(x))
axes[1].set_yscale("log")
axes[1].set_title("Logarithmic scale (y)");
# axis grid
fig, axes = plt.subplots(1, 2, figsize=(10,3))
# default grid appearance
axes[0].plot(x, x**2, x, x**3, lw=2)
axes[0].grid(True)
# custom grid appearance
axes[1].plot(x, x**2, x, x**3, lw=2)
axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)
# twin axes example
fig, ax1 = plt.subplots()
ax1.plot(x, x**2, lw=2, color="blue")
ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue")
for label in ax1.get_yticklabels():
label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, x**3, lw=2, color="red")
ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red")
for label in ax2.get_yticklabels():
label.set_color("red")
# other plot styles
xx = np.linspace(-0.75, 1., 100)
n = np.array([0,1,2,3,4,5])
fig, axes = plt.subplots(1, 4, figsize=(12,3))
axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))
axes[0].set_title("scatter")
axes[1].step(n, n**2, lw=2)
axes[1].set_title("step")
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[2].set_title("bar")
axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5);
axes[3].set_title("fill_between");
# histograms
n = np.random.randn(100000)
fig, axes = plt.subplots(1, 2, figsize=(12,4))
axes[0].hist(n)
axes[0].set_title("Default histogram")
axes[0].set_xlim((min(n), max(n)))
axes[1].hist(n, cumulative=True, bins=50)
axes[1].set_title("Cumulative detailed histogram")
axes[1].set_xlim((min(n), max(n)));
# annotations
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
ax.text(0.15, 0.2, r"$y=x^2$", fontsize=20, color="blue")
ax.text(0.65, 0.1, r"$y=x^3$", fontsize=20, color="green");
# color map
alpha = 0.7
phi_ext = 2 * np.pi * 0.5
def flux_qubit_potential(phi_m, phi_p):
    return (+ alpha - 2 * np.cos(phi_p) * np.cos(phi_m) -
            alpha * np.cos(phi_ext - 2 * phi_p))
phi_m = np.linspace(0, 2 * np.pi, 100)
phi_p = np.linspace(0, 2 * np.pi, 100)
X, Y = np.meshgrid(phi_p, phi_m)
Z = flux_qubit_potential(X, Y).T
from matplotlib import cm  # colormaps (cm.RdBu, cm.coolwarm) used here and below
fig, ax = plt.subplots()
p = ax.pcolor(X / (2 * np.pi), Y / (2 * np.pi), Z,
              cmap=cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
from mpl_toolkits.mplot3d.axes3d import Axes3D
# surface plots
fig = plt.figure(figsize=(14,6))
# `ax` is a 3D-aware axis instance because of the projection='3d'
# keyword argument to add_subplot
ax = fig.add_subplot(1, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)
# surface_plot with color grading and color bar
ax = fig.add_subplot(1, 2, 2, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=1, cstride=1,
cmap=cm.coolwarm, linewidth=0, antialiased=False)
cb = fig.colorbar(p, shrink=0.5)
# wire frame
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)
# contour plot with projections
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1, projection='3d')
ax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)
cset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=cm.coolwarm)
ax.set_xlim3d(-np.pi, 2*np.pi);
ax.set_ylim3d(0, 3*np.pi);
ax.set_zlim3d(-np.pi, 2*np.pi);
```
---
##### Copyright 2018 The TF-Agents Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Environments
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/2_environments_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/2_environments_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/2_environments_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/2_environments_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Introduction
The goal of Reinforcement Learning (RL) is to design agents that learn by interacting with an environment. In the standard RL setting, the agent receives an observation at every time step and chooses an action. The action is applied to the environment and the environment returns a reward and a new observation. The agent trains a policy to choose actions to maximize the sum of rewards, also known as return.
In TF-Agents, environments can be implemented either in Python or TensorFlow. Python environments are usually easier to implement, understand, and debug, but TensorFlow environments are more efficient and allow natural parallelization. The most common workflow is to implement an environment in Python and use one of our wrappers to automatically convert it into TensorFlow.
Let us look at Python environments first. TensorFlow environments follow a very similar API.
## Setup
If you haven't installed tf-agents or gym yet, run:
```
!pip install tf-agents
!pip install 'gym==0.10.11'
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.environments import utils
from tf_agents.specs import array_spec
from tf_agents.environments import wrappers
from tf_agents.environments import suite_gym
from tf_agents.trajectories import time_step as ts
tf.compat.v1.enable_v2_behavior()
```
## Python Environments
Python environments have a `step(action) -> next_time_step` method that applies an action to the environment, and returns the following information about the next step:
1. `observation`: This is the part of the environment state that the agent can observe to choose its actions at the next step.
2. `reward`: The agent is learning to maximize the sum of these rewards across multiple steps.
3. `step_type`: Interactions with the environment are usually part of a sequence/episode. e.g. multiple moves in a game of chess. step_type can be either `FIRST`, `MID` or `LAST` to indicate whether this time step is the first, intermediate or last step in a sequence.
4. `discount`: This is a float representing how much to weight the reward at the next time step relative to the reward at the current time step.
These are grouped into a named tuple `TimeStep(step_type, reward, discount, observation)`.
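For concreteness, here is a minimal sketch (using the CartPole environment that is also loaded later in this tutorial) that prints the four fields of a `TimeStep`:
```
from tf_agents.environments import suite_gym

env = suite_gym.load('CartPole-v0')
time_step = env.reset()
# The four fields of the TimeStep named tuple described above.
print(time_step.step_type, time_step.reward, time_step.discount)
print(time_step.observation)
```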
The interface that all python environments must implement is in `environments/py_environment.PyEnvironment`. The main methods are:
```
class PyEnvironment(object):
def reset(self):
"""Return initial_time_step."""
self._current_time_step = self._reset()
return self._current_time_step
def step(self, action):
"""Apply action and return new time_step."""
if self._current_time_step is None:
return self.reset()
self._current_time_step = self._step(action)
return self._current_time_step
def current_time_step(self):
return self._current_time_step
def time_step_spec(self):
"""Return time_step_spec."""
@abc.abstractmethod
def observation_spec(self):
"""Return observation_spec."""
@abc.abstractmethod
def action_spec(self):
"""Return action_spec."""
@abc.abstractmethod
def _reset(self):
"""Return initial_time_step."""
@abc.abstractmethod
def _step(self, action):
"""Apply action and return new time_step."""
self._current_time_step = self._step(action)
return self._current_time_step
```
In addition to the `step()` method, environments also provide a `reset()` method that starts a new sequence and provides an initial `TimeStep`. It is not necessary to call the `reset` method explicitly. We assume that environments reset automatically, either when they get to the end of an episode or when step() is called the first time.
Note that subclasses do not implement `step()` or `reset()` directly. They instead override the `_step()` and `_reset()` methods. The time steps returned from these methods will be cached and exposed through `current_time_step()`.
The `observation_spec` and the `action_spec` methods return a nest of `(Bounded)ArraySpecs` that describe the name, shape, datatype and ranges of the observations and actions respectively.
In TF-Agents we repeatedly refer to nests, which are defined as any tree-like structure composed of lists, tuples, named tuples, or dictionaries. These can be arbitrarily composed to maintain the structure of observations and actions. We have found this to be very useful for more complex environments where you have many observations and actions.
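As a small illustration, the spec below (a made-up example, not taken from any particular environment) is a nest: a dictionary whose leaves are `ArraySpec`s describing two components of an observation.
```
import numpy as np
from tf_agents.specs import array_spec

# A dict is a valid nest; each leaf describes one component of the observation.
nested_observation_spec = {
    'position': array_spec.BoundedArraySpec(
        shape=(3,), dtype=np.float32, minimum=-1.0, maximum=1.0, name='position'),
    'camera': array_spec.ArraySpec(shape=(84, 84, 3), dtype=np.uint8, name='camera'),
}
print(nested_observation_spec)
```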
### Using Standard Environments
TF Agents has built-in wrappers for many standard environments like the OpenAI Gym, DeepMind-control and Atari, so that they follow our `py_environment.PyEnvironment` interface. These wrapped environments can be easily loaded using our environment suites. Let's load the CartPole environment from the OpenAI gym and look at the action and time_step_spec.
```
environment = suite_gym.load('CartPole-v0')
print('action_spec:', environment.action_spec())
print('time_step_spec.observation:', environment.time_step_spec().observation)
print('time_step_spec.step_type:', environment.time_step_spec().step_type)
print('time_step_spec.discount:', environment.time_step_spec().discount)
print('time_step_spec.reward:', environment.time_step_spec().reward)
```
So we see that the environment expects actions of type `int64` in [0, 1] and returns `TimeSteps` where the observations are a `float32` vector of length 4 and discount factor is a `float32` in [0.0, 1.0]. Now, let's try to take a fixed action `(1,)` for a whole episode.
```
action = np.array(1, dtype=np.int32)
time_step = environment.reset()
print(time_step)
while not time_step.is_last():
time_step = environment.step(action)
print(time_step)
```
### Creating your own Python Environment
For many clients, a common use case is to apply one of the standard agents (see agents/) in TF-Agents to their problem. To do this, they have to frame their problem as an environment. So let us look at how to implement an environment in Python.
Let's say we want to train an agent to play the following (Black Jack inspired) card game:
1. The game is played using an infinite deck of cards numbered 1...10.
2. At every turn the agent can do 2 things: get a new random card, or stop the current round.
3. The goal is to get the sum of your cards as close to 21 as possible at the end of the round, without going over.
An environment that represents the game could look like this:
1. Actions: We have 2 actions. Action 0: get a new card, and Action 1: terminate the current round.
2. Observations: Sum of the cards in the current round.
3. Reward: The objective is to get as close to 21 as possible without going over, so we can achieve this using the following reward at the end of the round:
sum_of_cards - 21 if sum_of_cards <= 21, else -21
```
class CardGameEnv(py_environment.PyEnvironment):
def __init__(self):
self._action_spec = array_spec.BoundedArraySpec(
shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')
self._observation_spec = array_spec.BoundedArraySpec(
shape=(1,), dtype=np.int32, minimum=0, name='observation')
self._state = 0
self._episode_ended = False
def action_spec(self):
return self._action_spec
def observation_spec(self):
return self._observation_spec
def _reset(self):
self._state = 0
self._episode_ended = False
return ts.restart(np.array([self._state], dtype=np.int32))
def _step(self, action):
if self._episode_ended:
# The last action ended the episode. Ignore the current action and start
# a new episode.
return self.reset()
# Make sure episodes don't go on forever.
if action == 1:
self._episode_ended = True
elif action == 0:
new_card = np.random.randint(1, 11)
self._state += new_card
else:
raise ValueError('`action` should be 0 or 1.')
if self._episode_ended or self._state >= 21:
reward = self._state - 21 if self._state <= 21 else -21
return ts.termination(np.array([self._state], dtype=np.int32), reward)
else:
return ts.transition(
np.array([self._state], dtype=np.int32), reward=0.0, discount=1.0)
```
Let's make sure we defined the above environment correctly. When creating your own environment, you must make sure the observations and time_steps generated follow the correct shapes and types as defined in your specs. These are used to generate the TensorFlow graph and as such can create hard-to-debug problems if we get them wrong.
To validate our environment we will use a random policy to generate actions and we will iterate over 5 episodes to make sure things are working as intended. An error is raised if we receive a time_step that does not follow the environment specs.
```
environment = CardGameEnv()
utils.validate_py_environment(environment, episodes=5)
```
Now that we know the environment is working as intended, let's run this environment using a fixed policy: ask for 3 cards and then end the round.
```
get_new_card_action = np.array(0, dtype=np.int32)
end_round_action = np.array(1, dtype=np.int32)
environment = CardGameEnv()
time_step = environment.reset()
print(time_step)
cumulative_reward = time_step.reward
for _ in range(3):
time_step = environment.step(get_new_card_action)
print(time_step)
cumulative_reward += time_step.reward
time_step = environment.step(end_round_action)
print(time_step)
cumulative_reward += time_step.reward
print('Final Reward = ', cumulative_reward)
```
### Environment Wrappers
An environment wrapper takes a python environment and returns a modified version of the environment. Both the original environment and the modified environment are instances of `py_environment.PyEnvironment`, and multiple wrappers can be chained together.
Some common wrappers can be found in `environments/wrappers.py`. For example:
1. `ActionDiscretizeWrapper`: Converts a continuous action space to a discrete action space.
2. `RunStats`: Captures run statistics of the environment such as number of steps taken, number of episodes completed etc.
3. `TimeLimit`: Terminates the episode after a fixed number of steps.
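For instance, here is a sketch of applying the `TimeLimit` wrapper to a loaded environment (assuming its constructor takes the environment and a `duration` in steps); since the result is itself a `PyEnvironment`, further wrappers could be chained on top:
```
env = suite_gym.load('CartPole-v0')
# Episodes will now be cut off after at most 100 steps.
limited_env = wrappers.TimeLimit(env, duration=100)
print(isinstance(limited_env, py_environment.PyEnvironment))
```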
#### Example 1: Action Discretize Wrapper
`Pendulum-v0` is a Gym environment that accepts continuous actions in the range `[-2, 2]`. If we want to train a discrete action agent such as DQN on this environment, we have to discretize (quantize) the action space. This is exactly what the `ActionDiscretizeWrapper` does. Compare the `action_spec` before and after wrapping:
```
env = suite_gym.load('Pendulum-v0')
print('Action Spec:', env.action_spec())
discrete_action_env = wrappers.ActionDiscretizeWrapper(env, num_actions=5)
print('Discretized Action Spec:', discrete_action_env.action_spec())
```
The wrapped `discrete_action_env` is an instance of `py_environment.PyEnvironment` and can be treated like a regular python environment.
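As a quick usage sketch (an illustrative assumption: we build a valid action from the wrapped spec itself rather than hard-coding its shape and dtype), stepping the wrapped environment now takes an integer bin index:
```
spec = discrete_action_env.action_spec()
# Pick the middle of the 5 discrete bins; match the spec's shape and dtype.
action = np.full(spec.shape, 2, dtype=spec.dtype)
time_step = discrete_action_env.reset()
time_step = discrete_action_env.step(action)
print(time_step.reward)
```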
## TensorFlow Environments
The interface for TF environments is defined in `environments/tf_environment.TFEnvironment` and looks very similar to the Python environments. TF Environments differ from python envs in a couple of ways:
* They generate tensor objects instead of arrays
* TF environments add a batch dimension to the tensors generated when compared to the specs.
Converting the python environments into TFEnvs allows tensorflow to parallelize operations. For example, one could define a `collect_experience_op` that collects data from the environment and adds to a `replay_buffer`, and a `train_op` that reads from the `replay_buffer` and trains the agent, and run them in parallel naturally in TensorFlow.
```
class TFEnvironment(object):
def time_step_spec(self):
"""Describes the `TimeStep` tensors returned by `step()`."""
def observation_spec(self):
"""Defines the `TensorSpec` of observations provided by the environment."""
def action_spec(self):
"""Describes the TensorSpecs of the action expected by `step(action)`."""
def reset(self):
"""Returns the current `TimeStep` after resetting the Environment."""
return self._reset()
def current_time_step(self):
"""Returns the current `TimeStep`."""
return self._current_time_step()
def step(self, action):
"""Applies the action and returns the new `TimeStep`."""
return self._step(action)
@abc.abstractmethod
def _reset(self):
"""Returns the current `TimeStep` after resetting the Environment."""
@abc.abstractmethod
def _current_time_step(self):
"""Returns the current `TimeStep`."""
@abc.abstractmethod
def _step(self, action):
"""Applies the action and returns the new `TimeStep`."""
```
The `current_time_step()` method returns the current time_step and initializes the environment if needed.
The `reset()` method forces a reset in the environment and returns the current_step.
If the `action` doesn't depend on the previous `time_step` a `tf.control_dependency` is needed in `Graph` mode.
For now, let us look at how `TFEnvironments` are created.
### Creating your own TensorFlow Environment
This is more complicated than creating environments in Python, so we will not cover it in this colab. An example is available [here](https://github.com/tensorflow/agents/blob/master/tf_agents/environments/tf_environment_test.py). The more common use case is to implement your environment in Python and wrap it in TensorFlow using our `TFPyEnvironment` wrapper (see below).
### Wrapping a Python Environment in TensorFlow
We can easily wrap any Python environment into a TensorFlow environment using the `TFPyEnvironment` wrapper.
```
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
print(isinstance(tf_env, tf_environment.TFEnvironment))
print("TimeStep Specs:", tf_env.time_step_spec())
print("Action Specs:", tf_env.action_spec())
```
Note the specs are now of type: `(Bounded)TensorSpec`.
### Usage Examples
#### Simple Example
```
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
# reset() creates the initial time_step after resetting the environment.
time_step = tf_env.reset()
num_steps = 3
transitions = []
reward = 0
for i in range(num_steps):
action = tf.constant([i % 2])
# applies the action and returns the new TimeStep.
next_time_step = tf_env.step(action)
transitions.append([time_step, action, next_time_step])
reward += next_time_step.reward
time_step = next_time_step
np_transitions = tf.nest.map_structure(lambda x: x.numpy(), transitions)
print('\n'.join(map(str, np_transitions)))
print('Total reward:', reward.numpy())
```
#### Whole Episodes
```
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
time_step = tf_env.reset()
rewards = []
steps = []
num_episodes = 5
for _ in range(num_episodes):
episode_reward = 0
episode_steps = 0
while not time_step.is_last():
action = tf.random.uniform([1], 0, 2, dtype=tf.int32)
time_step = tf_env.step(action)
episode_steps += 1
episode_reward += time_step.reward.numpy()
rewards.append(episode_reward)
steps.append(episode_steps)
time_step = tf_env.reset()
num_steps = np.sum(steps)
avg_length = np.mean(steps)
avg_reward = np.mean(rewards)
print('num_episodes:', num_episodes, 'num_steps:', num_steps)
print('avg_length', avg_length, 'avg_reward:', avg_reward)
```
# `Python Programming Practicum`
<br>
## `Lesson 2: User-Defined and Built-in Functions, Iterators and Generators`
<br><br>
### `Murat Apishev ([email protected])`
#### `Moscow, 2021`
### `The range and enumerate functions`
```
r = range(2, 10, 3)
print(type(r))
for e in r:
print(e, end=' ')
for index, element in enumerate(list('abcdef')):
print(index, element, end=' ')
```
### `The zip function`
```
z = zip([1, 2, 3], 'abc')
print(type(z))
for a, b in z:
print(a, b, end=' ')
for e in zip('abcdef', 'abc'):
print(e)
for a, b, c, d in zip('abc', [1,2,3], [True, False, None], 'xyz'):
print(a, b, c, d)
```
### `Defining your own functions`
```
def function(arg_1, arg_2=None):
print(arg_1, arg_2)
function(10)
function(10, 20)
```
A function is also an object; its name is just a symbolic reference:
```
f = function
f(10)
print(function is f)
```
### `Defining your own functions`
```
retval = f(10)
print(retval)
def factorial(n):
return n * factorial(n - 1) if n > 1 else 1 # recursion
print(factorial(1))
print(factorial(2))
print(factorial(4))
```
### `Passing arguments to a function`
Parameters in Python are always passed by reference
```
def function(scalar, lst):
scalar += 10
print(f'Scalar in function: {scalar}')
lst.append(None)
print(f'List in function: {lst}')
s, l = 5, []
function(s, l)
print(s, l)
```
### `Passing arguments to a function`
```
def f(a, *args):
print(type(args))
print([v for v in [a] + list(args)])
f(10, 2, 6, 8)
def f(*args, a):
print([v for v in [a] + list(args)])
print()
f(2, 6, 8, a=10)
def f(a, *args, **kw):
print(type(kw))
print([v for v in [a] + list(args) + [(k, v) for k, v in kw.items()]])
f(2, *(6, 8), **{'arg1': 1, 'arg2': 2})
```
### `Variable scopes`
Python has four main scope levels:
- Built-in (builtins) - this level contains all built-in objects (functions, exception classes, etc.)<br><br>
- Global within a module (global) - everything defined at the top level of the module's code<br><br>
- Enclosing function (enclosed) - everything defined in an enclosing (outer) function<br><br>
- Local function (local) - everything defined in an inner function
<br><br>
There are also scopes for loop variables, list comprehensions, and so on.
### `The LEGB scope resolution rule for reads`
```
def outer_func(x):
def inner_func(x):
return len(x)
return inner_func(x)
print(outer_func([1, 2]))
```
Who defines the name `len`?
- there is no such name at the level of the nested function, so we look one level up
- there is no such name at the level of the enclosing function, so we look one level up
- there is no such name at the module level, so we look one level up
- the name exists at the builtins level, so we use it
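To see the rule in action, here is a small illustrative sketch: a name defined at an inner level shadows the builtin during lookup.
```
def outer_func(x):
    def inner_func(x):
        len = lambda seq: -1     # a local name now shadows builtins.len
        return len(x)
    return inner_func(x)

print(outer_func([1, 2]))        # -1: the local definition is found first
```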
### `You can take a look at builtins`
```
import builtins
counter = 0
lst = []
for name in dir(builtins):
if name[0].islower():
lst.append(name)
counter += 1
if counter == 5:
break
lst
```
Incidentally, the same thing can be done with more pythonic code:
```
list(filter(lambda x: x[0].islower(), dir(builtins)))[: 5]
```
### `Local and global variables`
```
x = 2
def func():
print('Inside: ', x) # read
func()
print('Outside: ', x)
x = 2
def func():
x += 1 # write
print('Inside: ', x)
func() # UnboundLocalError: local variable 'x' referenced before assignment
print('Outside: ', x)
x = 2
def func():
x = 3
x += 1
print('Inside: ', x)
func()
print('Outside: ', x)
```
### `The global keyword`
```
x = 2
def func():
global x
x += 1 # write
print('Inside: ', x)
func()
print('Outside: ', x)
x = 2
def func(x):
x += 1
print('Inside: ', x)
return x
x = func(x)
print('Outside: ', x)
```
### `The nonlocal keyword`
```
a = 0
def out_func():
b = 10
def mid_func():
c = 20
def in_func():
global a
a += 100
nonlocal c
c += 100
nonlocal b
b += 100
print(a, b, c)
in_func()
mid_func()
out_func()
```
__Main takeaway:__ do not abuse side effects when working with variables from enclosing scopes
### `An example of nested functions: closures`
- In most cases nested functions are not needed; a flat hierarchy is both simpler and clearer
- One exception is factory functions (closures)
```
def function_creator(n):
def function(x):
return x ** n
return function
f = function_creator(5)
f(2)
```
The function object referenced by `f` stores the value of `n` inside itself
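You can peek at that stored value through the function's closure cells (a small illustrative sketch using `function_creator` from above):
```
f = function_creator(5)
print(f.__code__.co_freevars)           # ('n',) - names captured from the enclosing scope
print(f.__closure__[0].cell_contents)   # 5 - the value of n kept in the closure cell
```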
### `Anonymous functions`
- `def` is not the only way to declare a function
- `lambda` creates an anonymous (lambda) function
Such functions are often used where a definition via `def` cannot be written syntactically
```
def func(x): return x ** 2
func(6)
lambda_func = lambda x: x ** 2 # should be an expression
lambda_func(6)
def func(x): print(x)
func(6)
lambda_func = lambda x: print(x ** 2) # as print is function in Python 3.*
lambda_func(6)
```
### `The built-in sorted function`
```
lst = [5, 2, 7, -9, -1]
def abs_comparator(x):
return abs(x)
print(sorted(lst, key=abs_comparator))
sorted(lst, key=lambda x: abs(x))
sorted(lst, key=lambda x: abs(x), reverse=True)
```
### `The built-in filter function`
```
lst = [5, 2, 7, -9, -1]
f = filter(lambda x: x < 0, lst) # True condition
type(f) # iterator
list(f)
```
### `The built-in map function`
```
lst = [5, 2, 7, -9, -1]
m = map(lambda x: abs(x), lst)
type(m) # iterator
list(m)
```
### `Comparing the two approaches once more`
Let's write a dot product function in imperative and functional styles:
```
def dot_product_imp(v, w):
result = 0
for i in range(len(v)):
result += v[i] * w[i]
return result
dot_product_func = lambda v, w: sum(map(lambda x: x[0] * x[1], zip(v, w)))
print(dot_product_imp([1, 2, 3], [4, 5, 6]))
print(dot_product_func([1, 2, 3], [4, 5, 6]))
```
### `The reduce function`
`functools` is a standard module with other higher-order functions.
For now we will consider only the `reduce` function:
```
from functools import reduce
lst = list(range(1, 10))
reduce(lambda x, y: x * y, lst)
```
### `Iteration and the iter and next functions`
```
r = range(3)
for e in r:
print(e)
it = iter(r) # r.__iter__() - gives us an iterator
print(next(it))
print(it.__next__())
print(next(it))
print(next(it))
```
### `Iterators are often used implicitly`
How a `for` loop looks to us:
```
for i in 'seq':
print(i)
```
How it actually works:
```
iterator = iter('seq')
while True:
try:
i = next(iterator)
print(i)
except StopIteration:
break
```
### `Generators`
- Generators, like iterators, are intended for iterating over a collection, but they are built somewhat differently
- They are defined using functions with the `yield` operator or generator expressions, rather than calls to `iter()` and `next()`
- A generator has internal mutable state in the form of local variables, which it stores automatically
- A generator is a simpler way to create your own iterator than defining one directly
- All generators are iterators, but not vice versa<br><br>
- Examples of generator-style functions:
- `zip`
- `enumerate`
- `reversed`
- `map`
- `filter`
### `The yield keyword`
- `yield` is a keyword similar in meaning to `return`<br><br>
- But it is used in functions that return generators<br><br>
- When such a function is called, its body is not executed; the function only returns a generator<br><br>
- On the first run the function executes from the beginning up to the `yield`<br><br>
- After yielding, the function's state is preserved<br><br>
- On the next call the loop iterates and the next value is returned<br><br>
- And so on, until the loop and every `yield` in the function body are exhausted<br><br>
- After that the generator becomes empty
### `A generator example`
```
def my_range(n):
yield 'You really want to run this generator?'
i = -1
while i < n:
i += 1
yield i
gen = my_range(3)
while True:
try:
print(next(gen), end=' ')
except StopIteration: # we want to catch this type of exceptions
break
for e in my_range(3):
print(e, end=' ')
```
### `A peculiarity of range`
`range` is not a generator, although it looks like one, since it does not store the entire sequence
```
print('__next__' in dir(zip([], [])))
print('__next__' in dir(range(3)))
```
Useful properties:
- `range` objects are immutable (they can be dictionary keys)
- they have useful attributes (`len`, `index`, `__getitem__`)
- they can be iterated over multiple times (see the sketch below)
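A short sketch of these properties:
```
r = range(10)
print(len(r), r.index(7), r[2])      # len, index and item access all work
print(list(r)[:3], list(r)[:3])      # the same range object can be iterated repeatedly
d = {r: 'cached'}                    # immutable and hashable, so usable as a dict key
print(d[range(10)])                  # equal ranges hash equally
```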
### `The itertools module`
- The module is a set of tools for working with iterators and sequences<br><br>
- It contains three main types of iterators:<br><br>
- infinite iterators
- finite iterators
- combinatorial iterators<br><br>
- It lets you efficiently solve small tasks such as:<br><br>
- iterating over an infinite stream
- flattening nested lists into a single list
- generating combinatorial enumerations of combinations of sequence elements
- accumulating and aggregating data within a sequence
### `The itertools module: examples`
```
from itertools import count
for i in count(start=0):
print(i, end=' ')
if i == 5:
break
from itertools import cycle
count = 0
for item in cycle('XYZ'):
if count > 4:
break
print(item, end=' ')
count += 1
```
### `The itertools module: examples`
```
from itertools import accumulate
for i in accumulate(range(1, 5), lambda x, y: x * y):
print(i)
from itertools import chain
for i in chain([1, 2], [3], [4]):
print(i)
```
### `The itertools module: examples`
```
from itertools import groupby
vehicles = [('Ford', 'Taurus'), ('Dodge', 'Durango'),
('Chevrolet', 'Cobalt'), ('Ford', 'F150'),
('Dodge', 'Charger'), ('Ford', 'GT')]
sorted_vehicles = sorted(vehicles)
for key, group in groupby(sorted_vehicles, lambda x: x[0]):
for maker, model in group:
print('{model} is made by {maker}'.format(model=model, maker=maker))
print ("**** END OF THE GROUP ***\n")
```
## `Thank you for your attention!`
# Visualizing and Analyzing Jigsaw
```
import pandas as pd
import re
import numpy as np
```
In the previous section, we explored how to generate topics from a textual dataset using LDA. But how can this be used in an application?
In this section, we will look at possible ways to read the topics and understand how they can be used.
We will now import the preloaded data of the LDA result that was achieved in the previous section.
```
df = pd.read_csv("https://raw.githubusercontent.com/dudaspm/LDA_Bias_Data/main/topics.csv")
df.head()
```
We will visualize these results to understand what major themes are present in them.
```
%%html
<iframe src='https://flo.uri.sh/story/941631/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/story/941631/?utm_source=embed&utm_campaign=story/941631' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>
```
### An Overview of the analysis
From the above visualization, one anomaly we come across is that the dataset we are examining is supposed to relate to people with physical, mental and learning disabilities. Unfortunately, based on the topics that were extracted, we notice only a small subset of words related to this theme.
Topic 2 has words that address themes related to what we were expecting the dataset to contain. But the major themes noticed in the top 5 topics are mainly political terms.
(The Top 10 topics show themes related to Religion as well, which is quite interesting.)
LDA hence helped us understand what conversations the dataset consisted of.
From the word collection, we also notice certain words, such as 'kill', that can be categorized as 'toxic'. To analyse this further, we can classify each word with an NLP toxicity classifier.
To demonstrate an example of a toxicity analysis framework, the code below shows the workings of the Unitary library in Python.{cite}`Detoxify`
This library provides a toxicity score (on a scale of 0 to 1) for the sentence that is passed to it.
```
headers = {"Authorization": f"Bearer api_ZtUEFtMRVhSLdyTNrRAmpxXgMAxZJpKLQb"}
```
To get access to this software, you will need to get an API KEY at https://huggingface.co/unitary/toxic-bert
Here is an example of what this would look like.
```python
headers = {"Authorization": f"Bearer api_XXXXXXXXXXXXXXXXXXXXXXXXXXX"}
```
```
import requests
API_URL = "https://api-inference.huggingface.co/models/unitary/toxic-bert"
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
query({"inputs": "addict"})
```
You can replace \<insert word here> in the code with your own words or sentences to look at the results that are generated.
This example can provide an idea as to how ML can be used for toxicity analysis.
```
query({"inputs": "<insert word here>"})
%%html
<iframe src='https://flo.uri.sh/story/941681/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/story/941681/?utm_source=embed&utm_campaign=story/941681' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>
```
#### The Bias
The visualization shows how contextually toxic words emerge as important words within various topics related to this dataset. As a result, any Natural Language Processing model trained on this dataset may produce a skewed analysis for the population in consideration, i.e. people with mental, physical and learning disabilities. This can lead to very discriminatory classifications.
##### An Example
To illustrate the impact better, we take the words most closely associated with the word 'mental' from the results. Below is a network graph that shows the commonly associated words. Words such as 'Kill' and 'Gun' appear with the closest association. This can lead to the machine contextualizing the word 'mental' as being associated with such words.
```
%%html
<iframe src='https://flo.uri.sh/visualisation/6867000/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/visualisation/6867000/?utm_source=embed&utm_campaign=visualisation/6867000' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>
```
It is hence important to be aware of the dataset that is being used to analyse a specific population. With LDA, we were able to understand that this dataset cannot be used as a good representation of the disabled community. To bring about a movement of unbiased AI, we need to perform such preliminary analyses and more, so as not to cause unintended discrimination.
## The Dashboard
Below is the complete data visualization dashboard of the topic analysis. Feel free to experiment and compare various labels to your liking.
```
%%html
<iframe src='https://flo.uri.sh/visualisation/6856937/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/visualisation/6856937/?utm_source=embed&utm_campaign=visualisation/6856937' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>
```
## Thank you!
We thank you for your time!
```
%pylab inline
import re
from pathlib import Path
import pandas as pd
import seaborn as sns
datdir = Path('data')
figdir = Path('plots')
figdir.mkdir(exist_ok=True)
mpl.rcParams.update({'figure.figsize': (2.5,1.75), 'figure.dpi': 300,
'axes.spines.right': False, 'axes.spines.top': False,
'axes.titlesize': 10, 'axes.labelsize': 10,
'legend.fontsize': 10, 'legend.title_fontsize': 10,
'xtick.labelsize': 8, 'ytick.labelsize': 8,
'font.family': 'sans-serif', 'font.sans-serif': ['Arial'],
'svg.fonttype': 'none', 'lines.solid_capstyle': 'round'})
```
# Figure 1 - Overview
```
df = pd.read_csv(datdir / 'fig_1.csv')
scores = df[list(map(str, range(20)))].values
selected = ~np.isnan(df['Selected'].values)
gens_sel = np.nonzero(selected)[0]
scores_sel = np.array([np.max(scores[g]) for g in gens_sel])
ims_sel = [plt.imread(str(datdir / 'images' / 'overview' / f'gen{gen:03d}.png'))
for gen in gens_sel]
ims_sel = np.array(ims_sel)
print('gens to visualize:', gens_sel)
with np.printoptions(precision=2, suppress=True):
print('corresponding scores:', scores_sel)
print('ims_sel shape:', ims_sel.shape)
c0 = array((255,92,0)) / 255 # highlight color
figure(figsize=(2.5, 0.8), dpi=150)
plot(scores.mean(1))
xlim(0, 500)
ylim(bottom=0)
xticks((250,500))
yticks((0,50))
gca().set_xticks(np.nonzero(selected)[0], minor=True)
gca().tick_params(axis='x', which='minor', colors=c0, width=1)
title('CaffeNet layer fc8, unit 1')
xlabel('Generation')
ylabel('Activation')
savefig(figdir / f'overview-evo_scores.png', dpi=300, bbox_inches='tight')
savefig(figdir / f'overview-evo_scores.svg', dpi=300, bbox_inches='tight')
def make_canvas(ims, nrows=None, ncols=None, margin=15, margin_colors=None):
if margin_colors is not None:
assert len(ims) == len(margin_colors)
if ncols is None:
assert nrows is not None
ncols = int(np.ceil(len(ims) / nrows))
else:
nrows = int(np.ceil(len(ims) / ncols))
im0 = ims.__iter__().__next__()
imsize = im0.shape[0]
size = imsize + margin
w = margin + size * ncols
h = margin + size * nrows
canvas = np.ones((h, w, 3), dtype=im0.dtype)
for i, im in enumerate(ims):
ih = i // ncols
iw = i % ncols
if len(im.shape) > 2 and im.shape[-1] == 4:
im = im[..., :3]
if margin_colors is not None:
canvas[size * ih:size * (ih + 1) + margin, size * iw:size * (iw + 1) + margin] = margin_colors[i]
canvas[margin + size * ih:margin + size * ih + imsize, margin + size * iw:margin + size * iw + imsize] = im
return canvas
scores_sel_max = scores_sel.max()
margin_colors = np.array([(s / scores_sel_max * c0) for s in scores_sel])
for i, im_idc in enumerate((slice(0,5), slice(5,None))):
canvas = make_canvas(ims_sel[im_idc], nrows=1,
margin_colors=margin_colors[im_idc])
figure(dpi=150)
imshow(canvas)
# turn off axis decorators to make tight plot
ax = gca()
ax.tick_params(labelcolor='none', bottom=False, left=False, right=False)
ax.set_frame_on(False)
for sp in ax.spines.values():
sp.set_visible(False)
ax.xaxis.set_ticks([])
ax.yaxis.set_ticks([])
plt.imsave(figdir / f'overview-evo_ims_{i}.png', canvas)
```
# Define Custom Violinplot
```
def violinplot2(data=None, x=None, y=None, hue=None,
palette=None, linewidth=1, orient=None,
order=None, hue_order=None, x_disp=None,
palette_per_violin=None, hline_at_1=True,
legend_palette=None, legend_kwargs=None,
width=0.7, control_width=0.8, control_y=None,
hues_share_control=False,
ax=None, **kwargs):
"""
width: width of a group of violins ("hues") as fraction of between-group distance
control_width: width of a group of bars (control) as fraction of hue width
"""
if order is None:
n_groups = len(set(data[x])) if orient != 'h' else len(set(data[y]))
else:
n_groups = len(order)
extra_plot_handles = []
if ax is None:
ax = plt.gca()
if orient == 'h':
fill_between = ax.fill_betweenx
plot = ax.vlines
else:
fill_between = ax.fill_between
plot = ax.hlines
############ drawing ############
if not isinstance(y, str) and hasattr(y, '__iter__'):
ys = y
else:
ys = (y,)
for y in ys:
ax = sns.violinplot(data=data, x=x, y=y, hue=hue, ax=ax,
palette=palette, linewidth=linewidth, orient=orient,
width=width, order=order, hue_order=hue_order, **kwargs)
if legend_kwargs is not None:
lgnd = plt.legend(**legend_kwargs)
else:
lgnd = None
if hline_at_1:
hdl = plot(1, -0.45, n_groups-0.55, linestyle='--', linewidth=.75, zorder=-3)
extra_plot_handles.append(hdl)
############ drawing ############
############ styling ############
if orient != 'h':
ax.xaxis.set_ticks_position('none')
if x_disp is not None:
ax.set_xticklabels(x_disp)
# enlarge the circle for median
median_marks = [o for o in ax.get_children() if isinstance(o, matplotlib.collections.PathCollection)]
for o in median_marks:
o.set_sizes([10,])
# recolor the violins
violins = np.array([o for o in ax.get_children() if isinstance(o, matplotlib.collections.PolyCollection)])
violins = violins[np.argsort([int(v.get_label().replace('_collection','')) for v in violins])]
for i, o in enumerate(violins):
if palette_per_violin is not None:
i %= len(palette_per_violin)
c = palette_per_violin[i]
if len(c) == 2:
o.set_facecolor(c[0])
o.set_edgecolor(c[1])
else:
o.set_facecolor(c)
o.set_edgecolor('none')
else:
o.set_edgecolor('none')
# recolor the legend patches
if lgnd is not None:
for v in (legend_palette, palette_per_violin, palette):
if v is not None:
legend_palette = v
break
if legend_palette is not None:
for o, c in zip(lgnd.get_patches(), legend_palette):
o.set_facecolor(c)
o.set_edgecolor('none')
############ styling ############
############ control ############
# done last to not interfere with coloring violins
if control_y is not None:
assert control_y in data.columns
assert hue is not None and order is not None and hue_order is not None
nhues = len(hue_order)
vw = width # width per control (long)
if not hues_share_control:
vw /= nhues
cw = vw * control_width # width per control (short)
ctl_hdl = None
for i, xval in enumerate(order):
if not hues_share_control:
for j, hval in enumerate(hue_order):
df_ = data[(data[x] == xval) & (data[hue] == hval)]
if not len(df_):
continue
lq, mq, uq = np.nanpercentile(df_[control_y].values, (25, 50, 75))
xs_qtl = i + vw * (-nhues/2 + 1/2 + j) + cw/2 * np.array((-1,1))
xs_med = i + vw * (-nhues/2 + j) + vw * np.array((0,1))
ctl_hdl = fill_between(xs_qtl, lq, uq, color=(0.9,0.9,0.9), zorder=-2) # upper & lower quartiles
plot(mq, *xs_med, color=(0.5,0.5,0.5), linewidth=1, zorder=-1) # median
else:
df_ = data[(data[x] == xval)]
if not len(df_):
continue
lq, mq, uq = np.nanpercentile(df_[control_y].values, (25, 50, 75))
xs_qtl = i + cw/2 * np.array((-1,1))
xs_med = i + vw/2 * np.array((-1,1))
ctl_hdl = fill_between(xs_qtl, lq, uq, color=(0.9,0.9,0.9), zorder=-2)
plot(mq, *xs_med, color=(0.5,0.5,0.5), linewidth=1, zorder=-1)
extra_plot_handles.append(ctl_hdl)
############ control ############
return n_groups, ax, lgnd, extra_plot_handles
def default_ax_lims(ax, n_groups=None, orient=None):
if orient == 'h':
ax.set_xticks((0,1,2,3))
ax.set_xlim(-0.25, 3.5)
else:
if n_groups is not None:
ax.set_xlim(-0.65, n_groups-0.35)
ax.set_yticks((0,1,2,3))
ax.set_ylim(-0.25, 3.5)
def rotate_xticklabels(ax, rotation=10, pad=5):
for i, tick in enumerate(ax.xaxis.get_major_ticks()):
if tick.label.get_text() == 'none':
tick.set_visible(False)
tick.label.set(va='top', ha='center', rotation=rotation, rotation_mode='anchor')
tick.set_pad(pad)
```
# Figure 3 - Compare Target Nets, Layers
```
df = pd.read_csv(datdir/'fig_2.csv')
df = df[~np.isnan(df['Rel_act'])] # remove invalid data
df.head()
nets = ('caffenet', 'resnet-152-v2', 'resnet-269-v2', 'inception-v3', 'inception-v4', 'inception-resnet-v2', 'placesCNN')
layers = {'caffenet': ('conv2', 'conv4', 'fc6', 'fc8'),
'resnet-152-v2': ('res15_eletwise', 'res25_eletwise', 'res35_eletwise', 'classifier'),
'resnet-269-v2': ('res25_eletwise', 'res45_eletwise', 'res60_eletwise', 'classifier'),
'inception-v3': ('pool2_3x3_s2', 'reduction_a_concat', 'reduction_b_concat', 'classifier'),
'inception-v4': ('inception_stem3', 'reduction_a_concat', 'reduction_b_concat', 'classifier'),
'inception-resnet-v2': ('stem_concat', 'reduction_a_concat', 'reduction_b_concat', 'classifier'),
'placesCNN': ('conv2', 'conv4', 'fc6', 'fc8')}
get_layer_level = lambda r: ('Early', 'Middle', 'Late', 'Output')[layers[r[1]['Classifier']].index(r[1]['Layer'])]
df['Layer_level'] = list(map(get_layer_level, df.iterrows()))
x_disp = ('CaffeNet', 'ResNet-152-v2', 'ResNet-269-v2', 'Inception-v3', 'Inception-v4', 'Inception-ResNet-v2', 'PlacesCNN')
palette = get_cmap('Blues')(np.linspace(0.3,0.8,4))
fig = figure(figsize=(6.3,2.5), dpi=150)
n_groups, ax, lgnd, hdls = violinplot2(
data=df, x='Classifier', y='Rel_act', hue='Layer_level', cut=0,
order=nets, hue_order=('Early', 'Middle', 'Late', 'Output'), x_disp=x_disp,
legend_kwargs=dict(title='Evolved,\ntarget layer', loc='upper left', bbox_to_anchor=(1,1.05)),
palette_per_violin=palette, control_y='Rel_exp_max')
default_ax_lims(ax, n_groups)
rotate_xticklabels(ax)
ylabel('Relative activation')
xlabel('Target architecture')
# another legend
legend(handles=hdls, labels=['Overall', 'In 10k'], title='ImageNet max',
loc='upper left', bbox_to_anchor=(1,0.4))
ax.add_artist(lgnd)
savefig(figdir / f'nets.png', dpi=300, bbox_inches='tight')
savefig(figdir / f'nets.svg', dpi=300, bbox_inches='tight')
```
# Figure 5 - Compare Generators
## Compare representation "depth"
```
df = pd.read_csv(datdir / 'fig_5-repr_depth.csv')
df = df[~np.isnan(df['Rel_act'])]
df['Classifier, layer'] = [', '.join(tuple(a)) for a in df[['Classifier', 'Layer']].values]
df.head()
nets = ('caffenet', 'inception-resnet-v2')
layers = {'caffenet': ('conv2', 'fc6', 'fc8'),
'inception-resnet-v2': ('classifier',)}
generators = ('raw_pixel', 'deepsim-norm1', 'deepsim-norm2', 'deepsim-conv3',
'deepsim-conv4', 'deepsim-pool5', 'deepsim-fc6', 'deepsim-fc7', 'deepsim-fc8')
xorder = ('caffenet, conv2', 'caffenet, fc6', 'caffenet, fc8', 'inception-resnet-v2, classifier')
x_disp = ('CaffeNet, conv2', 'CaffeNet, fc6', 'CaffeNet, fc8', 'Inception-ResNet-v2,\nclassifier')
lbl_disp = ('Raw pixel',) + tuple(v.replace('deepsim', 'DeePSiM') for v in generators[1:])
palette = ([[0.75, 0.75, 0.75]] + # raw pixel
sns.husl_palette(len(generators)-1, h=0.05, l=0.65)) # deepsim 1--8
fig = figure(figsize=(5.6,2.4), dpi=150)
n_groups, ax, lgnd, hdls = violinplot2(
data=df, x='Classifier, layer', y='Rel_act', hue='Generator',
cut=0, linewidth=.75, width=0.9, control_width=0.9,
order=xorder, hue_order=generators, x_disp=x_disp,
legend_kwargs=dict(title='Generator', loc='upper left', bbox_to_anchor=(1,1.05)),
palette=palette, control_y='Rel_exp_max', hues_share_control=True)
default_ax_lims(ax, n_groups)
ylabel('Relative activation')
xlabel('Target layer')
# change legend label text
for txt, lbl in zip(lgnd.get_texts(), lbl_disp):
txt.set_text(lbl)
savefig(figdir / f'generators.png', dpi=300, bbox_inches='tight')
savefig(figdir / f'generators.svg', dpi=300, bbox_inches='tight')
```
## Compare training dataset
```
df = pd.read_csv(datdir / 'fig_5-training_set.csv')
df = df[~np.isnan(df['Rel_act'])]
df['Classifier, layer'] = [', '.join(tuple(a)) for a in df[['Classifier', 'Layer']].values]
df.head()
nets = ('caffenet', 'inception-resnet-v2')
cs = ('caffenet', 'placesCNN', 'inception-resnet-v2')
layers = {c: ('conv2', 'conv4', 'fc6', 'fc8') for c in cs}
layers['inception-resnet-v2'] = ('classifier',)
gs = ('deepsim-fc6', 'deepsim-fc6-places365')
cls = ('caffenet, conv2', 'caffenet, conv4', 'caffenet, fc6', 'caffenet, fc8', 'inception-resnet-v2, classifier',
'placesCNN, conv2', 'placesCNN, conv4', 'placesCNN, fc6', 'placesCNN, fc8')
cls_spaced = cls[:5] + ('none',) + cls[5:]
x_disp = tuple(f'CaffeNet, {v}' for v in ('conv2', 'conv4', 'fc6', 'fc8')) + \
('Inception-ResNet-v2,\nclassifier', 'none') + \
tuple(f'PlacesCNN, {v}' for v in ('conv2', 'conv4', 'fc6', 'fc8'))
lbl_disp = ('DeePSiM-fc6', 'DeePSiM-fc6-Places365')
palette = [get_cmap(main_c)(np.linspace(0.3,0.8,4))
for main_c in ('Blues', 'Oranges')]
palette = list(np.array(palette).transpose(1,0,2).reshape(-1, 4))
palette = palette + palette[-2:] + palette
fig = figure(figsize=(5.15,1.8), dpi=150)
n_groups, ax, lgnd, hdls = violinplot2(
data=df, x='Classifier, layer', y='Rel_act', hue='Generator',
cut=0, split=True, inner='quartile',
order=cls_spaced, hue_order=gs, x_disp=x_disp,
legend_kwargs=dict(title='Generator', loc='upper left', bbox_to_anchor=(.97,1.05)),
palette_per_violin=palette, legend_palette=palette[4:],
control_y='Rel_exp_max', hues_share_control=True)
rotate_xticklabels(ax, rotation=15, pad=10)
ylabel('Relative activation')
xlabel('Target layer')
# change legend label text
for txt, lbl in zip(lgnd.get_texts(), lbl_disp):
txt.set_text(lbl)
savefig(figdir / f'generators2.png', dpi=300, bbox_inches='tight')
savefig(figdir / f'generators2.svg', dpi=300, bbox_inches='tight')
```
# Figure 4 - Compare Inits
```
layers = ('conv2', 'conv4', 'fc6', 'fc8')
layers_disp = tuple(v.capitalize() for v in layers)
```
## Rand inits, fraction change
```
df = pd.read_csv(datdir/'fig_4-rand_init.csv').set_index(['Layer', 'Unit', 'Init_seed'])
df = (df.drop(0, level='Init_seed') - df.xs(0, level='Init_seed')).mean(axis=0,level=('Layer','Unit'))
df = df.rename({'Rel_act': 'Fraction change'}, axis=1)
df = df.reset_index()
df.head()
palette = get_cmap('Blues')(np.linspace(0.2,0.9,6)[1:-1])
fig = figure(figsize=(1.75,1.5), dpi=150)
n_groups, ax, lgnd, hdls = violinplot2(
data=df, x='Layer', y='Fraction change',
cut=0, width=0.9, palette=palette,
order=layers, x_disp=layers_disp, hline_at_1=False)
xlabel('Target CaffeNet layer')
ylim(-0.35, 0.35)
yticks((-0.25,0,0.25))
ax.set_yticklabels([f'{t:.2f}' for t in (-0.25,0,0.25)])
ax.set_yticks(np.arange(-0.3,0.30,0.05), minor=True)
savefig(figdir / f'inits-change.png', dpi=300, bbox_inches='tight')
savefig(figdir / f'inits-change.svg', dpi=300, bbox_inches='tight')
```
## Rand inits, interpolation
```
df = pd.read_csv(datdir/'fig_4-rand_init_interp.csv').set_index(['Layer', 'Unit', 'Seed_i0', 'Seed_i1'])
df = df.mean(axis=0,level=('Layer','Unit'))
df2 = pd.read_csv(datdir/'fig_4-rand_init_interp-2.csv').set_index(['Layer', 'Unit']) # control conditions
df2_normed = df2.divide(df[['Rel_act_loc_0.0','Rel_act_loc_1.0']].mean(axis=1),axis=0)
df_normed = df.divide(df[['Rel_act_loc_0.0','Rel_act_loc_1.0']].mean(axis=1),axis=0)
df_normed.head()
fig, axs = subplots(1, 2, figsize=(3.5,1.5), dpi=150)
subplots_adjust(wspace=0.5)
interp_xs = np.array([float(i[i.rfind('_')+1:]) for i in df.columns])
for ax, df_ in zip(axs, (df, df_normed)):
df_mean = df_.mean(axis=0, level='Layer')
df_std = df_.std(axis=0, level='Layer')
for l, ld, c in zip(layers, layers_disp, palette):
m = df_mean.loc[l].values
s = df_std.loc[l].values
ax.plot(interp_xs, m, c=c, label=ld)
ax.fill_between(interp_xs, m-s, m+s, fc=c, ec='none', alpha=0.1)
# plot control
xs2 = (interp_xs.min(), interp_xs.max())
axs[0].hlines(1, *xs2, linestyle='--', linewidth=1)
for l, c in zip(layers, palette):
# left subplot: relative activation
df_ = df2.loc[l]
mq = np.nanmedian(df_['Rel_ImNet_median_act'].values)
axs[0].plot(xs2, (mq, mq), color=c, linewidth=1.15, zorder=-2)
# right subplot: normalized to endpoints
df_ = df2_normed.loc[l]
for k, ls, lw in zip(('Rel_exp_max', 'Rel_ImNet_median_act'), ('--','-'), (1, 1.15)):
mq = np.nanmedian(df_[k].values)
axs[1].plot(xs2, (mq, mq), color=c, ls=ls, linewidth=lw, zorder=-2)
axs[0].set_yticks((0, 1, 2))
axs[1].set_yticks((0, 0.5, 1))
axs[0].set_ylabel('Relative activation')
axs[1].set_ylabel('Normalized activation')
for ax in axs:
ax.set_xlabel('Interpolation location')
lgnd = axs[-1].legend(loc='upper left', bbox_to_anchor=(1.05, 1.05))
legend(handles=[Line2D([0], [0], color='k', lw=1, ls='--', label='Max'),
Line2D([0], [0], color='k', lw=1.15, label='Median')],
title='ImageNet ref.',
loc='upper left', bbox_to_anchor=(1.05,0.3))
ax.add_artist(lgnd)
savefig(figdir / f'inits-interp.png', dpi=300, bbox_inches='tight')
savefig(figdir / f'inits-interp.svg', dpi=300, bbox_inches='tight')
```
## Per-neuron inits
```
df = pd.read_csv(datdir/'fig_4-per_neuron_init.csv')
df.head()
hue_order = ('rand', 'none', 'worst_opt', 'mid_opt', 'best_opt',
'worst_ivt', 'mid_ivt', 'best_ivt')
palette = [get_cmap(main_c)(np.linspace(0.3,0.8,4))
for main_c in ('Blues', 'Greens', 'Purples')]
palette = np.concatenate([[
palette[0][i]] * 1 + [palette[1][i]] * 3 + [palette[2][i]] * 3
for i in range(4)])
palette = tuple(palette) + tuple(('none', c) for c in palette)
fig = figure(figsize=(6.3,2), dpi=150)
n_groups, ax, lgnd, hdls = violinplot2(
data=df, x='Layer', y=('Rel_act', 'Rel_act_init'), hue='Init_name', cut=0,
order=layers, hue_order=hue_order, x_disp=x_disp,
palette_per_violin=palette)
ylabel('Relative activation')
xlabel('Target CaffeNet layer')
# create custom legends
# for init methods
legend_elements = [
matplotlib.patches.Patch(facecolor=palette[14+3*i], edgecolor='none', label=l)
for i, l in enumerate(('Random', 'Opt', 'Ivt'))]
lgnd1 = legend(handles=legend_elements, title='Init. method',
loc='upper left', bbox_to_anchor=(1,1.05))
# for generation condition
legend_elements = [
matplotlib.patches.Patch(facecolor='gray', edgecolor='none', label='Final'),
matplotlib.patches.Patch(facecolor='none', edgecolor='gray', label='Initial')]
ax.legend(handles=legend_elements, title='Generation',
loc='upper left', bbox_to_anchor=(1,.45))
ax.add_artist(lgnd1)
savefig(figdir / f'inits-per_neuron.png', dpi=300, bbox_inches='tight')
savefig(figdir / f'inits-per_neuron.svg', dpi=300, bbox_inches='tight')
```
# Figure 6 - Compare Optimizers & Stoch Scales
## Compare optimizers
```
df = pd.read_csv(datdir/'fig_6-optimizers.csv')
df['OCL'] = ['_'.join(v) for v in df[['Optimizer','Classifier','Layer']].values]
df.head()
opts = ('genetic', 'FDGD', 'NES')
layers = {'caffenet': ('conv2', 'conv4', 'fc6', 'fc8'),
'inception-resnet-v2': ('classifier',)}
cls = [(c, l) for c in layers for l in layers[c]]
xorder = tuple(f'{opt}_{c}_{l}' for c in layers for l in layers[c]
for opt in (opts + ('none',)))[:-1]
x_disp = ('CaffeNet, conv2', 'CaffeNet, conv4', 'CaffeNet, fc6', 'CaffeNet, fc8',
'Inception-ResNet-v2,\nclassifier')
opts_disp = ('Genetic', 'FDGD', 'NES')
palette = [get_cmap(main_c)(np.linspace(0.3,0.8,4))
for main_c in ('Blues', 'Oranges', 'Greens')]
palette = np.concatenate([
np.concatenate([[palette[j][i], palette[j][i]/2+0.5] for j in range(3)])
for i in (0,1,2,3,3)])
fig = figure(figsize=(6.75,2.75), dpi=150)
n_groups, ax, lgnd, hdls = violinplot2(
data=df, x='OCL', y='Rel_act', hue='Noisy',
cut=0, inner='quartiles', split=True, width=1,
order=xorder, palette_per_violin=palette)
default_ax_lims(ax, n_groups)
xticks(np.arange(1,20,4), labels=x_disp)
xlabel('Target layer', labelpad=0)
ylabel('Relative activation')
# create custom legends
# for optimizers
legend_patches = [matplotlib.patches.Patch(facecolor=palette[i], edgecolor='none', label=opt)
for i, opt in zip(range(12,18,2), opts_disp)]
lgnd1 = legend(handles=legend_patches, title='Optimization alg.',
loc='upper left', bbox_to_anchor=(0,1))
# for noise condition
legend_patches = [matplotlib.patches.Patch(facecolor=(0.5,0.5,0.5), edgecolor='none', label='Noiseless'),
matplotlib.patches.Patch(facecolor=(0.8,0.8,0.8), edgecolor='none', label='Noisy')]
legend(handles=legend_patches, loc='upper right', bbox_to_anchor=(1,1))
ax.add_artist(lgnd1)
# plot control
group_width_ = 4
for i, cl in enumerate(cls):
i = i * group_width_ + 1
df_ = df[(df['Classifier'] == cl[0]) & (df['Layer'] == cl[1])]
lq, mq, uq = np.nanpercentile(df_['Rel_exp_max'].values, (25, 50, 75))
xs_qtl = i+np.array((-1,1))*group_width_*0.7/2
xs_med = i+np.array((-1,1))*group_width_*0.75/2
fill_between(xs_qtl, lq, uq, color=(0.9,0.9,0.9), zorder=-2)
plot(xs_med, (mq, mq), color=(0.5,0.5,0.5), linewidth=1.15, zorder=-1)
savefig(figdir / f'optimizers.png', dpi=300, bbox_inches='tight')
savefig(figdir / f'optimizers.svg', dpi=300, bbox_inches='tight')
```
## Compare varying amounts of noise
```
df = pd.read_csv(datdir/'fig_6-stoch_scales.csv')
df = df[~np.isnan(df['Rel_noise'])]
df['Stoch_scale_plot'] = [str(int(v)) if ~np.isnan(v) else 'None' for v in df['Stoch_scale']]
df.head()
layers = ('conv2', 'conv4', 'fc6', 'fc8')
stoch_scales = list(map(str, (5, 10, 20, 50, 75, 100, 250))) + ['None']
stoch_scales_disp = stoch_scales[:-1] + ['No\nnoise']
stat_keys = ('Self_correlation', 'Rel_noise', 'SNR')
stat_keys_disp = ('Self correlation', 'Stdev. : mean ratio', 'Signal-to-noise ratio')
palette = [get_cmap('Blues')(np.linspace(0.3,0.8,4))[2]] # to match previous color
# calculate noise statstics and define their formatting
format_frac = lambda v: ('%.2f' % v)[1:] if (0 < v < 1) else '0' if v == 0 else str(v)
def format_sci(v):
v = '%.0e' % v
if v == 'inf':
return v
m, s = v.split('e')
s = int(s)
if s:
if False: #s > 1:
m = re.split('0+$', m)[0]
m += 'e%d' % s
else:
m = str(int((float(m) * np.power(10, s))))
return m
fmts = (format_frac, format_frac, format_sci)
byl_byss_stats = {k: {} for k in stat_keys}
for l in layers:
df_ = df[df['Layer'] == l]
stats = {k: [] for k in stat_keys}
for ss in stoch_scales:
df__ = df_[df_['Stoch_scale_plot'] == ss]
for k in stat_keys:
stats[k].append(np.median(df__[k]))
for k in stats.keys():
byl_byss_stats[k][l] = stats[k]
fig, axs = subplots(1, 4, figsize=(5.25, 2), dpi=150, sharex=True, sharey=True, squeeze=False)
axs = axs.flatten()
subplots_adjust(wspace=0.05)
for l, ax in zip(layers, axs):
df_ = df[df['Layer'] == l]
n_groups, ax, lgnd, hdls = violinplot2(
data=df_, x='Rel_act', y='Stoch_scale_plot', orient='h',
cut=0, width=.85, scale='width',
palette=palette, ax=ax)
ax.set_title(f'CaffeNet, {l}', fontsize=8)
default_ax_lims(ax, n_groups, orient='h')
ax.set_xlabel(None)
# append more y-axes to last axis
pars = [twinx(ax) for _ in range(len(stat_keys))]
ylim_ = ax.get_ylim()
for i, (par, k, fmt, k_disp) in enumerate(zip(pars, stat_keys, fmts, stat_keys_disp)):
par.set_frame_on(True)
par.patch.set_visible(False)
par.spines['right'].set_visible(True)
par.yaxis.set_ticks_position('right')
par.yaxis.set_label_position('right')
par.yaxis.labelpad = 2
par.spines['right'].set_position(('axes', 1+.6*i))
par.set_ylabel(k_disp)
par.set_yticks(range(len(stoch_scales)))
par.set_yticklabels(map(fmt, byl_byss_stats[k][l]))
par.set_ylim(ylim_)
axs[0].set_ylabel('Expected max firing rate, spks')
axs[0].set_yticklabels(stoch_scales_disp)
for ax in axs[1:]:
ax.set_ylabel(None)
ax.yaxis.set_tick_params(left=False)
# joint
ax = fig.add_subplot(111, frameon=False)
ax.tick_params(labelcolor='none', bottom=False, left=False, right=False)
ax.set_frame_on(False)
ax.set_xlabel('Relative activation')
savefig(figdir / 'stoch_scales.png', dpi=300, bbox_inches='tight')
savefig(figdir / 'stoch_scales.svg', dpi=300, bbox_inches='tight')
```
```
#uncomment this to install the library
# !pip3 install pygeohash
```
## Libraries and auxiliary functions
```
#load the libraries
from time import sleep
from kafka import KafkaConsumer
import datetime as dt
import pygeohash as pgh
#functions to check location proximity based on the geohash (precision=5)
#function to check whether two data points are close in location
def close_location (data1,data2):
print("checking location...of sender",data1.get("id")," and sender" , data2.get("id"))
#with precision=5, we find locations that are close together, within a radius of around 2.4 km
if data1.get("geohash")== data2.get("geohash"):
print("=>>>>>sender",str(data1.get("id")),"location near ", "sender",str(data2.get("id")),"location")
else:
print('>>>not close together<<<')
#function to check location between the joined data and another data (e.g hotspot data)
def close_location_2 (data1,data2):
print("checking location...of joined data id:",data1.get("id")," and sender" , data2.get("id"))
#with the precision =5 , we find the location that close together with the radius 2.4km
if data1.get("geohash")== data2.get("geohash"):
print("=>>>> location",str(data1.get("geohash")),"location near ", str(data2.get("geohash")),"location")
else:
print('>>>not close together<<<')
# check location of 2 climate data stored in the list
def close_location_in_list(a_list):
print('check 2 climate location data')
data_1 = a_list[0]
data_2 = a_list[1]
close_location (data_1,data_2)
#auxiliary functions to handle averaging and joining of the JSON data
#function to merge satellite data
def merge_sat(data1,data2):
result ={}
result["_id"] = data1.get("_id") # take satellite _id ,we will store this joined data to the hotspot collection
result["created_time"] = data1.get("created_time")
#average the result of the location
result['surface_temperature_celsius'] = (float(data1.get("surface_temperature_celsius"))+float(data2.get("surface_temperature_celsius")))/2
result["confidence"] = (float(data1.get("confidence"))+float(data2.get("confidence")))/2
#reassign the location like the initial data structure
result['geohash'] = data2.get('geohash')
result["location"] = data1.get("location")
return result
# function to join climate data and satellite data
def join_data_cli_sat(climData,satData):
result={}
#get location and id of the join data
result["_id"] = climData.get("_id") # take climate _id ,we will store this joined data to the climate collection
result['geohash'] = climData.get('geohash')
result["location"] = climData.get("location")
result["created_time"] = climData.get("created_time")
#get climate data
result["air_temperature_celsius"] = climData.get("air_temperature_celsius")
result["relative_humidity"] = climData.get("relative_humidity")
result["max_wind_speed"] = climData.get("max_wind_speed")
result["windspeed_knots"] = climData.get("windspeed_knots")
result["precipitation"] = climData.get("precipitation")
#get satellite data
result["surface_temperature_celsius"] = satData.get("surface_temperature_celsius")
result["confidence"] = satData.get("confidence")
result["hotspots"] = satData.get("_id") #reference to the hotspot data like in the task A_B
return result
```
## Streaming Application
```
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.0 pyspark-shell'
import sys
import time
import json
from pymongo import MongoClient
from pyspark import SparkContext, SparkConf
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
def sendDataToDB(iter):
client = MongoClient()
db = client.fit5148_assignment_db
# MongoDB design
sat_col = db.hotspot #to store satellite data and joined satellite data
# to store the join between climate and satellite
clim_col = db.climate #to store the climate data
#list of senders per iter
sender = []
#variable to store the data from 3 unique senders per iter
climList = []
satData_2 = {}
satData_3 = {}
##################################### PARSING THE DATA FROM SENDERS PER ITER###########################################
for record in iter:
sender.append(record[0])
data_id = json.loads(record[1])
data = data_id.get('data')
if record[0] == "sender_2" : #parse AQUA satelite data
#main data
#add "AQUA" string to the "_id" to handle the case when 2 satellite data come at the same time
#to make sure the incomming data from AQUA at a specific time is unique
satData_2["_id"] = "AQUA" +str(dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S"))
satData_2["id"] = data_id.get("sender_id") #unique sender_id
#use datetime as ISO format for readable in mongoDB
satData_2["created_time"] = dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S")
# parse other data
satData_2["location"] = {"latitude" : float(data.get("lat")), "longitude" : float(data.get("lon"))}
satData_2["surface_temperature_celsius"] = float(data.get("surface_temp"))
satData_2["confidence"] = float(data.get("confidence"))
geohash = pgh.encode(float(data.get("lat")),float(data.get("lon")),precision=5)
satData_2["geohash"] = geohash #unique_location
if record[0] == "sender_3": #parse TERRA satelite data
#main data
#add "TERRA" string to the "_id" to handle the case when 2 satellite data come at the same time
#to make sure the incomming data for TERRA at a specific time is unique
satData_3["_id"] = "TERRA" +str(dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S"))
satData_3["id"] = data_id.get("sender_id") #unique sender_id
#use datetime as ISO format for readable in mongoDB
satData_3["created_time"] = dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S")
# parse other data
satData_3["location"] = {"latitude" : float(data.get("lat")), "longitude" : float(data.get("lon"))}
satData_3["surface_temperature_celsius"] = float(data.get("surface_temp"))
satData_3["confidence"] = float(data.get("confidence"))
geohash = pgh.encode(float(data.get("lat")),float(data.get("lon")),precision=5)
satData_3["geohash"] = geohash #unique_location
if record[0] == "sender_1": #parse climate data
climData = {}
#main data
#add "CLIM" string to the "_id" to handle to make sure the incomming data for
#climate at a specific time is unique
climData["_id"] = "CLIM" + str(dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S"))
climData["id"] = data_id.get("sender_id") #unique sender_id
#use datetime as ISO format for readable in mongoDB
climData["created_time"] = dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S")
climData["location"] = {"latitude" : float(data.get("lat")), "longitude" : float(data.get("lon"))}
climData["air_temperature_celsius"] = float(data.get("air_temp"))
climData["relative_humidity"] = float(data.get("relative_humid"))
climData["max_wind_speed"] = float(data.get("max_wind_speed"))
climData["windspeed_knots"] = float(data.get("windspeed"))
climData["precipitation"] = data.get("prep")
geohash = pgh.encode(float(data.get("lat")),float(data.get("lon")),precision=5)
climData["geohash"] = geohash
climList.append(climData)
uniq_sender_id = set(sender) #check unique sender for each iter
################################ PERFORMING JOIN AND LOCATION CHECK, THEN PUSH TO MONGODB ##################################
####################### Received only from unique one sender
#for climate data, there can be a case where 2 streams of climate data go through the app
if len(uniq_sender_id) == 1 and "sender_1" in uniq_sender_id:#store the climate data to MongoDB
print("---------------------received CLIMATE data------------------------")
try:
#find close location in climate data and print out
if len(climList) > 1:
#check 2 climate location data
close_location_in_list(climList)
for data in climList:
clim_col.insert(data)
except Exception as ex:
print("Exception Occured. Message: {0}".format(str(ex)))
# if there is one satellite data (AQUA), there will be no case with 2 same satelite data
if len(uniq_sender_id) == 1 and "sender_2" in uniq_sender_id:#store to climate data to mongoDB
print("---------------------received AQUA data------------------------")
try:
sat_col.insert(satData_2)
except Exception as ex:
print("Exception Occured. Message: {0}".format(str(ex)))
# if there is one satellite data (TERRA) , there will be no case with 2 same satelite data
if len(uniq_sender_id) == 1 and "sender_3" in uniq_sender_id:#store to climate data to mongoDB
print("---------------------received TERRA data------------------------")
try:
sat_col.insert(satData_3)
except Exception as ex:
print("Exception Occured. Message: {0}".format(str(ex)))
########################## Received from 2 unique senders
elif len(sender) == 2 and len(uniq_sender_id) == 2:
print("---------------------received 2 streams------------------------")
#there will be at least 1 climate record in each batch
#if the consumer received 2 records, they are one climate record and one satellite record,
#or 2 climate records (we assume there is at least 1 climate record in the stream)
try:
for climate in climList:
if len(satData_3)!=0:
#check location
close_location(climate,satData_3)
#check lat lon first!!!
print('---checking TERRA and Climate location---')
if satData_3["location"] == climate["location"]:
print('joining....')
join_cli_sat = join_data_cli_sat(climate,satData_3)
clim_col.insert(join_cli_sat)
sat_col.insert(satData_3)
else:
print('no join')
sat_col.insert(satData_3)
clim_col.insert(climate)
elif len(satData_2)!=0:
#check close location
close_location(climate,satData_2)
print('---checking AQUA and Climate location---')
#check lat lon first!!!
if satData_2["location"] == climate["location"]:
print('joining....')
join_cli_sat = join_data_cli_sat(climate,satData_2)
clim_col.insert(join_cli_sat)
sat_col.insert(satData_2)
else:
print('no join')
sat_col.insert(satData_2)
clim_col.insert(climate)
else: #received only 2 climate data
print('received 2 climate data')
clim_col.insert(climate)
# if we received only 2 satellite records (rare case, we ran out of climate data)
if len(climList) == 0:
if len(satData_3)!=0 and len(satData_2)!=0:
#check location
close_location(satData_3,satData_2)
print('---checking AQUA and TERRA location---')
if satData_2["location"] == satData_3["location"]:
print('joining....')
sat_data = merge_sat(satData_2,satData_3)
#insert the data into the mongo with handling the exceptions : duplicate
sat_col.insert(sat_data)
else:
sat_col.update(satData_3, satData_3, upsert=True)
sat_col.update(satData_2, satData_2, upsert=True)
except Exception as ex:
print("Exception Occured. Message: {0}".format(str(ex))) #exception will occur with empty satelite data
#########################################################Received 3 streams#################################
#we assume that there is at least 1 climate record in the stream, so if we receive 3 streams of data
#there will be 2 climate records and 1 satellite record, because the app processes 10-second batches
if len(sender) == 3:
print("---------------------received 3 streams------------------------")
try:
if len(climList) > 1:
#check 2 climate location data
close_location_in_list(climList)
for climate2 in climList:
if len(satData_3)!=0:
#check location
close_location(climate2,satData_3)
print('---checking TERRA and Climate location---')
if satData_3["location"] == climate2["location"]:
print('joining....')
join_data = join_data_cli_sat(climate2,satData_3)
clim_col.insert(join_data)
sat_col.update(satData_3, satData_3, upsert=True)
else:
print('no join')
clim_col.insert(climate2)
#insert the data into the mongo with handling the exceptions : duplicate
sat_col.update(satData_3, satData_3, upsert=True)
elif len(satData_2)!=0:
#check location
close_location(climate2,satData_2)
print('---checking AQUA and Climate location---')
if satData_2["location"] == climate2["location"]:
print('joining....')
join_data = join_data_cli_sat(climate2,satData_2)
clim_col.insert(join_data)
sat_col.update(satData_2, satData_2, upsert=True)
else:
print('no join')
clim_col.insert(climate2)
#insert the data into the mongo with handling the exceptions : duplicate
sat_col.update(satData_2, satData_2, upsert=True)
except Exception as ex:
print("Exception Occured. Message: {0}".format(str(ex)))
########################################Received 4 streams of data#################################
# There will be 2 climate data and 2 satellite data from AQUA and TERRA
elif len(sender) ==4 : # 4 will have 2 climate data and 2 sat data
print("---------------------received 4 streams------------------------")
try:
if len(climList) > 1:
#check 2 climate location data
close_location_in_list(climList)
for climate2 in climList:
print('---checking AQUA , TERRA and Climate location---')
#location sat2=sat3=climate
if (satData_2["location"] == satData_3["location"])\
and (satData_2["location"] == climate2["location"]):
print('joining....')
#join 2 satellite data
sat_data = merge_sat(satData_2,satData_3)
sat_col.update(sat_data, sat_data, upsert=True)
#join with the climate file
final_data = join_data_cli_sat(climate2,sat_data)
clim_col.insert(final_data)
#location sat2=sat3
elif (satData_2["location"] == satData_3["location"])\
and (satData_2["location"] != climate2["location"]):
print('joining....')
sat_data = merge_sat(satData_2,satData_3)
#insert the data into the mongo with handling the exceptions : duplicate
sat_col.update(sat_data, sat_data, upsert=True)
clim_col.insert(climate2)
#check location
close_location_2(sat_data,climate2)
#location sat2=climate
elif (satData_2["location"] != satData_3["location"])\
and (satData_2["location"] == climate2["location"]):
print('joining....')
join_data = join_data_cli_sat(climate2,satData_2)
clim_col.insert(join_data)
#insert the data into the mongo with handling the exceptions : duplicate
sat_col.update(satData_3, satData_3, upsert=True)
sat_col.update(satData_2, satData_2, upsert=True)
#
#check location
close_location_2(join_data,satData_3)
#location sat3 =climate
elif (satData_2["location"] != satData_3["location"])\
and (satData_3["location"] == climate2["location"]):
print('joining....')
join_data = join_data_cli_sat(climate2,satData_3)
clim_col.insert(join_data)
#insert the data into the mongo with handling the exceptions : duplicate
sat_col.update(satData_3, satData_3, upsert=True)
sat_col.update(satData_2, satData_2, upsert=True)
#
#check location
close_location_2(join_data,satData_2)
#if nothing to merge
else:
print('no join')
#check location
close_location(climate2,satData_2)
close_location(climate2,satData_3)
close_location(satData_2,satData_3)
clim_col.insert(climate2)
#insert the data into the mongo with handling the exceptions
sat_col.update(satData_3, satData_3, upsert=True)
sat_col.update(satData_2, satData_2, upsert=True)
except Exception as ex:
print("Exception Occured. Message: {0}".format(str(ex)))
client.close()
################################################ INITIATE THE STREAM ################################################
n_secs = 10 # set batch to 10 seconds
topic = 'TaskC'
conf = SparkConf().setAppName("KafkaStreamProcessor").setMaster("local[2]") #use 2 local cores
sc = SparkContext.getOrCreate(conf=conf) # reuse the running SparkContext if one already exists
sc.setLogLevel("WARN")
ssc = StreamingContext(sc, n_secs)
kafkaStream = KafkaUtils.createDirectStream(ssc, [topic], {
'bootstrap.servers':'localhost:9092',
'group.id':'taskC-group',
'fetch.message.max.bytes':'15728640',
'auto.offset.reset':'largest'})
# Group ID is completely arbitrary
kafkaStream.foreachRDD(lambda rdd: rdd.foreachPartition(sendDataToDB))
# print the message keys to check which senders' data went through the app in each batch
a = kafkaStream.map(lambda x:x[0])
a.pprint()
ssc.start()
# ssc.awaitTermination()
# ssc.start()
time.sleep(3000) # Run the stream for 50 minutes (3000 seconds) to collect data for visualisation
# # ssc.awaitTermination()
ssc.stop(stopSparkContext=True,stopGraceFully=True)
```
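For reference, below is a minimal sketch of a producer that could feed this consumer. It assumes `kafka-python` is installed; the topic name matches the one used above, and the payload fields mirror what `sendDataToDB` parses for climate data, but the actual assignment producers may differ:
```
from kafka import KafkaProducer
import datetime as dt
import json

producer = KafkaProducer(bootstrap_servers='localhost:9092')

# the consumer keys each record by sender name and expects a JSON payload
# containing 'sender_id', 'created_time' and a nested 'data' dict
payload = {
    "sender_id": 1,
    "created_time": dt.datetime.now().strftime("%Y-%m-%dT%H:%M:%S"),
    "data": {"lat": "-37.81", "lon": "144.96", "air_temp": "21",
             "relative_humid": "45", "max_wind_speed": "10",
             "windspeed": "5.4", "prep": "0.00G"}
}
producer.send('TaskC', key=b'sender_1', value=json.dumps(payload).encode('utf-8'))
producer.flush()
```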
<a href="https://www.kaggle.com/aaroha33/text-summarization-attention-mechanism?scriptVersionId=85928705" target="_blank"><img align="left" alt="Kaggle" title="Open in Kaggle" src="https://kaggle.com/static/images/open-in-kaggle.svg"></a>
<font size="+5" color=Green > <b> <center><u>
<br>Text Summarization
<br>Sequenece to Sequence Modelling
<br>Attention Mechanism </u> </font>
# Import Libraries
```
#import all the required libraries
import numpy as np
import pandas as pd
import pickle
from statistics import mode
import nltk
from nltk import word_tokenize
from nltk.stem import LancasterStemmer
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from tensorflow.keras.models import Model
from tensorflow.keras import models
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import plot_model
from tensorflow.keras.layers import Input,LSTM,Embedding,Dense,Concatenate,Attention
from sklearn.model_selection import train_test_split
from bs4 import BeautifulSoup
import warnings
pd.set_option("display.max_colwidth", 200)
warnings.filterwarnings("ignore")
from tensorflow.keras.callbacks import EarlyStopping
```
# Parse the Data
We’ll take a sample of the reviews (10,000 rows here) to reduce the training time of our model.
```
#read the dataset file for text Summarizer
df=pd.read_csv("../input/amazon-fine-food-reviews/Reviews.csv",nrows=10000)
# df = pd.read_csv("../input/amazon-fine-food-reviews/Reviews.csv")
#drop the duplicate and na values from the records
df.drop_duplicates(subset=['Text'],inplace=True)
df.dropna(axis=0,inplace=True) #dropping na
input_data = df.loc[:,'Text']
target_data = df.loc[:,'Summary']
target_data.replace('', np.nan, inplace=True)
df.info()
df['Summary'][:10]
df['Text'][:10]
```
# Preprocessing
Performing basic preprocessing steps is very important before we get to the model building part. Using messy and uncleaned text data is a potentially disastrous move. So in this step, we will drop all the unwanted symbols, characters, etc. from the text that do not affect the objective of our problem.
Here is the dictionary that we will use for expanding the contractions:
```
contraction_mapping = {"ain't": "is not", "aren't": "are not","can't": "cannot", "'cause": "because", "could've": "could have", "couldn't": "could not",
"didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not",
"he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is",
"I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would",
"i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would",
"it'd've": "it would have", "it'll": "it will", "it'll've": "it will have","it's": "it is", "let's": "let us", "ma'am": "madam",
"mayn't": "may not", "might've": "might have","mightn't": "might not","mightn't've": "might not have", "must've": "must have",
"mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have","o'clock": "of the clock",
"oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have",
"she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is",
"should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have","so's": "so as",
"this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would",
"there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have",
"they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have",
"wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are",
"we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are",
"what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is",
"where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have",
"why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have",
"would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all",
"y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have",
"you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have",
"you're": "you are", "you've": "you have"}
```
We can handle contractions in two ways: use the dictionary defined above, or keep a contractions file as a dataset and load it with pickle (as in the commented-out line below).
```
input_texts=[] # Text column
target_texts=[] # summary column
input_words=[]
target_words=[]
# contractions=pickle.load(open("../input/contraction/contractions.pkl","rb"))['contractions']
contractions = contraction_mapping
#initialize stop words and LancasterStemmer
stop_words=set(stopwords.words('english'))
stemm=LancasterStemmer()
```
# Data Cleaning
```
def clean(texts,src):
texts = BeautifulSoup(texts, "lxml").text #remove the html tags
words=word_tokenize(texts.lower()) #tokenize the text into words
#keep only alphabetic words with a length of at least 3
#(this filters out numbers and very short words)
words= list(filter(lambda w:(w.isalpha() and len(w)>=3),words))
#contraction file to expand shortened words
words= [contractions[w] if w in contractions else w for w in words ]
#for input text, stem each word to its root and remove stop words; for target text, only remove stop words
if src=="inputs":
words= [stemm.stem(w) for w in words if w not in stop_words]
else:
words= [w for w in words if w not in stop_words]
return words
#pass the input records and target records
for in_txt,tr_txt in zip(input_data,target_data):
in_words= clean(in_txt,"inputs")
input_texts+= [' '.join(in_words)]
input_words+= in_words
#add 'sos' at start and 'eos' at end of text
tr_words= clean("sos "+tr_txt+" eos","target")
target_texts+= [' '.join(tr_words)]
target_words+= tr_words
#store only unique words from input and target list of words
input_words = sorted(list(set(input_words)))
target_words = sorted(list(set(target_words)))
num_in_words = len(input_words) #total number of input words
num_tr_words = len(target_words) #total number of target words
#get the length of the input and target texts which appears most often
max_in_len = mode([len(i) for i in input_texts])
max_tr_len = mode([len(i) for i in target_texts])
print("number of input words : ",num_in_words)
print("number of target words : ",num_tr_words)
print("maximum input length : ",max_in_len)
print("maximum target length : ",max_tr_len)
```
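As a quick sanity check, the cleaning function can be applied to a made-up review (the sentence below is illustrative only and assumes the NLTK downloads and the cells above have already been run):
```
sample = "I didn't like these cookies! They were far too sweet for me."
print(clean(sample, "inputs"))  # stemmed, stop words removed
print(clean(sample, "target"))  # stop words removed, no stemming
```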
# Split it
```
#split the input and target texts in an 80:20 ratio, i.e. a test size of 20%
x_train,x_test,y_train,y_test=train_test_split(input_texts,target_texts,test_size=0.2,random_state=0)
#train the tokenizer with all the words
in_tokenizer = Tokenizer()
in_tokenizer.fit_on_texts(x_train)
tr_tokenizer = Tokenizer()
tr_tokenizer.fit_on_texts(y_train)
#convert text into sequence of integers
#where the integer will be the index of that word
x_train= in_tokenizer.texts_to_sequences(x_train)
y_train= tr_tokenizer.texts_to_sequences(y_train)
#pad array of 0's if the length is less than the maximum length
en_in_data= pad_sequences(x_train, maxlen=max_in_len, padding='post')
dec_data= pad_sequences(y_train, maxlen=max_tr_len, padding='post')
#decoder input data will not include the last word
#i.e. 'eos' in decoder input data
dec_in_data = dec_data[:,:-1]
#decoder target data will be one time step ahead as it will not include
# the first word i.e 'sos'
dec_tr_data = dec_data.reshape(len(dec_data),max_tr_len,1)[:,1:]
```
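A small sketch to confirm the teacher-forcing setup: the decoder input drops the last token while the decoder target drops the first one, so the target is always one time step ahead of the input:
```
print("encoder input shape :", en_in_data.shape)
print("decoder input shape :", dec_in_data.shape)   # (samples, max_tr_len - 1)
print("decoder target shape:", dec_tr_data.shape)   # (samples, max_tr_len - 1, 1)
```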
# Model Building
```
K.clear_session()
latent_dim = 500
#create input object of total number of encoder words
en_inputs = Input(shape=(max_in_len,))
en_embedding = Embedding(num_in_words+1, latent_dim)(en_inputs)
#create 3 stacked LSTM layers with the latent (hidden) dimension for the text summarizer
#LSTM 1
en_lstm1= LSTM(latent_dim, return_state=True, return_sequences=True)
en_outputs1, state_h1, state_c1= en_lstm1(en_embedding)
#LSTM2
en_lstm2= LSTM(latent_dim, return_state=True, return_sequences=True)
en_outputs2, state_h2, state_c2= en_lstm2(en_outputs1)
#LSTM3
en_lstm3= LSTM(latent_dim,return_sequences=True,return_state=True)
en_outputs3 , state_h3 , state_c3= en_lstm3(en_outputs2)
#encoder states
en_states= [state_h3, state_c3]
```
# Decoder
```
# Decoder.
dec_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_tr_words+1, latent_dim)
dec_embedding = dec_emb_layer(dec_inputs)
#initialize decoder's LSTM layer with the output states of encoder
dec_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
dec_outputs, *_ = dec_lstm(dec_embedding,initial_state=en_states)
```
# Attention Layer
```
#Attention layer
attention =Attention()
attn_out = attention([dec_outputs,en_outputs3])
#Concatenate the attention output with the decoder outputs
merge=Concatenate(axis=-1, name='concat_layer1')([dec_outputs,attn_out])
#Dense layer (output layer)
dec_dense = Dense(num_tr_words+1, activation='softmax')
dec_outputs = dec_dense(merge)
```
# Train the Model
```
#Model class and model summary for text Summarizer
model = Model([en_inputs, dec_inputs], dec_outputs)
model.summary()
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"] )
history = model.fit(
[en_in_data, dec_in_data],
dec_tr_data,
batch_size=512,
epochs=10,
validation_split=0.1,)
# save model
model.save('Text_Summarizer.h5')
print('Model Saved!')
from matplotlib import pyplot
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
max_text_len=30
max_summary_len=8
```
### Next, let’s build the dictionary to convert the index to word for target and source vocabulary:
# Inference Model
### Encoder Inference:
```
# encoder inference
latent_dim=500
#/content/gdrive/MyDrive/Text Summarizer/
#load the model
model = models.load_model("Text_Summarizer.h5")
#construct the encoder model from the output of layer 6, i.e. the last encoder LSTM layer
en_outputs,state_h_enc,state_c_enc = model.layers[6].output
en_states=[state_h_enc,state_c_enc]
#add input and state from the layer.
en_model = Model(model.input[0],[en_outputs]+en_states)
```
### Decoder Inference:
```
# decoder inference
#create Input object for hidden and cell state for decoder
#shape of layer with hidden or latent dimension
dec_state_input_h = Input(shape=(latent_dim,))
dec_state_input_c = Input(shape=(latent_dim,))
dec_hidden_state_input = Input(shape=(max_in_len,latent_dim))
# Get the embeddings and input layer from the model
dec_inputs = model.input[1]
dec_emb_layer = model.layers[5]
dec_lstm = model.layers[7]
dec_embedding= dec_emb_layer(dec_inputs)
#add input and initialize LSTM layer with encoder LSTM states.
dec_outputs2, state_h2, state_c2 = dec_lstm(dec_embedding, initial_state=[dec_state_input_h,dec_state_input_c])
```
### Attention Inference:
```
#Attention layer
attention = model.layers[8]
attn_out2 = attention([dec_outputs2,dec_hidden_state_input])
merge2 = Concatenate(axis=-1)([dec_outputs2, attn_out2])
```
### Dense layer
```
#Dense layer
dec_dense = model.layers[10]
dec_outputs2 = dec_dense(merge2)
# Finally define the Model Class
dec_model = Model(
[dec_inputs] + [dec_hidden_state_input,dec_state_input_h,dec_state_input_c],
[dec_outputs2] + [state_h2, state_c2])
#create a dictionary with a key as index and value as words.
reverse_target_word_index = tr_tokenizer.index_word
reverse_source_word_index = in_tokenizer.index_word
target_word_index = tr_tokenizer.word_index
def decode_sequence(input_seq):
# get the encoder output and states by passing the input sequence
en_out, en_h, en_c = en_model.predict(input_seq)
# target sequence with the initial word 'sos'
target_seq = np.zeros((1, 1))
target_seq[0, 0] = target_word_index['sos']
# stop the loop once the end of the text is reached
stop_condition = False
# append every predicted word in decoded sentence
decoded_sentence = ""
while not stop_condition:
# get predicted output, hidden and cell state.
output_words, dec_h, dec_c = dec_model.predict([target_seq] + [en_out, en_h, en_c])
# get the index and from the dictionary get the word for that index.
word_index = np.argmax(output_words[0, -1, :])
text_word = reverse_target_word_index[word_index]
decoded_sentence += text_word + " "
# Exit condition: either hit max length
# or find a stop word or last word.
if text_word == "eos" or len(decoded_sentence) > max_tr_len:
stop_condition = True
# update target sequence to the current word index.
target_seq = np.zeros((1, 1))
target_seq[0, 0] = word_index
en_h, en_c = dec_h, dec_c
# return the decoded sentence
return decoded_sentence
# inp_review = input("Enter : ")
inp_review = "Both the Google platforms provide a great cloud environment for any ML work to be deployed to. The features of them both are equally competent. Notebooks can be downloaded and later uploaded between the two. However, Colab comparatively provides greater flexibility to adjust the batch sizes.Saving or storing of models is easier on Colab since it allows them to be saved and stored to Google Drive. Also if one is using TensorFlow, using TPUs would be preferred on Colab. It is also faster than Kaggle. For a use case demanding more power and longer running processes, Colab is preferred."
print("Review :", inp_review)
inp_review = clean(inp_review, "inputs")
inp_review = ' '.join(inp_review)
inp_x = in_tokenizer.texts_to_sequences([inp_review])
inp_x = pad_sequences(inp_x, maxlen=max_in_len, padding='post')
summary = decode_sequence(inp_x.reshape(1, max_in_len))
if 'eos' in summary:
summary = summary.replace('eos', '')
print("\nPredicted summary:", summary);
print("\n")
```
```
%matplotlib inline
```
# Brainstorm CTF phantom tutorial dataset
Here we compute the evoked from raw for the Brainstorm CTF phantom
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf
References
----------
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
```
# Authors: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import fit_dipole
from mne.datasets.brainstorm import bst_phantom_ctf
from mne.io import read_raw_ctf
print(__doc__)
```
The data were collected with a CTF system at 2400 Hz.
```
data_path = bst_phantom_ctf.data_path()
# Switch to these to use the higher-SNR data:
# raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds')
# dip_freq = 7.
raw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds')
dip_freq = 23.
erm_path = op.join(data_path, 'emptyroom_20150709_01.ds')
raw = read_raw_ctf(raw_path, preload=True)
```
The sinusoidal signal is generated on channel HDAC006, so we can use
that to obtain precise timing.
```
sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')]
plt.figure()
plt.plot(times[times < 1.], sinusoid.T[times < 1.])
```
Let's create some events using this signal by thresholding the sinusoid.
```
events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp
events = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T
```
The CTF software compensation works reasonably well:
```
raw.plot()
```
But here we can get slightly better noise suppression, lower localization
bias, and a better dipole goodness of fit with spatio-temporal (tSSS)
Maxwell filtering:
```
raw.apply_gradient_compensation(0) # must un-do software compensation first
mf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.)
raw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs)
raw.plot()
```
Our choice of tmin and tmax should capture exactly one cycle, so
we can make the unusual choice of baselining using the entire epoch
when creating our evoked data. We also then crop to a single time point
(t=0) because this is a peak in our signal.
```
tmin = -0.5 / dip_freq
tmax = -tmin
epochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax,
baseline=(None, None))
evoked = epochs.average()
evoked.plot()
evoked.crop(0., 0.)
```
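As a quick check (a short sketch using the variables defined above), the epoch should span exactly one cycle of the driving frequency:
```
cycle = 1. / dip_freq
print('Epoch spans %0.4f s, one cycle is %0.4f s' % (tmax - tmin, cycle))
```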
Let's use a sphere head geometry model and let's see the coordinate
alignment and the sphere location.
```
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
mne.viz.plot_alignment(raw.info, subject='sample',
meg='helmet', bem=sphere, dig=True,
surfaces=['brain'])
del raw, epochs
```
To do a dipole fit, let's use the covariance provided by the empty room
recording.
```
raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0)
raw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg',
**mf_kwargs)
cov = mne.compute_raw_covariance(raw_erm)
del raw_erm
dip, residual = fit_dipole(evoked, cov, sphere)
```
Compare the actual position with the estimated one.
```
expected_pos = np.array([18., 0., 49.])
diff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2))
print('Actual pos: %s mm' % np.array_str(expected_pos, precision=1))
print('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1))
print('Difference: %0.1f mm' % diff)
print('Amplitude: %0.1f nAm' % (1e9 * dip.amplitude[0]))
print('GOF: %0.1f %%' % dip.gof[0])
```
# Ensemble Learning
## Initial Imports
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import confusion_matrix
from imblearn.metrics import classification_report_imbalanced
```
## Read the CSV and Perform Basic Data Cleaning
```
# Load the data
file_path = Path('lending_data.csv')
df = pd.read_csv(file_path)
# Preview the data
df.head()
# homeowner column is categorical, change to numerical so it can be scaled later on
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(df["homeowner"])
df["homeowner"] = label_encoder.transform(df["homeowner"])
df.head()
```
## Split the Data into Training and Testing
```
# Create our features
X = df.drop(columns="loan_status")
# Create our target
y = df["loan_status"].to_frame()
X.describe()
# Check the balance of our target values
y['loan_status'].value_counts()
# Split the X and y into X_train, X_test, y_train, y_test
# Create X_train, X_test, y_train, y_test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)
X_train
```
## Data Pre-Processing
Scale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_test`).
```
# Create the StandardScaler instance
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# Fit the Standard Scaler with the training data
# When fitting scaling functions, only train on the training dataset
X_scaler = scaler.fit(X_train)
# Scale the training and testing data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
```
## Ensemble Learners
In this section, you will compare two ensemble algorithms to determine which one results in the best performance. You will train a Balanced Random Forest Classifier and an Easy Ensemble Classifier. For each algorithm, be sure to complete the following steps:
1. Train the model using the training data.
2. Calculate the balanced accuracy score from sklearn.metrics.
3. Display the confusion matrix from sklearn.metrics.
4. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
5. For the Balanced Random Forest Classifier only, print the feature importance sorted in descending order (most important feature to least important) along with the feature score
Note: Use a random state of 1 for each algorithm to ensure consistency between tests
### Balanced Random Forest Classifier
```
# Resample the training data with the BalancedRandomForestClassifier
from imblearn.ensemble import BalancedRandomForestClassifier
brf = BalancedRandomForestClassifier(n_estimators=100, random_state=1) #100 trees
# tree-based ensembles split on feature thresholds, so feature scaling is not strictly required here
brf.fit(X_train, y_train)
# Calculated the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
y_pred = brf.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
# List the features sorted in descending order by feature importance
importances = brf.feature_importances_
sorted(zip(brf.feature_importances_, X.columns), reverse=True)
```
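To make the ranking easier to read, the importances can also be plotted. This is a minimal sketch using pandas and matplotlib (matplotlib is assumed to be available in this environment even though it is not imported above):
```
import matplotlib.pyplot as plt

# Visualize the features sorted by importance
feat_importances = pd.Series(importances, index=X.columns).sort_values()
feat_importances.plot(kind='barh', color='lightgreen', title='Feature Importance')
plt.show()
```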
### Easy Ensemble Classifier
```
# Train the Classifier
from imblearn.ensemble import EasyEnsembleClassifier
eec = EasyEnsembleClassifier(n_estimators=100, random_state=1)
eec.fit(X_train, y_train)
# Calculated the balanced accuracy score
y_pred = eec.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred))
```
### Final Questions
1. Which model had the best balanced accuracy score?
The Easy Ensemble Classifier has a slightly better score, but the difference is insignificant.
2. Which model had the best recall score?
Both models have the same recall score.
3. Which model had the best geometric mean score?
Both models have the same geometric mean score.
4. What are the top three features?
From Feature Importance, top 3 features are "Debt to Income", "Interest Rate" & "Borrower Income"
<a href="https://colab.research.google.com/github/olgOk/XanaduTraining/blob/master/Xanadu3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install pennylane
!pip install torch
!pip install tensorflow
!pip install sklearn
!pip install pennylane-qiskit
import pennylane as qml
from pennylane import numpy as np
dev = qml.device("default.qubit", wires=2)
@qml.qnode(device=dev)
def cos_func(x, w):
qml.RX(x, wires=0)
qml.templates.BasicEntanglerLayers(w, wires=range(2))
return qml.expval(qml.PauliZ(0))
layer = 4
weights = qml.init.basic_entangler_layers_uniform(layer, 2)
xs = np.linspace(-np.pi, 4*np.pi, requires_grad=False)
ys = np.cos(xs)
opt = qml.AdamOptimizer()
epochs = 10
for epoch in range(epochs):
for x, y in zip(xs, ys):
cost = lambda weights:(cos_func(x, weights) - y) ** 2
weights = opt.step(cost, weights)
ys_trained = [cos_func(x, weights) for x in xs]
import matplotlib.pyplot as plt
plt.figure()
plt.plot(xs, ys_trained, marker="o", label="cos(x)")
plt.legend()
plt.show()
```
## Preparing the W State
Using the Autograd interface, train a circuit to prepare the 3-qubit W state:
$|W\rangle = \frac{1}{\sqrt{3}}(|001\rangle + |010\rangle + |100\rangle)$
```
qubits = 3
w = np.array([0, 1, 1, 0, 1, 0, 0, 0]) / np.sqrt(3)
w_projector = w[:, np.newaxis] * w
w_decomp = qml.utils.decompose_hamiltonian(w_projector)
H = qml.Hamiltonian(*w_decomp)
def prepare_w(weights, wires):
qml.templates.StronglyEntanglingLayers(weights, wires=wires)
dev = qml.device("default.qubit", wires=qubits)
qnodes = qml.map(prepare_w, H.ops, dev)
w_overlap = qml.dot(H.coeffs, qnodes)
layers = 4
weights = qml.init.strong_ent_layers_uniform(layers, qubits)
opt = qml.RMSPropOptimizer()
epochs = 50
for i in range(epochs):
weights = opt.step(lambda weights: -w_overlap(weights), weights)
if i % 5 == 0:
print(i, w_overlap(weights))
output_overlap = w_overlap(weights)
output_state = np.round(dev.state, 3)
```
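A short check of the result: the overlap should approach 1, and the optimized state should have amplitudes close to 1/sqrt(3) ≈ 0.577 on the |001>, |010> and |100> components (up to a global phase):
```
print("Final overlap with |W>:", output_overlap)
print("Optimized state amplitudes:", output_state)
```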
## Quantum-based Optimization
```
dev = qml.device('default.qubit', wires=1)
@qml.qnode(dev)
def rotation(thetas):
qml.RX(1, wires=0)
qml.RZ(1, wires=0)
qml.RX(thetas[0], wires=0)
qml.RY(thetas[1], wires=0)
return qml.expval(qml.PauliZ(0))
opt = qml.RotoselectOptimizer()
import sklearn.datasets
data = sklearn.datasets.load_iris()
x = data["data"]
y = data["target"]
np.random.seed(1967)
x, y = zip(*np.random.permutation(list(zip(x, y))))
split = 125
x_train = x[:split]
x_test = x[split:]
y_train = y[:split]
y_test = y[split:]
```
# Regular Expressions
Regular expressions are text-matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!).
If you're familiar with Perl, you'll notice that the syntax for regular expressions are very similar in Python. We will be using the <code>re</code> module with Python for this lecture.
Let's get started!
## Searching for Patterns in Text
One of the most common uses for the re module is for finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:
```
import re
# List of patterns to search for
patterns = ['term1', 'term2']
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
for pattern in patterns:
print('Searching for "%s" in:\n "%s"\n' %(pattern,text))
#Check for match
if re.search(pattern,text):
print('Match was found. \n')
else:
print('No Match was found.\n')
```
Now we've seen that <code>re.search()</code> will take the pattern, scan the text, and then return a **Match** object. If no pattern is found, **None** is returned. To give a clearer picture of this match object, check out the cell below:
```
# List of patterns to search for
pattern = 'term1'
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
match = re.search(pattern,text)
type(match)
```
This **Match** object returned by the search() method is more than just a Boolean or None, it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:
```
# Show start of match
match.start()
# Show end
match.end()
```
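The match object has a few other useful methods as well, for example <code>group()</code> for the matched text and <code>span()</code> for the (start, end) tuple:
```
# Show the matched text and its (start, end) position
print(match.group())
print(match.span())
```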
## Split with regular expressions
Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
```
# Term to split on
split_term = '@'
phrase = 'What is the domain name of someone with the email: [email protected]'
# Split the phrase
re.split(split_term,phrase)
```
Note how <code>re.split()</code> returns a list with the term to split on removed and the terms in the list are a split up version of the string. Create a couple of more examples for yourself to make sure you understand!
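For instance, here is one more split example on a different separator (the phone number is made up):
```
# Split a phone-number-like string on the hyphen
re.split('-','555-123-4567')
```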
## Finding all instances of a pattern
You can use <code>re.findall()</code> to find all the instances of a pattern in a string. For example:
```
# Returns a list of all matches
re.findall('match','test phrase match is in middle')
```
## re Pattern Syntax
This will be the bulk of this lecture on using re with Python. Regular expressions support a huge variety of patterns beyond just simply finding where a single string occurred.
We can use *metacharacters* along with re to find specific types of patterns.
Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:
```
def multi_re_find(patterns,phrase):
'''
Takes in a list of regex patterns
Prints a list of all matches
'''
for pattern in patterns:
print('Searching the phrase using the re check: %r' %(pattern))
print(re.findall(pattern,phrase))
print('\n')
```
### Repetition Syntax
There are five ways to express repetition in a pattern:
1. A pattern followed by the meta-character <code>*</code> is repeated zero or more times.
2. Replace the <code>*</code> with <code>+</code> and the pattern must appear at least once.
3. Using <code>?</code> means the pattern appears zero or one time.
4. For a specific number of occurrences, use <code>{m}</code> after the pattern, where **m** is replaced with the number of times the pattern should repeat.
5. Use <code>{m,n}</code> where **m** is the minimum number of repetitions and **n** is the maximum. Leaving out **n** <code>{m,}</code> means the value appears at least **m** times, with no maximum.
Now we will see an example of each of these using our multi_re_find function:
```
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ 'sd*', # s followed by zero or more d's
'sd+', # s followed by one or more d's
'sd?', # s followed by zero or one d's
'sd{3}', # s followed by three d's
'sd{2,3}', # s followed by two to three d's
]
multi_re_find(test_patterns,test_phrase)
```
## Character Sets
Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input <code>[ab]</code> searches for occurrences of either **a** or **b**.
Let's see some examples:
```
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = ['[sd]', # either s or d
's[sd]+'] # s followed by one or more s or d
multi_re_find(test_patterns,test_phrase)
```
It makes sense that the first input <code>[sd]</code> returns every instance of s or d. Also, the second input <code>s[sd]+</code> returns any full strings that begin with an s and continue with s or d characters until another character is reached.
## Exclusion
We can use <code>^</code> to exclude terms by incorporating it into the bracket syntax notation. For example: <code>[^...]</code> will match any single character not in the brackets. Let's see some examples:
```
test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
```
Use <code>[^!.? ]</code> to check for matches that are not a !,.,?, or space. Add a <code>+</code> to check that the match appears at least once. This basically translates into finding the words.
```
re.findall('[^!.? ]+',test_phrase)
```
## Character Ranges
As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is <code>[start-end]</code>.
Common use cases are to search for a specific range of letters in the alphabet. For instance, <code>[a-f]</code> would return matches with any occurrence of letters between a and f.
Let's walk through some examples:
```
test_phrase = 'This is an example sentence. Lets see if we can find some letters.'
test_patterns=['[a-z]+', # sequences of lower case letters
'[A-Z]+', # sequences of upper case letters
'[a-zA-Z]+', # sequences of lower or upper case letters
'[A-Z][a-z]+'] # one upper case letter followed by lower case letters
multi_re_find(test_patterns,test_phrase)
```
## Escape Codes
You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits, whitespace, and more. For example:
<table border="1" class="docutils">
<colgroup>
<col width="14%" />
<col width="86%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Code</th>
<th class="head">Meaning</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\d</span></tt></td>
<td>a digit</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\D</span></tt></td>
<td>a non-digit</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\s</span></tt></td>
<td>whitespace (tab, space, newline, etc.)</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\S</span></tt></td>
<td>non-whitespace</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\w</span></tt></td>
<td>alphanumeric</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\W</span></tt></td>
<td>non-alphanumeric</td>
</tr>
</tbody>
</table>
Escapes are indicated by prefixing the character with a backslash <code>\</code>. Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with <code>r</code>, eliminates this problem and maintains readability.
Personally, I think this use of <code>r</code> to escape a backslash is probably one of the things that block someone who is not familiar with regex in Python from being able to read regex code at first. Hopefully after seeing these examples this syntax will become clear.
```
test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'
test_patterns=[ r'\d+', # sequence of digits
r'\D+', # sequence of non-digits
r'\s+', # sequence of whitespace
r'\S+', # sequence of non-whitespace
r'\w+', # alphanumeric characters
r'\W+', # non-alphanumeric
]
multi_re_find(test_patterns,test_phrase)
```
## Conclusion
You should now have a solid understanding of how to use the regular expression module in Python. There are a ton of more special character instances, but it would be unreasonable to go through every single use case. Instead take a look at the full [documentation](https://docs.python.org/3/library/re.html#regular-expression-syntax) if you ever need to look up a particular pattern.
You can also check out the nice summary tables at this [source](http://www.tutorialspoint.com/python/python_reg_expressions.htm).
Good job!
# PTN Template
This notebook serves as a template for single-dataset PTN experiments.
It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where),
but it is intended to be executed as part of a *papermill.py script. See any of the
experiments with a papermill script to get started with that workflow.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Required Parameters
These are allowed parameters, not defaults
Each of these values need to be present in the injected parameters (the notebook will raise an exception if they are not present)
Papermill uses the cell tag "parameters" to inject the real parameters below this cell.
Enable tags to see what I mean
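For context, here is a minimal sketch of how a driver script might inject parameters with papermill. The file names and parameter values are illustrative only (and only a few of the required parameters are shown), not the actual *papermill.py script:
```
import papermill as pm

pm.execute_notebook(
    "ptn_template.ipynb",        # this template (hypothetical file name)
    "ptn_template_out.ipynb",    # executed copy with the parameters injected
    parameters=dict(experiment_name="example_run", lr=0.0001, device="cuda", seed=1337),
)
```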
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"labels_source",
"labels_target",
"domains_source",
"domains_target",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"n_shot",
"n_way",
"n_query",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_transforms_source",
"x_transforms_target",
"episode_transforms_source",
"episode_transforms_target",
"pickle_name",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"torch_default_dtype"
}
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["num_examples_per_domain_per_label_source"]=100
standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 100
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["x_transforms_source"] = ["unit_power"]
standalone_parameters["x_transforms_target"] = ["unit_power"]
standalone_parameters["episode_transforms_source"] = []
standalone_parameters["episode_transforms_target"] = []
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# uncomment for CORES dataset
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
standalone_parameters["labels_source"] = ALL_NODES
standalone_parameters["labels_target"] = ALL_NODES
standalone_parameters["domains_source"] = [1]
standalone_parameters["domains_target"] = [2,3,4,5]
standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl"
# Uncomment these for ORACLE dataset
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS
# standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS
# standalone_parameters["domains_source"] = [8,20, 38,50]
# standalone_parameters["domains_target"] = [14, 26, 32, 44, 56]
# standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl"
# standalone_parameters["num_examples_per_domain_per_label_source"]=1000
# standalone_parameters["num_examples_per_domain_per_label_target"]=1000
# Uncomment these for Metahan dataset
# standalone_parameters["labels_source"] = list(range(19))
# standalone_parameters["labels_target"] = list(range(19))
# standalone_parameters["domains_source"] = [0]
# standalone_parameters["domains_target"] = [1]
# standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl"
# standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# standalone_parameters["num_examples_per_domain_per_label_source"]=200
# standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# Parameters
parameters = {
"experiment_name": "tuned_1v2:oracle.run2",
"device": "cuda",
"lr": 0.0001,
"labels_source": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"labels_target": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"episode_transforms_source": [],
"episode_transforms_target": [],
"domains_source": [8, 32, 50],
"domains_target": [14, 20, 26, 38, 44],
"num_examples_per_domain_per_label_source": -1,
"num_examples_per_domain_per_label_target": -1,
"n_shot": 3,
"n_way": 16,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"pickle_name": "oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
"x_transforms_source": ["unit_mag"],
"x_transforms_target": ["unit_mag"],
"dataset_seed": 500,
"seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
# (This is due to the randomized initial weights)
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
###################################
# Build the dataset
###################################
if p.x_transforms_source == []: x_transform_source = None
else: x_transform_source = get_chained_transform(p.x_transforms_source)
if p.x_transforms_target == []: x_transform_target = None
else: x_transform_target = get_chained_transform(p.x_transforms_target)
if p.episode_transforms_source == []: episode_transform_source = None
else: raise Exception("episode_transform_source not implemented")
if p.episode_transforms_target == []: episode_transform_target = None
else: raise Exception("episode_transform_target not implemented")
eaf_source = Episodic_Accessor_Factory(
labels=p.labels_source,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_source,
example_transform_func=episode_transform_source,
)
train_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()
eaf_target = Episodic_Accessor_Factory(
labels=p.labels_target,
domains=p.domains_target,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_target,
example_transform_func=episode_transform_target,
)
train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
# Some quick unit tests on the data
from steves_utils.transforms import get_average_power, get_average_magnitude
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source))
assert q_x.dtype == eval(p.torch_default_dtype)
assert s_x.dtype == eval(p.torch_default_dtype)
print("Visually inspect these to see if they line up with expected values given the transforms")
print('x_transforms_source', p.x_transforms_source)
print('x_transforms_target', p.x_transforms_target)
print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy()))
print("Average power, source:", get_average_power(q_x[0].numpy()))
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))
print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy()))
print("Average power, target:", get_average_power(q_x[0].numpy()))
###################################
# Build the model
###################################
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy indicating whether the domain was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
# Recommender Systems 2018/19
### Practice 4 - Similarity with Cython
### Cython is a superset of Python, allowing you to use C-like operations and import C code. Cython files (.pyx) are compiled and support static typing.
```
import time
import numpy as np
```
### Let's implement something simple
```
def isPrime(n):
i = 2
# Usually you loop up to sqrt(n)
while i < n:
if n % i == 0:
return False
i += 1
return True
print("Is prime 2? {}".format(isPrime(2)))
print("Is prime 3? {}".format(isPrime(3)))
print("Is prime 5? {}".format(isPrime(5)))
print("Is prime 15? {}".format(isPrime(15)))
print("Is prime 20? {}".format(isPrime(20)))
start_time = time.time()
result = isPrime(80000023)
print("Is Prime 80000023? {}, time required {:.2f} sec".format(result, time.time()-start_time))
```
#### Load Cython magic command, this takes care of the compilation step. If you are writing code outside Jupyter you'll have to compile using other tools
```
%load_ext Cython
```
#### Declare Cython function, paste the same code as before. The function will be compiled and then executed with a Python interface
```
%%cython
def isPrime(n):
i = 2
# Usually you loop up to sqrt(n)
while i < n:
if n % i == 0:
return False
i += 1
return True
start_time = time.time()
result = isPrime(80000023)
print("Is Prime 80000023? {}, time required {:.2f} sec".format(result, time.time()-start_time))
```
#### As you can see, just by compiling the same code we got some improvement.
#### To get a serious speedup, we have to use some static typing
```
%%cython
# Declare the type of the arguments
def isPrime(long n):
# Declare index of for loop
cdef long i
i = 2
# Usually you loop up to sqrt(n)
while i < n:
if n % i == 0:
return False
i += 1
return True
start_time = time.time()
result = isPrime(80000023)
print("Is Prime 80000023? {}, time required {:.2f} sec".format(result, time.time()-start_time))
```
#### Cython code with two type declarations, for n and i, runs 50x faster than Python
#### Main benefits of Cython:
* Compiled, no interpreter
* Static typing, no overhead
* Fast loops, no need to vectorize. Vectorization sometimes performs lots of useless operations
* NumPy, which is fast in Python, often becomes slooooow compared to carefully written Cython code
### Similarity with Cython
#### Load the usual data.
```
from urllib.request import urlretrieve
import zipfile
# skip the download
#urlretrieve ("http://files.grouplens.org/datasets/movielens/ml-10m.zip", "data/Movielens_10M/movielens_10m.zip")
dataFile = zipfile.ZipFile("data/Movielens_10M/movielens_10m.zip")
URM_path = dataFile.extract("ml-10M100K/ratings.dat", path = "data/Movielens_10M")
URM_file = open(URM_path, 'r')
def rowSplit (rowString):
split = rowString.split("::")
split[3] = split[3].replace("\n","")
split[0] = int(split[0])
split[1] = int(split[1])
split[2] = float(split[2])
split[3] = int(split[3])
result = tuple(split)
return result
URM_file.seek(0)
URM_tuples = []
for line in URM_file:
URM_tuples.append(rowSplit (line))
userList, itemList, ratingList, timestampList = zip(*URM_tuples)
userList = list(userList)
itemList = list(itemList)
ratingList = list(ratingList)
timestampList = list(timestampList)
import scipy.sparse as sps
URM_all = sps.coo_matrix((ratingList, (userList, itemList)))
URM_all = URM_all.tocsr()
URM_all
from Notebooks_utils.data_splitter import train_test_holdout
URM_train, URM_test = train_test_holdout(URM_all, train_perc = 0.8)
URM_train
```
#### Since we cannot store in memory the whole similarity, we compute it one row at a time
```
itemIndex=1
item_ratings = URM_train[:,itemIndex]
item_ratings = item_ratings.toarray().squeeze()
item_ratings.shape
this_item_weights = URM_train.T.dot(item_ratings)
this_item_weights.shape
```
#### Once we have the scores for that row, we get the TopK
```
k=10
top_k_idx = np.argsort(this_item_weights) [-k:]
top_k_idx
import scipy.sparse as sps
# Function hiding some conversion checks
def check_matrix(X, format='csc', dtype=np.float32):
if format == 'csc' and not isinstance(X, sps.csc_matrix):
return X.tocsc().astype(dtype)
elif format == 'csr' and not isinstance(X, sps.csr_matrix):
return X.tocsr().astype(dtype)
elif format == 'coo' and not isinstance(X, sps.coo_matrix):
return X.tocoo().astype(dtype)
elif format == 'dok' and not isinstance(X, sps.dok_matrix):
return X.todok().astype(dtype)
elif format == 'bsr' and not isinstance(X, sps.bsr_matrix):
return X.tobsr().astype(dtype)
elif format == 'dia' and not isinstance(X, sps.dia_matrix):
return X.todia().astype(dtype)
elif format == 'lil' and not isinstance(X, sps.lil_matrix):
return X.tolil().astype(dtype)
else:
return X.astype(dtype)
```
#### Create a Basic Collaborative filtering recommender using only cosine similarity
```
class BasicItemKNN_CF_Recommender(object):
""" ItemKNN recommender with cosine similarity and no shrinkage"""
    def __init__(self, URM):
        self.dataset = URM
        # Keep a CSR copy for the row-wise operations in recommend() and filter_seen()
        self.URM = check_matrix(URM, 'csr')
def compute_similarity(self, URM):
# We explore the matrix column-wise
URM = check_matrix(URM, 'csc')
values = []
rows = []
cols = []
start_time = time.time()
processedItems = 0
# Compute all similarities for each item using vectorization
for itemIndex in range(URM.shape[0]):
processedItems += 1
if processedItems % 100==0:
itemPerSec = processedItems/(time.time()-start_time)
print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format(
processedItems, itemPerSec, URM.shape[0]/itemPerSec/60))
# All ratings for a given item
item_ratings = URM[:,itemIndex]
item_ratings = item_ratings.toarray().squeeze()
# Compute item similarities
            this_item_weights = URM.T.dot(item_ratings)
# Sort indices and select TopK
top_k_idx = np.argsort(this_item_weights) [-self.k:]
# Incrementally build sparse matrix
values.extend(this_item_weights[top_k_idx])
rows.extend(np.arange(URM.shape[0])[top_k_idx])
cols.extend(np.ones(self.k) * itemIndex)
self.W_sparse = sps.csc_matrix((values, (rows, cols)),
shape=(URM.shape[0], URM.shape[0]),
dtype=np.float32)
def fit(self, k=50, shrinkage=100):
self.k = k
self.shrinkage = shrinkage
        # compute_similarity() stores the result in self.W_sparse; convert it to CSR for fast row access
        self.compute_similarity(self.dataset)
        self.W_sparse = check_matrix(self.W_sparse, 'csr')
def recommend(self, user_id, at=None, exclude_seen=True):
# compute the scores using the dot product
user_profile = self.URM[user_id]
scores = user_profile.dot(self.W_sparse).toarray().ravel()
if exclude_seen:
scores = self.filter_seen(user_id, scores)
# rank items
ranking = scores.argsort()[::-1]
return ranking[:at]
def filter_seen(self, user_id, scores):
start_pos = self.URM.indptr[user_id]
end_pos = self.URM.indptr[user_id+1]
user_profile = self.URM.indices[start_pos:end_pos]
scores[user_profile] = -np.inf
return scores
```
#### Let's isolate the compute_similarity function
```
def compute_similarity(URM, k=100):
# We explore the matrix column-wise
URM = check_matrix(URM, 'csc')
n_items = URM.shape[0]
values = []
rows = []
cols = []
start_time = time.time()
processedItems = 0
# Compute all similarities for each item using vectorization
# for itemIndex in range(n_items):
for itemIndex in range(1000):
processedItems += 1
if processedItems % 100==0:
itemPerSec = processedItems/(time.time()-start_time)
print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format(
processedItems, itemPerSec, n_items/itemPerSec/60))
# All ratings for a given item
item_ratings = URM[:,itemIndex]
item_ratings = item_ratings.toarray().squeeze()
# Compute item similarities
this_item_weights = URM.T.dot(item_ratings)
# Sort indices and select TopK
top_k_idx = np.argsort(this_item_weights) [-k:]
# Incrementally build sparse matrix
values.extend(this_item_weights[top_k_idx])
rows.extend(np.arange(URM.shape[0])[top_k_idx])
cols.extend(np.ones(k) * itemIndex)
W_sparse = sps.csc_matrix((values, (rows, cols)),
shape=(n_items, n_items),
dtype=np.float32)
return W_sparse
compute_similarity(URM_train)
```
### We see that computing the similarity takes more or less 15 minutes
### Now we use the same identical code, but we compile it
```
%%cython
import time
import numpy as np
import scipy.sparse as sps
def compute_similarity_compiled(URM, k=100):
# We explore the matrix column-wise
URM = URM.tocsc()
n_items = URM.shape[0]
values = []
rows = []
cols = []
start_time = time.time()
processedItems = 0
# Compute all similarities for each item using vectorization
# for itemIndex in range(n_items):
for itemIndex in range(1000):
processedItems += 1
if processedItems % 100==0:
itemPerSec = processedItems/(time.time()-start_time)
print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format(
processedItems, itemPerSec, n_items/itemPerSec/60))
# All ratings for a given item
item_ratings = URM[:,itemIndex]
item_ratings = item_ratings.toarray().squeeze()
# Compute item similarities
this_item_weights = URM.T.dot(item_ratings)
# Sort indices and select TopK
top_k_idx = np.argsort(this_item_weights) [-k:]
# Incrementally build sparse matrix
values.extend(this_item_weights[top_k_idx])
rows.extend(np.arange(URM.shape[0])[top_k_idx])
cols.extend(np.ones(k) * itemIndex)
W_sparse = sps.csc_matrix((values, (rows, cols)),
shape=(n_items, n_items),
dtype=np.float32)
return W_sparse
compute_similarity_compiled(URM_train)
```
#### As opposed to the previous example, compilation by itself is not very helpful. Why?
#### Because the compiler is just porting to C all the operations that the Python interpreter would have to perform, dynamic typing included
### Now let's try to add some types
```
%%cython
import time
import numpy as np
import scipy.sparse as sps
cimport numpy as np
def compute_similarity_compiled(URM, int k=100):
cdef int itemIndex, processedItems
# We use the numpy syntax, allowing us to perform vectorized operations
cdef np.ndarray[double, ndim=1] item_ratings, this_item_weights
cdef np.ndarray[long, ndim=1] top_k_idx
# We explore the matrix column-wise
URM = URM.tocsc()
n_items = URM.shape[0]
values = []
rows = []
cols = []
start_time = time.time()
processedItems = 0
# Compute all similarities for each item using vectorization
# for itemIndex in range(n_items):
for itemIndex in range(1000):
processedItems += 1
if processedItems % 100==0:
itemPerSec = processedItems/(time.time()-start_time)
print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format(
processedItems, itemPerSec, n_items/itemPerSec/60))
# All ratings for a given item
item_ratings = URM[:,itemIndex].toarray().squeeze()
# Compute item similarities
this_item_weights = URM.T.dot(item_ratings)
# Sort indices and select TopK
top_k_idx = np.argsort(this_item_weights) [-k:]
# Incrementally build sparse matrix
values.extend(this_item_weights[top_k_idx])
rows.extend(np.arange(URM.shape[0])[top_k_idx])
cols.extend(np.ones(k) * itemIndex)
W_sparse = sps.csc_matrix((values, (rows, cols)),
shape=(n_items, n_items),
dtype=np.float32)
return W_sparse
compute_similarity_compiled(URM_train)
```
### Still no luck! Why?
### There are a few reasons:
* We are getting the data from the sparse matrix using its interface, which is SLOW
* We are transforming sparse data into a dense array, which is SLOW
* We are performing a dot product against a dense vector
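#### For instance (just a sketch, assuming a `scipy.sparse` matrix like our URM), one could read a column directly from the raw CSC arrays instead of going through the slow fancy-indexing interface:
```python
import scipy.sparse as sps

URM_csc = sps.csc_matrix(URM_train)

itemIndex = 1
start, end = URM_csc.indptr[itemIndex], URM_csc.indptr[itemIndex + 1]
users_of_item = URM_csc.indices[start:end]    # row (user) indices of the non-zeros in this column
ratings_of_item = URM_csc.data[start:end]     # the corresponding rating values
```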
#### You could find a workaround along those lines... here we do something different
### Proposed solution
### Change the algorithm!
### Instead of performing the dot product, let's implement something that computes the similarity using sparse data directly
### We loop through the data and update selectively the similarity matrix cells.
### Underlying idea:
* When I select an item I can know which users rated it
* Instead of looping through the other items trying to find common users, I use the URM to find which other items that user rated
* The user I am considering will be common between the two, so I increment the similarity of the two items
* Instead of following the path item1 -> loop item2 -> find user, I go item1 -> loop user -> loop item2 (see the short sketch below)
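#### Here is a pure-Python sketch of that idea for a single item (implicit feedback, i.e. just counting co-ratings; the variable names are my own):
```python
import numpy as np
import scipy.sparse as sps

URM_csc = sps.csc_matrix(URM_train)   # fast column (item) access
URM_csr = sps.csr_matrix(URM_train)   # fast row (user) access
n_items = URM_train.shape[1]

item_id = 1
similarity = np.zeros(n_items)

# Users that rated item_id
users_of_item = URM_csc.indices[URM_csc.indptr[item_id]:URM_csc.indptr[item_id + 1]]
for user_id in users_of_item:
    # Items rated by that user: each of them shares this user with item_id
    items_of_user = URM_csr.indices[URM_csr.indptr[user_id]:URM_csr.indptr[user_id + 1]]
    similarity[items_of_user] += 1.0

similarity[item_id] = 0.0   # ignore the diagonal
```
#### The Cython class further below follows exactly this item -> user -> item path, just with static types and preallocated buffers.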
```
data_matrix = np.array([[1,1,0,1],[0,1,1,1],[1,0,1,0]])
data_matrix = sps.csc_matrix(data_matrix)
data_matrix.todense()
```
### Example: Compute the similarities for item 1
#### Step 1: get users that rated item 1
```
users_rated_item = data_matrix[:,1]
users_rated_item.indices
```
#### Step 2: count how many times those users rated other items
```
item_similarity = data_matrix[users_rated_item.indices].sum(axis = 0)
np.array(item_similarity).squeeze()
```
#### Verify our result against the common method. We can see that the similarity values for col 1 are identical
```
similarity_matrix_product = data_matrix.T.dot(data_matrix)
similarity_matrix_product.toarray()[:,1]
# The following code works for implicit feedback only
def compute_similarity_new_algorithm(URM, k=100):
# We explore the matrix column-wise
URM = check_matrix(URM, 'csc')
URM.data = np.ones_like(URM.data)
n_items = URM.shape[0]
values = []
rows = []
cols = []
start_time = time.time()
processedItems = 0
# Compute all similarities for each item using vectorization
# for itemIndex in range(n_items):
for itemIndex in range(1000):
processedItems += 1
if processedItems % 100==0:
itemPerSec = processedItems/(time.time()-start_time)
print("Similarity item {}, {:.2f} item/sec, required time {:.2f} min".format(
processedItems, itemPerSec, n_items/itemPerSec/60))
# All ratings for a given item
users_rated_item = URM.indices[URM.indptr[itemIndex]:URM.indptr[itemIndex+1]]
# Compute item similarities
this_item_weights = URM[users_rated_item].sum(axis = 0)
this_item_weights = np.array(this_item_weights).squeeze()
# Sort indices and select TopK
top_k_idx = np.argsort(this_item_weights) [-k:]
# Incrementally build sparse matrix
values.extend(this_item_weights[top_k_idx])
rows.extend(np.arange(URM.shape[0])[top_k_idx])
cols.extend(np.ones(k) * itemIndex)
W_sparse = sps.csc_matrix((values, (rows, cols)),
shape=(n_items, n_items),
dtype=np.float32)
return W_sparse
compute_similarity_new_algorithm(URM_train)
```
#### Slower, but expected: dot-product operations are implemented very efficiently, while here we are using an indirect approach
### Now let's write this algorithm in Cython
```
%%cython
import time
import numpy as np
cimport numpy as np
from cpython.array cimport array, clone
import scipy.sparse as sps
cdef class Cosine_Similarity:
cdef int TopK
cdef long n_items
# Arrays containing the sparse data
cdef int[:] user_to_item_row_ptr, user_to_item_cols
cdef int[:] item_to_user_rows, item_to_user_col_ptr
cdef double[:] user_to_item_data, item_to_user_data
# In case you select no TopK
cdef double[:,:] W_dense
def __init__(self, URM, TopK = 100):
"""
Dataset must be a matrix with items as columns
:param dataset:
:param TopK:
"""
super(Cosine_Similarity, self).__init__()
self.n_items = URM.shape[1]
self.TopK = min(TopK, self.n_items)
URM = URM.tocsr()
self.user_to_item_row_ptr = URM.indptr
self.user_to_item_cols = URM.indices
self.user_to_item_data = np.array(URM.data, dtype=np.float64)
URM = URM.tocsc()
self.item_to_user_rows = URM.indices
self.item_to_user_col_ptr = URM.indptr
self.item_to_user_data = np.array(URM.data, dtype=np.float64)
if self.TopK == 0:
self.W_dense = np.zeros((self.n_items,self.n_items))
cdef int[:] getUsersThatRatedItem(self, long item_id):
return self.item_to_user_rows[self.item_to_user_col_ptr[item_id]:self.item_to_user_col_ptr[item_id+1]]
cdef int[:] getItemsRatedByUser(self, long user_id):
return self.user_to_item_cols[self.user_to_item_row_ptr[user_id]:self.user_to_item_row_ptr[user_id+1]]
cdef double[:] computeItemSimilarities(self, long item_id_input):
"""
For every item the cosine similarity against other items depends on whether they have users in common.
The more common users the higher the similarity.
The basic implementation is:
- Select the first item
- Loop through all other items
-- Given the two items, get the users they have in common
-- Update the similarity considering all common users
That is VERY slow due to the common user part, in which a long data structure is looped multiple times.
A better way is to use the data structure in a different way skipping the search part, getting directly
the information we need.
The implementation here used is:
- Select the first item
- Initialize a zero valued array for the similarities
- Get the users who rated the first item
- Loop through the users
-- Given a user, get the items he rated (second item)
-- Update the similarity of the items he rated
"""
# Create template used to initialize an array with zeros
# Much faster than np.zeros(self.n_items)
cdef array[double] template_zero = array('d')
cdef array[double] result = clone(template_zero, self.n_items, zero=True)
cdef long user_index, user_id, item_index, item_id_second
cdef int[:] users_that_rated_item = self.getUsersThatRatedItem(item_id_input)
cdef int[:] items_rated_by_user
cdef double rating_item_input, rating_item_second
# Get users that rated the items
for user_index in range(len(users_that_rated_item)):
user_id = users_that_rated_item[user_index]
rating_item_input = self.item_to_user_data[self.item_to_user_col_ptr[item_id_input]+user_index]
# Get all items rated by that user
items_rated_by_user = self.getItemsRatedByUser(user_id)
for item_index in range(len(items_rated_by_user)):
item_id_second = items_rated_by_user[item_index]
# Do not compute the similarity on the diagonal
if item_id_second != item_id_input:
                    # Increment similarity
rating_item_second = self.user_to_item_data[self.user_to_item_row_ptr[user_id]+item_index]
result[item_id_second] += rating_item_input*rating_item_second
return result
def compute_similarity(self):
cdef int itemIndex, innerItemIndex
cdef long long topKItemIndex
cdef long long[:] top_k_idx
        # Declare numpy data type to use vector indexing and simplify the topK selection code
cdef np.ndarray[long, ndim=1] top_k_partition, top_k_partition_sorting
cdef np.ndarray[np.float64_t, ndim=1] this_item_weights_np
#cdef long[:] top_k_idx
cdef double[:] this_item_weights
cdef long processedItems = 0
# Data structure to incrementally build sparse matrix
# Preinitialize max possible length
cdef double[:] values = np.zeros((self.n_items*self.TopK))
cdef int[:] rows = np.zeros((self.n_items*self.TopK,), dtype=np.int32)
cdef int[:] cols = np.zeros((self.n_items*self.TopK,), dtype=np.int32)
cdef long sparse_data_pointer = 0
start_time = time.time()
# Compute all similarities for each item
for itemIndex in range(self.n_items):
processedItems += 1
if processedItems % 10000==0 or processedItems==self.n_items:
itemPerSec = processedItems/(time.time()-start_time)
print("Similarity item {} ( {:2.0f} % ), {:.2f} item/sec, required time {:.2f} min".format(
processedItems, processedItems*1.0/self.n_items*100, itemPerSec, (self.n_items-processedItems) / itemPerSec / 60))
this_item_weights = self.computeItemSimilarities(itemIndex)
if self.TopK == 0:
for innerItemIndex in range(self.n_items):
self.W_dense[innerItemIndex,itemIndex] = this_item_weights[innerItemIndex]
else:
# Sort indices and select TopK
# Using numpy implies some overhead, unfortunately the plain C qsort function is even slower
# top_k_idx = np.argsort(this_item_weights) [-self.TopK:]
                # Sorting is done in three steps. Faster than plain np.argsort for a higher number of items
# because we avoid sorting elements we already know we don't care about
# - Partition the data to extract the set of TopK items, this set is unsorted
# - Sort only the TopK items, discarding the rest
# - Get the original item index
this_item_weights_np = - np.array(this_item_weights)
# Get the unordered set of topK items
top_k_partition = np.argpartition(this_item_weights_np, self.TopK-1)[0:self.TopK]
# Sort only the elements in the partition
top_k_partition_sorting = np.argsort(this_item_weights_np[top_k_partition])
# Get original index
top_k_idx = top_k_partition[top_k_partition_sorting]
# Incrementally build sparse matrix
for innerItemIndex in range(len(top_k_idx)):
topKItemIndex = top_k_idx[innerItemIndex]
values[sparse_data_pointer] = this_item_weights[topKItemIndex]
rows[sparse_data_pointer] = topKItemIndex
cols[sparse_data_pointer] = itemIndex
sparse_data_pointer += 1
if self.TopK == 0:
return np.array(self.W_dense)
else:
values = np.array(values[0:sparse_data_pointer])
rows = np.array(rows[0:sparse_data_pointer])
cols = np.array(cols[0:sparse_data_pointer])
W_sparse = sps.csr_matrix((values, (rows, cols)),
shape=(self.n_items, self.n_items),
dtype=np.float32)
return W_sparse
cosine_cython = Cosine_Similarity(URM_train, TopK=100)
start_time = time.time()
cosine_cython.compute_similarity()
print("Similarity computed in {:.2f} seconds".format(time.time()-start_time))
```
### Better... much better. There are a few other things you could do, but at this point it is not worth the effort
## How to use Cython outside a notebook
### Step1: Create a .pyx file and write your code
### Step2: Create a compilation script "compileCython.py" with the following content
```
# This code will not run in a notebook cell
try:
from setuptools import setup
from setuptools import Extension
except ImportError:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy
import sys
import re
if len(sys.argv) != 4:
    raise ValueError("Wrong number of parameters received. Expected 4, got {}".format(len(sys.argv)))
# Get the name of the file to compile
fileToCompile = sys.argv[1]
# Remove the argument from sys argv in order for it to contain only what setup needs
del sys.argv[1]
extensionName = re.sub("\.pyx", "", fileToCompile)
ext_modules = Extension(extensionName,
[fileToCompile],
extra_compile_args=['-O3'],
include_dirs=[numpy.get_include(),],
)
setup(
cmdclass={'build_ext': build_ext},
ext_modules=[ext_modules]
)
```
### Step3: Compile your code with the following command
python compileCython.py Cosine_Similarity_Cython.pyx build_ext --inplace
### Step4: Generate the cython report and look for "yellow lines". The report is an .html file which shows how many operations are necessary to translate each Python operation into C code. If a line is white, it has a direct C translation. If it is yellow, it will require many indirect steps that will slow down execution. Some of those steps may be inevitable, some may be removed via static typing.
### IMPORTANT: white does not mean fast!! If a system call is involved that part might be slow anyway.
cython -a Cosine_Similarity_Cython.pyx
### Step5: Add static types and C functions to remove "yellow" lines.
#### If you use a variable only as a C object, use primitive types
cdef int namevar
cdef double namevar
cdef float namevar
#### If you call a function only within C code, use a specific declaration "cdef"
cdef function_name(self, int param1, double param2):
...
## Step6: Iterate steps 4 and 5 until you are satisfied with how clean your code is, then compile. An example of non-optimized code can be found in the source folder of this notebook with the _SLOW suffix
## Step7: The compilation generates a file whose name is something like "Cosine_Similarity_Cython.cpython-36m-x86_64-linux-gnu.so", which tells you the source file, the architecture it is compiled for and the OS
## Step8: Import and use the compiled file as if it were a python class
```
from Base.Simialrity.Cython.Cosine_Similarity_Cython import Cosine_Similarity
cosine_cython = Cosine_Similarity(URM_train, TopK=100)
start_time = time.time()
cosine_cython.compute_similarity()
print("Similarity computed in {:.2f} seconds".format(time.time()-start_time))
```
# 15 PDEs: Solution with Time Stepping
## Heat Equation
The **heat equation** can be derived from Fourier's law and energy conservation (see the [lecture notes on the heat equation (PDF)](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/15_PDEs/15_PDEs_LectureNotes_HeatEquation.pdf))
$$
\frac{\partial T(\mathbf{x}, t)}{\partial t} = \frac{K}{C\rho} \nabla^2 T(\mathbf{x}, t),
$$
## Problem: insulated metal bar (1D heat equation)
A metal bar of length $L$ is insulated along its length and held at 0ºC at its ends. Initially, the whole bar is at 100ºC. Calculate $T(x, t)$ for $t>0$.
### Analytic solution
Solve by separation of variables and power series: The general solution that obeys the boundary conditions $T(0, t) = T(L, t) = 0$ is
$$
T(x, t) = \sum_{n=1}^{+\infty} A_n \sin(k_n x)\, \exp\left(-\frac{k_n^2 K t}{C\rho}\right), \quad k_n = \frac{n\pi}{L}
$$
The specific solution that satisfies $T(x, 0) = T_0 = 100^\circ\text{C}$ leads to $A_n = 4 T_0/n\pi$ for $n$ odd:
$$
T(x, t) = \sum_{n=1,3,5,\dots}^{+\infty} \frac{4 T_0}{n \pi} \sin(k_n x)\, \exp\left(-\frac{k_n^2 K t}{C\rho}\right)
$$
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
def T_bar(x, t, T0, L, K=237, C=900, rho=2700, nmax=1000):
T = np.zeros_like(x)
eta = K / (C*rho)
for n in range(1, nmax, 2):
kn = n*np.pi/L
T += 4*T0/(np.pi * n) * np.sin(kn*x) * np.exp(-kn*kn * eta * t)
return T
T0 = 100.
L = 1.0
X = np.linspace(0, L, 100)
for t in np.linspace(0, 3000, 50):
plt.plot(X, T_bar(X, t, T0, L))
plt.xlabel(r"$x$ (m)")
plt.ylabel(r"$T$ ($^\circ$C)");
```
### Numerical solution: Leap frog
Discretize (finite difference):
For the time domain we only have the initial values so we use a simple forward difference for the time derivative:
$$
\frac{\partial T(x,t)}{\partial t} \approx \frac{T(x, t+\Delta t) - T(x, t)}{\Delta t}
$$
For the spatial derivative we have initially all values so we can use the more accurate central difference approximation:
$$
\frac{\partial^2 T(x, t)}{\partial x^2} \approx \frac{T(x+\Delta x, t) + T(x-\Delta x, t) - 2 T(x, t)}{\Delta x^2}
$$
Thus, the heat equation can be written as the finite difference equation
$$
\frac{T(x, t+\Delta t) - T(x, t)}{\Delta t} = \frac{K}{C\rho} \frac{T(x+\Delta x, t) + T(x-\Delta x, t) - 2 T(x, t)}{\Delta x^2}
$$
which can be reordered so that the RHS contains only known terms and the LHS future terms. Index $i$ is the spatial index, and $j$ the time index: $x = x_0 + i \Delta x$, $t = t_0 + j \Delta t$.
$$
T_{i, j+1} = (1 - 2\eta) T_{i,j} + \eta(T_{i+1,j} + T_{i-1, j}), \quad \eta := \frac{K \Delta t}{C \rho \Delta x^2}
$$
Thus we can step forward in time ("leap frog"), using only known values.
### Solve the 1D heat equation numerically for an aluminum bar
* $K = 237$ W/mK
* $C = 900$ J/(kg K)
* $\rho = 2700$ kg/m<sup>3</sup>
* $L = 1$ m
* $T_0 = 373$ K and $T_b = 273$ K
* $T(x, 0) = T_0$ and $T(0, t) = T(L, t) = T_b$
#### Key considerations
The key line is the computation of the new temperature field at time step $j+1$ from the temperature distribution at time step $j$. It can be written purely with numpy array operations (see last lecture!):
```python
T[1:-1] = (1 - 2*eta) * T[1:-1] + eta * (T[2:] + T[:-2])
```
Note that the range operator `T[start:end]` *excludes* `end`, so in order to include `T[1], T[2], ..., T[-2]` (but not the rightmost `T[-1]`) we have to use `T[1:-1]`.
The *boundary conditions* are fixed for all times:
```python
T[0] = T[-1] = Tb
```
The *initial conditions* (at time step `j=0`)
```python
T[1:-1] = T0
```
are only used to compute the distribution of temperatures at the next step `j=1`.
#### Solution
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
```
For HTML/nbviewer output, use inline:
```
%matplotlib inline
L_rod = 1. # m
t_max = 3000. # s
Dx = 0.02 # m
Dt = 2 # s
Nx = int(L_rod // Dx)
Nt = int(t_max // Dt)
Kappa = 237 # W/(m K)
CHeat = 900 # J/K
rho = 2700 # kg/m^3
T0 = 373 # K
Tb = 273 # K
eta = Kappa * Dt / (CHeat * rho * Dx**2)
eta2 = 1 - 2*eta
step = 20 # plot solution every n steps
print("Nx = {0}, Nt = {1}".format(Nx, Nt))
print("eta = {0}".format(eta))
T = np.zeros(Nx)
T_plot = np.zeros((Nt//step + 1, Nx))
# initial conditions
T[1:-1] = T0
# boundary conditions
T[0] = T[-1] = Tb
t_index = 0
T_plot[t_index, :] = T
for jt in range(1, Nt):
T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2])
if jt % step == 0 or jt == Nt-1:
t_index += 1
T_plot[t_index, :] = T
print("Iteration {0:5d}".format(jt), end="\r")
else:
print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt))
```
#### Visualization
Visualize (you can use the code as is).
Note how we are making the plot use proper units by multiplying with `Dt * step` and `Dx`.
```
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))
Z = T_plot[X, Y]
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_wireframe(X*Dt*step, Y*Dx, Z)
ax.set_xlabel(r"time $t$ (s)")
ax.set_ylabel(r"position $x$ (m)")
ax.set_zlabel(r"temperature $T$ (K)")
fig.tight_layout()
```
A 2D plot, as above for the analytical solution:
```
X = Dx * np.arange(T_plot.shape[1])
plt.plot(X, T_plot.T)
plt.xlabel(r"$x$ (m)")
plt.ylabel(r"$T$ (K)");
```
#### Slower solution
I benchmarked this slow solution at 89.7 ms and the fast solution at 14.8 ms (with all `print` calls commented out), so the explicit loop is not that much worse (probably because the overhead of array copying etc. is high).
```
L_rod = 1. # m
t_max = 3000. # s
Dx = 0.02 # m
Dt = 2 # s
Nx = int(L_rod // Dx)
Nt = int(t_max // Dt)
Kappa = 237 # W/(m K)
CHeat = 900 # J/K
rho = 2700 # kg/m^3
T0 = 373 # K
Tb = 273 # K
eta = Kappa * Dt / (CHeat * rho * Dx**2)
eta2 = 1 - 2*eta
step = 20 # plot solution every n steps
print("Nx = {0}, Nt = {1}".format(Nx, Nt))
print("eta = {0}".format(eta))
T = np.zeros(Nx)
T_new = np.zeros_like(T)
T_plot = np.zeros((int(np.ceil(Nt/step)) + 1, Nx))
# initial conditions
T[1:-1] = T0
# boundary conditions
T[0] = T[-1] = Tb
T_new[:] = T
t_index = 0
T_plot[t_index, :] = T
for jt in range(1, Nt):
# T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2])
for ix in range(1, Nx-1):
T_new[ix] = eta2 * T[ix] + eta*(T[ix+1] + T[ix-1])
T[:] = T_new
if jt % step == 0 or jt == Nt-1:
t_index += 1
T_plot[t_index, :] = T
print("Iteration {0:5d}".format(jt), end="\r")
else:
print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt))
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))
Z = T_plot[X, Y]
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_wireframe(X*Dt*step, Y*Dx, Z)
ax.set_xlabel(r"time $t$ (s)")
ax.set_ylabel(r"position $x$ (m)")
ax.set_zlabel(r"temperature $T$ (K)")
fig.tight_layout()
```
## Stability of the solution
### Empirical investigation of the stability
Investigate the solution for different values of `Dt` and `Dx`. Can you discern patterns for stable/unstable solutions?
Report `Dt`, `Dx`, and `eta`
* for 3 stable solutions
* for 3 unstable solutions
```
def calculate_T(L_rod=1, t_max=3000, Dx=0.02, Dt=2, T0=373, Tb=273,
step=20):
Nx = int(L_rod // Dx)
Nt = int(t_max // Dt)
Kappa = 237 # W/(m K)
CHeat = 900 # J/K
rho = 2700 # kg/m^3
eta = Kappa * Dt / (CHeat * rho * Dx**2)
eta2 = 1 - 2*eta
print("Nx = {0}, Nt = {1}".format(Nx, Nt))
print("eta = {0}".format(eta))
T = np.zeros(Nx)
T_plot = np.zeros((int(np.ceil(Nt/step)) + 1, Nx))
# initial conditions
T[1:-1] = T0
# boundary conditions
T[0] = T[-1] = Tb
t_index = 0
T_plot[t_index, :] = T
for jt in range(1, Nt):
T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2])
if jt % step == 0 or jt == Nt-1:
t_index += 1
T_plot[t_index, :] = T
print("Iteration {0:5d}".format(jt), end="\r")
else:
print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt))
return T_plot
def plot_T(T_plot, Dx, Dt, step):
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))
Z = T_plot[X, Y]
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_wireframe(X*Dt*step, Y*Dx, Z)
ax.set_xlabel(r"time $t$ (s)")
ax.set_ylabel(r"position $x$ (m)")
ax.set_zlabel(r"temperature $T$ (K)")
fig.tight_layout()
return ax
T_plot = calculate_T(Dx=0.01, Dt=2, step=20)
plot_T(T_plot, 0.01, 2, 20)
```
Note that *decreasing* the value of $\Delta x$ made the solution *unstable*. This is strange, we have gotten used to the idea that working on a finer mesh will increase the detail (until we hit round-off error) and just become computationally more expensive. But here the algorithm suddenly becomes unstable (and it is not just round-off).
For certain combinations of values of $\Delta t$ and $\Delta x$ the solution becomes unstable. Empirically, bigger $\eta$ leads to instability. (In fact, $\eta \geq \frac{1}{2}$ is unstable for the leapfrog algorithm, as we will see.)
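One quick way to collect such stable/unstable combinations is to sweep a few `(Dx, Dt)` pairs with the `calculate_T()` helper defined above (a sketch; the stability check used here — finite values that never exceed the initial temperature — is just a simple heuristic I am assuming):

```python
Kappa, CHeat, rho = 237, 900, 2700   # same material constants as above

for Dx, Dt in [(0.02, 2), (0.04, 2), (0.02, 5), (0.01, 2), (0.01, 0.5), (0.005, 2)]:
    eta = Kappa * Dt / (CHeat * rho * Dx**2)
    T_plot = calculate_T(Dx=Dx, Dt=Dt, step=20)
    looks_stable = np.all(np.isfinite(T_plot)) and T_plot.max() <= 373 + 1e-9
    print("Dx={0}, Dt={1}, eta={2:.3f} -> {3}".format(
        Dx, Dt, eta, "stable" if looks_stable else "unstable"))
```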
### Von Neumann stability analysis
If the difference equation solution diverges then we *know* that we have a bad approximation to the original PDE.
Von Neumann stability analysis starts from the assumption that *eigenmodes* of the difference equation can be written as
$$
T_{m,j} = \xi(k)^j e^{ikm\Delta x}, \quad t=j\Delta t,\ x=m\Delta x
$$
with the unknown wave vectors $k=2\pi/\lambda$ and unknown complex functions – the *amplification factors* – $\xi(k)$.
Solutions of the difference equation can be written as linear superpositions of these basis functions. But they are only stable if the eigenmodes are stable, i.e., will not grow in time (with $j$). This is the case when
$$
|\xi(k)| < 1
$$
for all $k$.
Insert the eigenmodes into the finite difference equation
$$
T_{m, j+1} = (1 - 2\eta) T_{m,j} + \eta(T_{m+1,j} + T_{m-1, j})
$$
to obtain
\begin{align}
\xi(k)^{j+1} e^{ikm\Delta x} &= (1 - 2\eta) \xi(k)^{j} e^{ikm\Delta x}
+ \eta(\xi(k)^{j} e^{ik(m+1)\Delta x} + \xi(k)^{j} e^{ik(m-1)\Delta x})\\
\xi(k) &= (1 - 2\eta) + \eta(e^{ik\Delta x} + e^{-ik\Delta x})\\
\xi(k) &= 1 - 2\eta + 2\eta \cos k\Delta x\\
\xi(k) &= 1 + 2\eta\big(\cos k\Delta x - 1\big)
\end{align}
For $|\xi(k)| < 1$ (and all possible $k$):
\begin{align}
|\xi(k)| < 1 \quad &\Leftrightarrow \quad \xi^2(k) < 1\\
(1 + 2y)^2 = 1 + 4y + 4y^2 &< 1 \quad \text{with}\ \ y = \eta(\cos k\Delta x - 1)\\
y(1 + y) &< 0 \quad \Leftrightarrow \quad -1 < y < 0\\
\eta(\cos k\Delta x - 1) &\leq 0 \quad \forall k \quad (\eta > 0, -1 \leq \cos x \leq 1)\\
\eta(\cos k\Delta x - 1) &> -1\\
\eta &< \frac{1}{1 - \cos k\Delta x}\\
\eta = \frac{K \Delta t}{C \rho \Delta x^2} &< \frac{1}{2} \le \frac{1}{1 - \cos k\Delta x}
\end{align}
Thus, solutions are only stable for $\eta < 1/2$. In particular, decreasing $\Delta t$ will always improve stability, but decreasing $\Delta x$ requires a quadratic *decrease* in $\Delta t$!
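A quick numerical check of the amplification factor (a sketch; note that the constant $k=0$ mode always has $\xi = 1$, so values *above* 1 are what signal instability):

```python
import numpy as np

k_dx = np.linspace(0.0, np.pi, 201)            # k * Delta x sampled over one period
for eta in (0.3, 0.49, 0.51, 0.7):
    xi = 1 + 2 * eta * (np.cos(k_dx) - 1)      # amplification factor from the derivation above
    max_abs_xi = np.abs(xi).max()
    print("eta = {0:.2f}: max |xi| = {1:.3f} -> {2}".format(
        eta, max_abs_xi, "stable" if max_abs_xi <= 1 else "unstable"))
```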
Note
* Perform von Neumann stability analysis when possible (depends on PDE and the specific discretization).
* Test different combinations of $\Delta t$ and $\Delta x$.
* There is no guarantee that decreasing both will lead to more stable solutions!
Check my inputs:
This was stable and it conforms to the stability criterion:
```
Dt = 2
Dx = 0.02
eta = Kappa * Dt /(CHeat * rho * Dx*Dx)
print(eta)
```
... and this was unstable, despite a seemingly small change:
```
Dt = 2
Dx = 0.01
eta = Kappa * Dt /(CHeat * rho * Dx*Dx)
print(eta)
```
# Build a sklearn Pipeline for a to ML contest submission
In the ML_coruse_train notebook we first analyzed the housing dataset to gain statistical insights and then, for example, added new features,
replaced missing values and scaled the columns using pandas DataFrame methods.
In the following we will use sklearn [Pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) to integrate all these steps into one final *estimator*. The resulting pipeline can be used to save an ML estimator to a file and use it later in production.
*Optional:*
If you want, you can save your estimator as explained in the last cell at the bottom of this notebook.
Based on a hidden dataset, its performance will then be ranked against all other submissions.
```
# read housing data again
import pandas as pd
import numpy as np
housing = pd.read_csv("datasets/housing/housing.csv")
# Try to get header information of the dataframe:
housing.head()
```
One remark: sklearn transformers do **not** act on pandas dataframes. Instead, they use numpy arrays.
Now try to [convert](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html) a dataframe to a numpy array:
```
housing.head().to_numpy()
```
As you can see, the column names are lost now.
In a numpy array, columns are indexed by integers and no longer by their names.
### Add extra feature columns
First, we again add some extra columns (e.g. `rooms_per_household`, `population_per_household`, `bedrooms_per_household`) which might correlate better with the predicted parameter `median_house_value`.
For modifying the dataset, we now use a [FunctionTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.FunctionTransformer.html), which we can later put into a pipeline.
Hints:
* For finding the index number of a given column name, you can use the method [get_loc()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_loc.html)
* For concatenating the new columns with the given array, you can use numpy method [c_](https://docs.scipy.org/doc/numpy/reference/generated/numpy.c_.html)
```
from sklearn.preprocessing import FunctionTransformer
# At first, get the indexes as integers from the column names:
rooms_ix = housing.columns.get_loc("total_rooms")
bedrooms_ix =
population_ix =
household_ix =
# Now implement a function which takes a numpy array a argument and adds the new feature columns
def add_extra_features(X):
rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
population_per_household =
bedrooms_per_household =
# Concatenate the original array X with the new columns
return
attr_adder = FunctionTransformer(add_extra_features, validate = False)
housing_extra_attribs = attr_adder.fit_transform(housing.values)
assert housing_extra_attribs.shape == (17999, 13)
housing_extra_attribs
```
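For reference, one possible way to fill in the blanks above (a sketch, assuming the standard column names of this housing dataset: `total_bedrooms`, `population`, `households`):

```python
rooms_ix = housing.columns.get_loc("total_rooms")
bedrooms_ix = housing.columns.get_loc("total_bedrooms")
population_ix = housing.columns.get_loc("population")
household_ix = housing.columns.get_loc("households")

def add_extra_features(X):
    rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
    population_per_household = X[:, population_ix] / X[:, household_ix]
    bedrooms_per_household = X[:, bedrooms_ix] / X[:, household_ix]
    # Concatenate the original array X with the three new columns
    return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_household]
```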
### Imputing missing elements
For replacing nan values in the dataset with the mean or median of the column they are in, you can also use a [SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html):
```
from sklearn.impute import SimpleImputer
# Drop the categorial column ocean_proximity
housing_num = housing.drop(...)
print("We have %d nan elements in the numerical columns" %np.count_nonzero(np.isnan(housing_num.to_numpy())))
imp_mean = ...
housing_num_cleaned = imp_mean.fit_transform(housing_num)
assert np.count_nonzero(np.isnan(housing_num_cleaned)) == 0
housing_num_cleaned[1,:]
```
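A possible completion of the cell above (a sketch; using the column median is one reasonable choice):

```python
from sklearn.impute import SimpleImputer

housing_num = housing.drop("ocean_proximity", axis=1)
imp_mean = SimpleImputer(strategy="median")
housing_num_cleaned = imp_mean.fit_transform(housing_num)
```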
### Column scaling
For scaling and normalizing the columns, you can use the class [StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
Use numpy [mean](https://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html) and [std](https://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html) to calculate the mean and standard deviation of each column after scaling (Hint: columns are `axis=0`!).
```
from sklearn.preprocessing import StandardScaler
scaler = ...
scaled = scaler.fit_transform(housing_num_cleaned)
print("mean of the columns is: " , ...)
print("standard deviation of the columns is: " , ...)
```
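One possible completion (a sketch):

```python
from sklearn.preprocessing import StandardScaler
import numpy as np

scaler = StandardScaler()
scaled = scaler.fit_transform(housing_num_cleaned)
print("mean of the columns is: ", np.mean(scaled, axis=0))
print("standard deviation of the columns is: ", np.std(scaled, axis=0))
```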
### Putting all preprocessing steps together
Now let's build a pipeline for preprocessing the **numerical** attributes.
The pipeline shall process the data in the following steps:
* [Impute](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) median or mean values for elements which are NaN
* Add attributes using the FunctionTransformer with the function add_extra_features().
* Scale the numerical values using the [StandardScaler()](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
```
from sklearn.pipeline import Pipeline
num_pipeline = Pipeline([
('give a name', ...), # Imputer
('give a name', ...), # FunctionTransformer
('give a name', ...), # Scaler
])
# Now test the pipeline on housing_num
num_pipeline.fit_transform(housing_num)
```
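One way the numerical pipeline could look (a sketch; the step names are arbitrary):

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import FunctionTransformer, StandardScaler

num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('attribs_adder', FunctionTransformer(add_extra_features, validate=False)),
    ('std_scaler', StandardScaler()),
])
num_pipeline.fit_transform(housing_num)
```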
Now we have a pipeline for the numerical columns.
But we still have a categorical column:
```
housing['ocean_proximity'].head()
```
We need one more pipeline for the categorical column. Instead of the "Dummy encoding" we used before, we now use the [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) from sklearn.
Hint: to make things easier, set the sparse option of the OneHotEncoder to False.
```
from sklearn.preprocessing import OneHotEncoder
housing_cat = housing[] #get the right column
cat_encoder =
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
housing_cat_1hot
```
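A possible completion (a sketch; `OneHotEncoder` expects a 2D input, hence the double brackets, and in newer scikit-learn versions the `sparse` argument is called `sparse_output`):

```python
from sklearn.preprocessing import OneHotEncoder

housing_cat = housing[["ocean_proximity"]]
cat_encoder = OneHotEncoder(sparse=False)
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)
```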
We have everything we need for building a preprocessing pipeline which transforms the columns including all the steps before.
Since we have columns where different transformations should be applied, we use the class [ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html)
```
from sklearn.compose import ColumnTransformer
# These are the columns with the numerical features:
num_attribs = ["longitude", ...]
# Here are the columns with categorical features:
cat_attribs = [...]
full_prep_pipeline = ColumnTransformer([
("give a name", ..., ...), # Add the numerical pipeline and specify the columns it should work on
("give a name", ..., ...), # Add a OneHotEncoder and specify the columns it should work on
])
full_prep_pipeline.fit_transform(housing)
```
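One possible completion (a sketch, assuming the standard housing column names; the label column `median_house_value` is deliberately left out of `num_attribs`, so the preprocessing also works on the feature-only dataframes used below):

```python
from sklearn.compose import ColumnTransformer

num_attribs = ["longitude", "latitude", "housing_median_age", "total_rooms",
               "total_bedrooms", "population", "households", "median_income"]
cat_attribs = ["ocean_proximity"]

full_prep_pipeline = ColumnTransformer([
    ("num", num_pipeline, num_attribs),
    ("cat", OneHotEncoder(sparse=False), cat_attribs),
])
full_prep_pipeline.fit_transform(housing)
```

The `num_attribs` list keeps the original column order of the dataframe, so the integer indices used inside `add_extra_features` still point at the intended columns.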
### Train an estimator
Include `full_prep_pipeline` into a further pipeline where it is followed by an RandomForestRegressor.
This way, at first our data is prepared using `full_prep_pipeline` and then the RandomForestRegressor is trained on it.
```
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
full_pipeline_with_predictor = Pipeline([
("give a name", full_prep_pipeline), # add the full_prep_pipeline
("give a name", RandomForestRegressor()) # Add a RandomForestRegressor
])
```
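A possible completion (a sketch; the step names are arbitrary):

```python
full_pipeline_with_predictor = Pipeline([
    ("preparation", full_prep_pipeline),
    ("random_forest", RandomForestRegressor()),
])
```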
For training the regressor, separate the label column (`median_house_value`) and the feature columns (all other columns).
Split the data into a training and testing dataset using train_test_split.
```
# Create two dataframes, one for the labels one for the features
housing_features = housing...
housing_labels = housing
# Split the two dataframes into a training and a test dataset
X_train, X_test, y_train, y_test = train_test_split(housing_features, housing_labels, test_size = 0.20)
# Now train the full_pipeline_with_predictor on the training dataset
full_pipeline_with_predictor.fit(X_train, y_train)
```
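For reference, the blanks at the top of the cell above could be filled in like this (a sketch):

```python
housing_features = housing.drop("median_house_value", axis=1)
housing_labels = housing["median_house_value"]
```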
As usual, calculate some score metrics:
```
from sklearn.metrics import mean_squared_error
y_pred = full_pipeline_with_predictor.predict(X_test)
tree_mse = mean_squared_error(y_pred, y_test)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
from sklearn.metrics import r2_score
r2_score(y_pred, y_test)
```
Use the [pickle serializer](https://docs.python.org/3/library/pickle.html) to save your estimator to a file for contest participation.
```
import pickle
import getpass
from sklearn.utils.validation import check_is_fitted
your_regressor = ... # Put your regression pipeline here
assert isinstance(your_regressor, Pipeline)
pickle.dump(your_regressor, open(getpass.getuser() + "s_model.p", "wb" ) )
```
# Running the Direct Fidelity Estimation (DFE) algorithm
This example walks through the steps of running the direct fidelity estimation (DFE) algorithm as described in these two papers:
* Direct Fidelity Estimation from Few Pauli Measurements (https://arxiv.org/abs/1104.4695)
* Practical characterization of quantum devices without tomography (https://arxiv.org/abs/1104.3835)
Optimizations for Clifford circuits are based on a tableau-based simulator:
* Improved Simulation of Stabilizer Circuits (https://arxiv.org/pdf/quant-ph/0406196.pdf)
```
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
# Import Cirq, DFE, and create a circuit
import cirq
from cirq.contrib.svg import SVGCircuit
import examples.direct_fidelity_estimation as dfe
qubits = cirq.LineQubit.range(3)
circuit = cirq.Circuit(cirq.CNOT(qubits[0], qubits[2]),
cirq.Z(qubits[0]),
cirq.H(qubits[2]),
cirq.CNOT(qubits[2], qubits[1]))
SVGCircuit(circuit)
# We then create a sampler. For this example, we use a simulator but the code can accept a hardware sampler.
noise = cirq.ConstantQubitNoiseModel(cirq.depolarize(0.1))
sampler = cirq.DensityMatrixSimulator(noise=noise)
# We run the DFE:
estimated_fidelity, intermediate_results = dfe.direct_fidelity_estimation(
circuit,
qubits,
sampler,
n_measured_operators=None, # None=returns all the Pauli strings
samples_per_term=0) # 0=use dense matrix simulator
print('Estimated fidelity: %.2f' % (estimated_fidelity))
```
# What is happening under the hood?
Now, let's look at the `intermediate_results` and correlate what is happening in the code with the papers. The definition of fidelity is:
$$
F = F(\hat{\rho},\hat{\sigma}) = \mathrm{Tr} \left(\hat{\rho} \hat{\sigma}\right)
$$
where $\hat{\rho}$ is the theoretical pure state and $\hat{\sigma}$ is the actual state. The idea of DFE is to write fidelity as:
$$F= \sum _i \frac{\rho _i \sigma _i}{d}$$
where $d=2^{\mathit{number-of-qubits}}$, $\rho _i = \mathrm{Tr} \left( \hat{\rho} P_i \right)$, and $\sigma _i = \mathrm{Tr} \left(\hat{\sigma} P_i \right)$. Each of the $P_i$ is a Pauli operator. We can then finally rewrite the fidelity as:
$$F= \sum _i Pr(i) \frac{\sigma _i}{\rho_i}$$
with $Pr(i) = \frac{\rho_i ^2}{d}$, which is a probability-like set of numbers (between 0.0 and 1.0 and they add up to 1.0).
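As a quick sanity check of this decomposition (not part of the DFE code itself, just a one-qubit example added here), we can verify numerically that $\mathrm{Tr}(\hat{\rho}\hat{\sigma}) = \frac{1}{d}\sum_i \rho_i \sigma_i$ with $d=2$:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # ideal pure state |0><0|
sigma = 0.9 * rho + 0.1 * np.eye(2) / 2           # slightly depolarized version

d = 2  # 2**number_of_qubits for one qubit
f_direct = np.trace(rho @ sigma).real
f_pauli = sum(np.trace(rho @ P).real * np.trace(sigma @ P).real for P in (I, X, Y, Z)) / d
print(f_direct, f_pauli)   # both should be 0.95
```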
One important question is how do we choose these Pauli operators $P_i$? It depends on whether the circuit is Clifford or not. In case it is, we know that there are "only" $2^{\mathit{number-of-qubits}}$ operators for which $Pr(i)$ is non-zero. In fact, we know that they are all equiprobable with $Pr(i) = \frac{1}{2^{\mathit{number-of-qubits}}}$. The code does detect the Cliffordness automatically and switches to this mode. In case the circuit is not Clifford, the code just uses all the operators.
Let's inspect that in the case of our example, we do see the Pauli operators with equiprobability (i.e. the $\rho_i$):
```
for pauli_trace in intermediate_results.pauli_traces:
print('Probability %.3f\tPauli: %s' % (pauli_trace.Pr_i, pauli_trace.P_i))
```
Yay! We do see 8 entries (we have 3 qubits) with all the same 1/8 probability. What if we had a 23 qubit circuit? In that case, there would be quite a lot of them ($2^{23}$). That is where the parameter `n_measured_operators` becomes useful. If it is set to `None` we return *all* the Pauli strings (regardless of whether the circuit is Clifford or not). If set to an integer, we randomly sample the Pauli strings.
Then, let's actually look at the measurements, i.e. $\sigma_i$:
```
for trial_result in intermediate_results.trial_results:
print('rho_i=%.3f\tsigma_i=%.3f\tPauli:%s' % (trial_result.pauli_trace.rho_i, trial_result.sigma_i, trial_result.pauli_trace.P_i))
```
How are these measurements chosen? Since we had set `n_measured_operators=None`, all the measurements are used. If we had set the parameter to an integer, we would only have a subset to start from. We would then, as per the algorithm, sample from this set with replacement according to the probability distribution of $Pr(i)$ (for Clifford circuits, the probabilities are all the same, but for non-Clifford circuits, it means we favor more probable Pauli strings).
What about the parameter `samples_per_term`? Remember that the code can handle both a sampler or use a simulator. If we use a sampler, then we can repeat the measurements `samples_per_term` times. In our case, we use a dense matrix simulator and thus we keep that parameter set to `0`.
# How do we bound the variance of the fidelity when the circuit is Clifford?
Recall that the formula for DFE is:
$$F= \sum _i Pr(i) \frac{\sigma _i}{\rho_i}$$
But for Clifford circuits, we have $Pr(i) = \frac{1}{d}$ and $\rho_i = 1$ and thus the formula becomes:
$$F= \frac{1}{d} \sum _i \sigma _i$$
If we estimate $F$ by randomly sampling $N$ values of the index $i$ for $\sigma_i$ we get:
$$\hat{F} = \frac{1}{N} \sum_{j=1}^N \sigma _{i(j)}$$
Using the Bhatia–Davis inequality ([A Better Bound on the Variance, Rajendra Bhatia and Chandler Davis](https://www.jstor.org/stable/2589180)) and the fact that $0 \le \sigma_i \le 1$, we have the variance of:
$$\mathrm{Var}\left[ \hat{F} \right] \le \frac{(1 - F)F}{N}$$
$$\mathrm{StdDev}\left[ \hat{F} \right] \le \sqrt{\frac{(1 - F)F}{N}}$$
In particular, since $0 \le F \le 1$ we have:
$$\mathrm{StdDev}\left[ \hat{F} \right] \le \sqrt{\frac{(1 - \frac{1}{2})\frac{1}{2}}{N}}$$
$$\mathrm{StdDev}\left[ \hat{F} \right] \le \frac{1}{2 \sqrt{N}}$$
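So, for example, to guarantee a worst-case standard deviation of at most $\epsilon$ one needs $N \geq \frac{1}{4\epsilon^2}$ measured stabilizer operators (a small illustrative sketch):

```python
import numpy as np

eps = 0.01                              # desired worst-case std dev of the estimate
N = int(np.ceil(1.0 / (4 * eps**2)))    # from 1/(2*sqrt(N)) <= eps
print(N)                                # 2500
```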
# Gujarati with CLTK
See how you can analyse your Gujarati texts with <b>CLTK</b>! <br>
Let's begin by adding the `USER_PATH`.
```
import os
USER_PATH = os.path.expanduser('~')
```
In order to be able to download Gujarati texts from CLTK's Github repo, we will require an importer.
```
from cltk.corpus.utils.importer import CorpusImporter
gujarati_downloader = CorpusImporter('gujarati')
```
We can now see the corpora available for download by using the `list_corpora` feature of the importer. Let's go ahead and try it out!
```
gujarati_downloader.list_corpora
```
The corpus <i>gujarati_text_wikisource</i> can be downloaded from the Github repo. It will be downloaded to the directory `cltk_data/gujarati` at the above-mentioned `USER_PATH`.
```
gujarati_downloader.import_corpus('gujarati_text_wikisource')
```
You can see the texts downloaded by doing the following, or checking out the `cltk_data/gujarati/text/gujarati_text_wikisource` directory.
```
gujarati_corpus_path = os.path.join(USER_PATH,'cltk_data/gujarati/text/gujarati_text_wikisource')
list_of_texts = [text for text in os.listdir(gujarati_corpus_path) if '.' not in text]
print(list_of_texts)
```
Great, now that we have our texts, let's take a sample from one of them. For this tutorial, we shall be using <i>govinda_khele_holi</i>, a text by the Gujarati poet Narsinh Mehta.
```
gujarati_text_path = os.path.join(gujarati_corpus_path,'narsinh_mehta/govinda_khele_holi.txt')
gujarati_text = open(gujarati_text_path,'r').read()
print(gujarati_text)
```
## Gujarati Alphabets
There are 13 vowels and 33 consonants, which are grouped as follows:
```
from cltk.corpus.gujarati.alphabet import *
print("Digits:",DIGITS)
print("Vowels:",VOWELS)
print("Dependent vowels:",DEPENDENT_VOWELS)
print("Consonants:",CONSONANTS)
print("Velar consonants:",VELAR_CONSONANTS)
print("Palatal consonants:",PALATAL_CONSONANTS)
print("Retroflex consonants:",RETROFLEX_CONSONANTS)
print("Dental consonants:",DENTAL_CONSONANTS)
print("Labial consonants:",LABIAL_CONSONANTS)
print("Sonorant consonants:",SONORANT_CONSONANTS)
print("Sibilant consonants:",SIBILANT_CONSONANTS)
print("Guttural consonant:",GUTTURAL_CONSONANT)
print("Additional consonants:",ADDITIONAL_CONSONANTS)
print("Modifiers:",MODIFIERS)
```
## Transliterations
We can transliterate Gujarati script to that of other Indic languages. Let us transliterate `કમળ ભારતનો રાષ્ટ્રીય ફૂલ છે` to Kannada:
```
gujarati_text_two = 'કમળ ભારતનો રાષ્ટ્રીય ફૂલ છે'
from cltk.corpus.sanskrit.itrans.unicode_transliterate import UnicodeIndicTransliterator
UnicodeIndicTransliterator.transliterate(gujarati_text_two,"gu","kn")
```
We can also romanize the text as shown:
```
from cltk.corpus.sanskrit.itrans.unicode_transliterate import ItransTransliterator
ItransTransliterator.to_itrans(gujarati_text_two,'gu')
```
Similarly, we can indicize a text given its ITRANS transliteration:
```
gujarati_text_itrans = 'bhaawanaa'
ItransTransliterator.from_itrans(gujarati_text_itrans,'gu')
```
## Syllabifier
We can use the `indian_syllabifier` to syllabify Gujarati sentences. To do this, we will have to import the models as follows. Importing `sanskrit_models_cltk` might take some time.
```
phonetics_model_importer = CorpusImporter('sanskrit')
phonetics_model_importer.list_corpora
phonetics_model_importer.import_corpus('sanskrit_models_cltk')
```
Now we import the syllabifier and syllabify as follows:
```
%%capture
from cltk.stem.sanskrit.indian_syllabifier import Syllabifier
gujarati_syllabifier = Syllabifier('gujarati')
gujarati_syllables = gujarati_syllabifier.orthographic_syllabify('ભાવના')
```
The syllables of the word `ભાવના` will thus be:
```
print(gujarati_syllables)
```
| github_jupyter |
# Project 3: Implement SLAM
---
## Project Overview
In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world!
SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem.
Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`.
> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world
You can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:
```
mu = matrix([[Px0],
[Py0],
[Px1],
[Py1],
[Lx0],
[Ly0],
[Lx1],
[Ly1]])
```
You can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider a `nx1` matrix to be a vector.
## Generating an environment
In a real SLAM problem, you may be given a map that contains information about landmark locations; in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.
---
## Create the world
Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds!
`data` holds the sensors measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.
#### Helper functions
You will be working with the `robot` class that may look familiar from the first notebook.
In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook.
```
import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which robot (intends to) move each iteration
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)
```
### A note on `make_data`
The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:
1. Instantiating a robot (using the robot class)
2. Creating a grid world with landmarks in it
**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**
The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track the robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.
In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:
```
measurement = data[i][0]
motion = data[i][1]
```
```
# print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
```
Try changing the value of `time_step`; you should see that the list of measurements varies based on what the robot sees in the world after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy when measuring the distance between its own location and the locations of landmarks. The motion of the robot is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of `slam`.
## Initialize Constraints
One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.
<img src='images/motion_constraint.png' width=50% height=50% />
In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.
<img src='images/constraints2D.png' width=50% height=50% />
You may also choose to create two of each omega and xi (one for x and one for y positions).
### TODO: Write a function that initializes omega and xi
Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.
*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*
```
def initialize_constraints(N, num_landmarks, world_size):
''' This function takes in a number of time steps N, number of landmarks, and a world_size,
and returns initialized constraint matrices, omega and xi.'''
## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
## TODO: Define the constraint matrix, Omega, with two initial "strength" values
## for the initial x, y location of our robot
omega = np.zeros((2*N + 2*num_landmarks, 2*N + 2*num_landmarks))
omega[0,0] = 1
omega[1,1] = 1
## TODO: Define the constraint *vector*, xi
## you can assume that the robot starts out in the middle of the world with 100% confidence
xi = np.zeros((2*N + 2*num_landmarks, 1))
xi[0] = world_size/2
xi[1] = world_size/2
return omega, xi
```
### Test as you go
It's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.
Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.
**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final `slam` function.
This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.
```
# import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)
```
---
## SLAM inputs
In addition to `data`, your slam function will also take in:
* N - The number of time steps that a robot will be moving and sensing
* num_landmarks - The number of landmarks in the world
* world_size - The size (w/h) of your world
* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`
* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`
#### A note on noise
Recall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.
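To make this bookkeeping concrete before you write the full 2D version, here is a minimal 1D sketch (toy values, not part of the project code) of how a single motion constraint `x1 - x0 = dx` would be folded into omega and xi:
```
import numpy as np

# toy 1D world with just two poses x0 and x1 (no landmarks)
omega = np.zeros((2, 2))
xi = np.zeros((2, 1))

dx = 5.0            # measured motion from x0 to x1
motion_noise = 2.0  # update strength is 1.0/motion_noise

# the constraint touches both poses: +1/noise on the diagonal, -1/noise off-diagonal
omega[0, 0] += 1.0 / motion_noise
omega[1, 1] += 1.0 / motion_noise
omega[0, 1] += -1.0 / motion_noise
omega[1, 0] += -1.0 / motion_noise

# xi gets the actual motion value, divided by the noise
xi[0] += -dx / motion_noise
xi[1] += dx / motion_noise

print(omega)
print(xi)
```
A measurement constraint between a pose and a landmark follows the same pattern, just between a pose index and a landmark index; in the 2D project you repeat this for the x and y components.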
### TODO: Implement Graph SLAM
Follow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation!
#### Updating with motion and measurements
With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$
**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**
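A small aside on computing $\mu$: rather than forming the explicit inverse, you can also ask NumPy to solve the linear system directly, which is generally more numerically stable. This is only an alternative, not a requirement; the toy values below are arbitrary:
```
import numpy as np

# toy system omega * mu = xi, just to compare the two approaches
omega = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])
xi = np.array([[3.0],
               [1.0]])

mu_inv = np.linalg.inv(omega) @ xi     # explicit inverse, as in the implementation below
mu_solve = np.linalg.solve(omega, xi)  # solves the system without forming the inverse

print(np.allclose(mu_inv, mu_solve))   # True
```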
```
## TODO: Complete the code to implement SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):
    ## TODO: Use your initialization to create constraint matrices, omega and xi
omega, xi = initialize_constraints(N, num_landmarks, world_size)
## TODO: Iterate through each time step in the data
## get all the motion and measurement data as you iterate
for t in range(N-1):
## TODO: update the constraint matrix/vector to account for all *measurements*
## this should be a series of additions that take into account the measurement noise
#print("data: ", len(data), data[t][0])
measurements = data[t][0]
for m in measurements:
Lnum = m[0]
Ldx = m[1]
Ldy = m[2]
omega[2*t+0] [2*t+0] += 1/measurement_noise
omega[2*t+1] [2*t+1] += 1/measurement_noise
omega[2*t+0] [2*(N+Lnum)+0] += -1/measurement_noise
omega[2*t+1] [2*(N+Lnum)+1] += -1/measurement_noise
omega[2*(N+Lnum)+0][2*t+0] += -1/measurement_noise
omega[2*(N+Lnum)+1][2*t+1] += -1/measurement_noise
omega[2*(N+Lnum)+0][2*(N+Lnum)+0] += 1/measurement_noise
omega[2*(N+Lnum)+1][2*(N+Lnum)+1] += 1/measurement_noise
xi[2*t+0] += -Ldx/measurement_noise
xi[2*t+1] += -Ldy/measurement_noise
xi[2*(N+Lnum)+0] += Ldx/measurement_noise
xi[2*(N+Lnum)+1] += Ldy/measurement_noise
## TODO: update the constraint matrix/vector to account for all *motion* and motion noise
motion = data[t][1]
omega[2*t+0][2*t+0] += 1/motion_noise
omega[2*t+1][2*t+1] += 1/motion_noise
omega[2*t+0][2*t+2] += -1/motion_noise
omega[2*t+1][2*t+3] += -1/motion_noise
omega[2*t+2][2*t+0] += -1/motion_noise
omega[2*t+3][2*t+1] += -1/motion_noise
omega[2*t+2][2*t+2] += 1/motion_noise
omega[2*t+3][2*t+3] += 1/motion_noise
xi[2*t+0] += -motion[0]/motion_noise
xi[2*t+2] += motion[0]/motion_noise
xi[2*t+1] += -motion[1]/motion_noise
xi[2*t+3] += motion[1]/motion_noise
## TODO: After iterating through all the data
## Compute the best estimate of poses and landmark positions
## using the formula, omega_inverse * Xi
mu = np.linalg.inv(np.matrix(omega)) * xi
return mu # return `mu`
```
## Helper functions
To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmarks locations and returns those as their own, separate lists.
Then, we define a function that nicely print out these lists; both of these we will call, in the next step.
```
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
# create a list of poses
poses = []
for i in range(N):
poses.append((mu[2*i].item(), mu[2*i+1].item()))
# create a list of landmarks
landmarks = []
for i in range(num_landmarks):
landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))
# return completed lists
return poses, landmarks
def print_all(poses, landmarks):
print('\n')
print('Estimated Poses:')
for i in range(len(poses)):
print('['+', '.join('%.3f'%p for p in poses[i])+']')
print('\n')
print('Estimated Landmarks:')
for i in range(len(landmarks)):
print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
```
## Run SLAM
Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!
### What to Expect
The `data` that is generated is random, but you did specify the number, `N`, of time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate positions for). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.
With these values in mind, you should expect to see a result that displays two lists:
1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.
2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length.
#### Landmark Locations
If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).
```
# call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)
# print out the resulting landmarks and poses
if(mu is not None):
# get the lists of poses and landmarks
# and print them out
poses, landmarks = get_poses_landmarks(mu, N)
print_all(poses, landmarks)
```
## Visualize the constructed world
Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the positions of landmarks, created from only motion and measurement data!
**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**
```
# import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
# print out the last pose
print('Last pose: ', poses[-1])
# display the last position of the robot *and* the landmark positions
display_world(int(world_size), poses[-1], landmarks)
```
### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?
You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters?
**Answer**: The true value of the final pose is [x=69.61429 y=95.52181], which is close to the pose [67.357, 93.716] estimated by my `slam` implementation.
The true landmarks are [12, 44], [62, 98], [19, 13], [45, 12], [7, 97], while the estimated ones are [11.692, 44.036], [61.744, 96.855], [19.061, 12.781], [44.483, 11.522], [6.063, 96.744].
If we moved and sensed more, the results would become more accurate. Lower noise parameters would also yield more accurate results than higher ones.
## Testing
To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases in total); your output should be **close to or exactly** identical to the given results. If there are minor discrepancies, they could be a matter of floating-point accuracy or of how the inverse matrix is calculated.
### Submit your project
If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!
```
# Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Uncomment the following three lines for test case 1 and compare the output to the values above ###
mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### Uncomment the following three lines for test case 2 and compare to the values above ###
mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_2, 20)
print_all(poses, landmarks)
```
| github_jupyter |
### In this notebook we investigate a simple, custom-designed Inception network on PDU data
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
### Importing the libraries
```
import torch
import torch.nn as nn
import torch.utils.data as Data
from torch.autograd import Function, Variable
from torch.optim import lr_scheduler
import torchvision
import torchvision.transforms as transforms
import torch.backends.cudnn as cudnn
from pathlib import Path
import os
import copy
import math
import matplotlib.pyplot as plt
import numpy as np
from datetime import datetime
import time as time
import warnings
```
#### Checking whether the GPU is active
```
torch.backends.cudnn.enabled
torch.cuda.is_available()
torch.cuda.init()
```
#### Dataset paths
```
PATH = Path("/home/saman/Saman/data/PDU_Raw_Data01/Test06_600x30/")
train_path = PATH / 'train' / 'Total'
valid_path = PATH / 'valid' / 'Total'
test_path = PATH / 'test' / 'Total'
```
### Model parameters
```
Num_Filter1= 16
Num_Filter2= 64
Ker_Sz1 = 5
Ker_Sz2 = 5
learning_rate= 0.0001
Dropout= 0.2
BchSz= 32
EPOCH= 5
```
### Data Augmentation
```
# Mode of transformation
transformation = transforms.Compose([
transforms.RandomVerticalFlip(),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0,0,0), (0.5,0.5,0.5)),
])
transformation2 = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0,0,0), (0.5,0.5,0.5)),
])
# Loss calculator
criterion = nn.CrossEntropyLoss() # cross entropy loss
```
### Defining models
#### Defining a class of our simple model
```
class ConvNet(nn.Module):
def __init__(self, Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d( # input shape (3, 30, 600)
in_channels=3, # input height
out_channels=Num_Filter1, # n_filters
kernel_size=Ker_Sz1, # Kernel size
stride=1, # filter movement/step
                padding=int((Ker_Sz1-1)/2), # if we want the same width and height of the image after conv2d,
), # padding=(kernel_size-1)/2 if stride=1
nn.BatchNorm2d(Num_Filter1), # Batch Normalization
nn.ReLU(), # Rectified linear activation
nn.MaxPool2d(kernel_size=2, stride=2)) # choose max value in 2x2 area,
# Visualizing this in https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md
self.layer2 = nn.Sequential(
nn.Conv2d(Num_Filter1, Num_Filter2,
kernel_size=Ker_Sz2,
stride=1,
padding=int((Ker_Sz2-1)/2)),
nn.BatchNorm2d(Num_Filter2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2), # output shape (64, 38, 38)
nn.Dropout2d(p=Dropout))
self.fc = nn.Linear(1050*Num_Filter2, num_classes) # fully connected layer, output 2 classes
def forward(self, x): # Forwarding the data to classifier
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1) # flatten the output of conv2 to (batch_size, 64*38*38)
out = self.fc(out)
return out
```
### Defining inception classes
```
class BasicConv2d(nn.Module):
def __init__(self, in_planes, out_planes, **kwargs):
super(BasicConv2d, self).__init__()
self.conv = nn.Conv2d(in_planes, out_planes, bias=False, **kwargs)
self.bn = nn.BatchNorm2d(out_planes, eps=0.001)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
        out = self.relu(x)
        return out
class Inception(nn.Module):
def __init__(self, in_channels):
super(Inception, self).__init__()
self.branch3x3 = BasicConv2d(in_channels, 384, kernel_size=3, stride=2)
self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, kernel_size=1)
self.branch3x3dbl_2 = BasicConv2d(64, 96, kernel_size=3, padding=1)
self.branch3x3dbl_3 = BasicConv2d(96, 96, kernel_size=3, stride=2)
self.avgpool = nn.AvgPool2d(kernel_size=3, stride=2)
def forward(self, x):
branch3x3 = self.branch3x3(x)
branch3x3dbl = self.branch3x3dbl_1(x)
branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)
branch_pool = self.avgpool(x)
outputs = [branch3x3, branch3x3dbl, branch_pool]
return torch.cat(outputs, 1)
class Inception_Net(nn.Module):
def __init__(self, Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2):
super(Inception_Net, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d( # input shape (3, 30, 600)
in_channels=3, # input height
out_channels=Num_Filter1, # n_filters
kernel_size=Ker_Sz1, # Kernel size
stride=1, # filter movement/step
                padding=int((Ker_Sz1-1)/2), # if we want the same width and height of the image after conv2d,
), # padding=(kernel_size-1)/2 if stride=1
nn.BatchNorm2d(Num_Filter1), # Batch Normalization
nn.ReLU(), # Rectified linear activation
nn.MaxPool2d(kernel_size=2, stride=2)) # choose max value in 2x2 area,
# Visualizing this in https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md
self.layer2 = nn.Sequential(
nn.Conv2d(Num_Filter1, Num_Filter2,
kernel_size=Ker_Sz2,
stride=1,
padding=int((Ker_Sz2-1)/2)),
nn.BatchNorm2d(Num_Filter2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2), # output shape (64, 38, 38)
nn.Dropout2d(p=Dropout))
self.Inception = Inception(Num_Filter2)
self.fc = nn.Linear(120768, num_classes) # fully connected layer, output 2 classes
def forward(self, x): # Forwarding the data to classifier
out = self.layer1(x)
out = self.layer2(out)
out = self.Inception(out)
out = out.reshape(out.size(0), -1) # flatten the output of conv2 to (batch_size, 64*38*38)
out = self.fc(out)
return out
```
### Finding number of parameter in our model
```
def print_num_params(model):
TotalParam=0
for param in list(model.parameters()):
print("Individual parameters are:")
nn=1
for size in list(param.size()):
print(size)
nn = nn*size
print("Total parameters: {}" .format(param.numel()))
TotalParam += nn
print('-' * 10)
print("Sum of all Parameters is: {}" .format(TotalParam))
def get_num_params(model):
TotalParam=0
for param in list(model.parameters()):
nn=1
for size in list(param.size()):
nn = nn*size
TotalParam += nn
return TotalParam
```
### Training and Validating
#### Training and validation function
```
def train_model(model, criterion, optimizer, Dropout, learning_rate, BATCHSIZE, num_epochs):
print(str(datetime.now()).split('.')[0], "Starting training and validation...\n")
print("====================Data and Hyperparameter Overview====================\n")
print("Number of training examples: {} , Number of validation examples: {} \n".format(len(train_data), len(valid_data)))
print("Dropout:{:,.2f}, Learning rate: {:,.5f} "
.format( Dropout, learning_rate ))
print("Batch size: {}, Number of epochs: {} "
.format(BATCHSIZE, num_epochs))
print("Number of parameter in the model: {}". format(get_num_params(model)))
print("================================Results...==============================\n")
since = time.time() #record the beginning time
best_model = model
best_acc = 0.0
acc_vect =[]
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = Variable(images).cuda()
labels = Variable(labels).cuda()
# Forward pass
outputs = model(images) # model output
loss = criterion(outputs, labels) # cross entropy loss
# Trying binary cross entropy
#loss = criterion(torch.max(outputs.data, 1), labels)
#loss = torch.nn.functional.binary_cross_entropy(outputs, labels)
# Backward and optimize
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
            if (i+1) % 1000 == 0: # Reporting the loss and progress every 1000 steps
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, len(train_loader), loss.item()))
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
for images, labels in valid_loader:
images = Variable(images).cuda()
labels = Variable(labels).cuda()
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss += loss.item()
total += labels.size(0)
correct += (predicted == labels).sum().item()
epoch_loss= loss / total
epoch_acc = 100 * correct / total
acc_vect.append(epoch_acc)
if epoch_acc > best_acc:
best_acc = epoch_acc
best_model = copy.deepcopy(model)
print('Validation accuracy and loss of the model on {} images: {} %, {:.5f}'
.format(len(valid_data), 100 * correct / total, loss))
correct = 0
total = 0
for images, labels in train_loader:
images = Variable(images).cuda()
labels = Variable(labels).cuda()
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss += loss.item()
total += labels.size(0)
correct += (predicted == labels).sum().item()
epoch_loss= loss / total
epoch_acc = 100 * correct / total
print('Train accuracy and loss of the model on {} images: {} %, {:.5f}'
.format(len(train_data), epoch_acc, loss))
print('-' * 10)
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best validation Acc: {:4f}'.format(best_acc))
mean_acc = np.mean(acc_vect)
    print('Average accuracy on the validation {} images: {}'
          .format(len(valid_data), mean_acc))
print('-' * 10)
return best_model, mean_acc
```
### Testing function
```
def test_model(model, test_loader):
print("Starting testing...\n")
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
test_loss_vect=[]
test_acc_vect=[]
since = time.time() #record the beginning time
for i in range(10):
Indx = torch.randperm(len(test_data))
            Cut = int(len(Indx)/10) # 10% of the test data is used for each pooled sample
indices=Indx[:Cut]
Sampler = Data.SubsetRandomSampler(indices)
pooled_data = torch.utils.data.DataLoader(test_data , batch_size=BchSz,sampler=Sampler)
for images, labels in pooled_data:
images = Variable(images).cuda()
labels = Variable(labels).cuda()
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
total += labels.size(0)
correct += (predicted == labels).sum().item()
test_loss= loss / total
test_accuracy= 100 * correct / total
test_loss_vect.append(test_loss)
test_acc_vect.append(test_accuracy)
# print('Test accuracy and loss for the {}th pool: {:.2f} %, {:.5f}'
# .format(i+1, test_accuracy, test_loss))
mean_test_loss = np.mean(test_loss_vect)
mean_test_acc = np.mean(test_acc_vect)
std_test_acc = np.std(test_acc_vect)
print('-' * 10)
        print('Average test accuracy on test data: {:.2f} %, loss: {:.5f}, Standard deviation of accuracy: {:.4f}'
.format(mean_test_acc, mean_test_loss, std_test_acc))
print('-' * 10)
time_elapsed = time.time() - since
print('Testing complete in {:.1f}m {:.4f}s'.format(time_elapsed // 60, time_elapsed % 60))
print('-' * 10)
return mean_test_acc, mean_test_loss, std_test_acc
```
### Applying augmentation and batch size
```
## Using batch size to load data
train_data = torchvision.datasets.ImageFolder(train_path,transform=transformation)
train_loader =torch.utils.data.DataLoader(train_data, batch_size=BchSz, shuffle=True,
num_workers=8)
valid_data = torchvision.datasets.ImageFolder(valid_path,transform=transformation)
valid_loader =torch.utils.data.DataLoader(valid_data, batch_size=BchSz, shuffle=True,
num_workers=8)
test_data = torchvision.datasets.ImageFolder(test_path,transform=transformation2)
test_loader =torch.utils.data.DataLoader(test_data, batch_size=BchSz, shuffle=True,
num_workers=8)
model = Inception_Net(Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2)
model = model.cuda()
print(model)
# Defining optimizer with variable learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
optimizer.scheduler=lr_scheduler.ReduceLROnPlateau(optimizer, 'min')
get_num_params(model)
seed= [1, 3, 7, 19, 22]
val_acc_vect=[]
test_acc_vect=[]
for ii in seed:
torch.cuda.manual_seed(ii)
torch.manual_seed(ii)
model, val_acc= train_model(model, criterion, optimizer, Dropout, learning_rate, BchSz, EPOCH)
testing = test_model (model, test_loader)
test_acc= testing[0]
val_acc_vect.append( val_acc )
test_acc_vect.append(test_acc)
mean_val_acc = np.mean(val_acc_vect)
mean_test_acc = np.mean(test_acc_vect)
print('-' * 10)
print('-' * 10)
print('Average of validation accuracies on 5 different random seed: {:.2f} %, Average of testing accuracies on 5 different random seed: {:.2f} %'
.format(mean_val_acc, mean_test_acc))
```
| github_jupyter |
### Import all needed package
```
import os
import ast
import numpy as np
import pandas as pd
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense, Activation, LSTM, Dropout
from keras.utils import to_categorical
from keras.datasets import mnist
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.python.keras.callbacks import ModelCheckpoint, TensorBoard
import context
build = context.build_promoter
construct = context.construct_neural_net
encode = context.encode_sequences
organize = context.organize_data
ROOT_DIR = os.getcwd()[:os.getcwd().rfind('Express')] + 'ExpressYeaself/'
SAVE_DIR = ROOT_DIR + 'expressyeaself/models/lstm/saved_models/'
ROOT_DIR
```
### Define the input data
#### Using the full data set
```
sample_filename = ('10000_from_20190612130111781831_percentiles_els_binarized_homogeneous_deflanked_'
'sequences_with_exp_levels.txt.gz')
```
#### Define the absolute path
```
sample_path = ROOT_DIR + 'example/processed_data/' + sample_filename
```
### Encode sequences
```
# Seems to give slightly better accuracy when expression level values aren't scaled.
scale_els = False
X_padded, y_scaled, abs_max_el = encode.encode_sequences_with_method(sample_path, method='One-Hot', scale_els=scale_els)
num_seqs, max_sequence_len = organize.get_num_and_len_of_seqs_from_file(sample_path)
```
### Build the 3-dimensional LSTM model
#### Reshape encoded sequences
```
X_padded = X_padded.reshape(-1)
X_padded = X_padded.reshape(int(num_seqs), 1, 5 * int(max_sequence_len))
```
#### Reshape expression levels
```
y_scaled = y_scaled.reshape(len(y_scaled), 1, 1)
```
#### Perform a train-test split
```
test_size = 0.25
X_train, X_test, y_train, y_test = train_test_split(X_padded, y_scaled, test_size=test_size)
```
#### Build the model
```
# Define the model parameters
batch_size = int(len(y_scaled) * 0.01) # no bigger than 1 % of data
epochs = 50
dropout = 0.3
learning_rate = 0.01
# Define the checkpointer to allow saving of models
model_type = 'lstm_sequential_3d_onehot'
save_path = SAVE_DIR + model_type + '.hdf5'
checkpointer = ModelCheckpoint(monitor='val_acc',
filepath=save_path,
verbose=1,
save_best_only=True)
# Define the model
model = Sequential()
# Build up the layers
model.add(Dense(1024, kernel_initializer='uniform', input_shape=(1,5*int(max_sequence_len),)))
model.add(Activation('softmax'))
model.add(Dropout(dropout))
# model.add(Dense(512, kernel_initializer='uniform', input_shape=(1,1024,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
model.add(Dense(256, kernel_initializer='uniform', input_shape=(1,512,)))
model.add(Activation('softmax'))
model.add(Dropout(dropout))
# model.add(Dense(128, kernel_initializer='uniform', input_shape=(1,256,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
# model.add(Dense(64, kernel_initializer='uniform', input_shape=(1,128,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
# model.add(Dense(32, kernel_initializer='uniform', input_shape=(1,64,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
# model.add(Dense(16, kernel_initializer='uniform', input_shape=(1,32,)))
# model.add(Activation('softmax'))
# model.add(Dropout(dropout))
# model.add(Dense(8, kernel_initializer='uniform', input_shape=(1,16,)))
# model.add(Activation('softmax'))
model.add(LSTM(units=1, return_sequences=True))
sgd = optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
# Compile the model
model.compile(loss='mse', optimizer='rmsprop', metrics=['accuracy'])
# Print model summary
print(model.summary())
# model.add(LSTM(100,input_shape=(int(max_sequence_len), 5)))
# model.add(Dropout(dropout))
# model.add(Dense(50, activation='sigmoid'))
# # model.add(Dense(25, activation='sigmoid'))
# # model.add(Dense(12, activation='sigmoid'))
# # model.add(Dense(6, activation='sigmoid'))
# # model.add(Dense(3, activation='sigmoid'))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mse',
# optimizer='rmsprop',
# metrics=['accuracy'])
# print(model.summary())
```
### Fit and Evaluate the model
```
# Fit
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,
validation_data=(X_test, y_test), callbacks=[checkpointer])
# Evaluate
score = max(history.history['val_acc'])
print("%s: %.2f%%" % (model.metrics_names[1], score*100))
plt = construct.plot_results(history.history)
plt.show()
```
### Build the 2-dimensional LSTM model
For the data we have, there is only one output per sequence, which means there is only one time step; if we drop that dimension from the model, we get a 2-dimensional LSTM model.
#### Load the data again
```
X_padded, y_scaled, abs_max_el = encode.encode_sequences_with_method(sample_path, method='One-Hot', scale_els=scale_els)
num_seqs, max_sequence_len = organize.get_num_and_len_of_seqs_from_file(sample_path)
test_size = 0.25
X_train, X_test, y_train, y_test = train_test_split(X_padded, y_scaled, test_size=test_size)
```
#### Build up the model
```
# Define the model parameters
batch_size = int(len(y_scaled) * 0.01) # no bigger than 1 % of data
epochs = 50
dropout = 0.3
learning_rate = 0.01
# Define the checkpointer to allow saving of models
model_type = 'lstm_sequential_2d_onehot'
save_path = SAVE_DIR + model_type + '.hdf5'
checkpointer = ModelCheckpoint(monitor='val_acc',
filepath=save_path,
verbose=1,
save_best_only=True)
# Define the model
model = Sequential()
# Build up the layers
model.add(LSTM(100,input_shape=(int(max_sequence_len), 5)))
model.add(Dropout(dropout))
model.add(Dense(50, activation='sigmoid'))
# model.add(Dense(25, activation='sigmoid'))
# model.add(Dense(12, activation='sigmoid'))
# model.add(Dense(6, activation='sigmoid'))
# model.add(Dense(3, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mse',
optimizer='rmsprop',
metrics=['accuracy'])
print(model.summary())
```
### Fit and Evaluate the model
```
# Fit
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,verbose=1,
validation_data=(X_test, y_test), callbacks=[checkpointer])
# Evaluate
score = max(history.history['val_acc'])
print("%s: %.2f%%" % (model.metrics_names[1], score*100))
plt = construct.plot_results(history.history)
plt.show()
```
## Checking predictions on a small sample of native data
```
input_seqs = ROOT_DIR + 'expressyeaself/models/lstm/native_sample.txt'
model_to_use = 'lstm_sequential_2d'
lstm_result = construct.get_predictions_for_input_file(input_seqs, model_to_use, sort_df=True, write_to_file=False)
lstm_result.to_csv('lstm_result')
lstm_result
```
| github_jupyter |
<a href="https://colab.research.google.com/github/tjido/woodgreen/blob/master/Woodgreen_Data_Science_%26_Python_Nov_2021_Week_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<h1>Welcome to the Woodgreen Data Science & Python Program by Fireside Analytics</h1>
<h4>Data science is the process of ethically acquiring, engineering, analyzing, visualizing and, ultimately, creating value with data.
<p>In this tutorial, participants will be introduced to the Python programming language in this Python cloud environment called Google Colab.</p> </h4>
<p>For more information about this tutorial or other tutorials by Fireside Analytics, contact: [email protected]</p>
<h3><strong>Table of contents</h3>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
<li>How does a computer work?</li>
<Li>What is "data"?</li>
<li>An introduction to Python</li>
</ol>
</div>
<br>
<hr>
**Let's get started! Firstly, the page you are reading is not a regular website; it is an interactive computer programming environment called a Colab notebook that lets you write and execute code in Python.**
# 1. How does a computer work?
## A computer is a device that takes INPUTS, does some PROCESSES and results in OUTPUTS
EXAMPLES OF INPUTS
1. Keyboard
2. Mouse
3. Touch screen
PROCESSES
1. CPU - Central Processing Unit
2. Data storage
3. Converts inputs from words and numbers to 1s and 0s
4. Computes 1s and 0s
5. Produces outputs and information
OUTPUTS
1. Screen - words, numbers, pictures or sounds
2. Printer
3. Speaker
# 2. What is "data"?
## A computer is a device that takes INPUTS, does some PROCESSES and results in OUTPUTS
1. Computers use many on and off switches to work
2. The 'on' switch is represented by a '1' and the 'off' switch is represented by a '0'
3. A BIT is a one or a zero, and a BYTE is a combination of 8 ones and zeros e.g., 1100 0010
4. Combinations of Ones and Zeros in a computer, represent whole words and numbers, symbols and even pictures in the real world
5. Information stored in ones and zeros, in bits and bytes, is data!
* The letter a = 0110 0001
* The letter b = 0110 0010
* The letter A = 0100 0001
* The letter B = 0100 0010
* The symbol @ = 0100 0000
## This conversion is done with the ASCII code, the American Standard Code for Information Interchange
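Once you start running code in the next section, you can check these codes yourself in Python: `ord()` looks up a character's ASCII number, and `format()` can show that number as 8 bits (one byte).
```
# Check the one-byte binary code for a few characters
for character in ['a', 'b', 'A', 'B', '@']:
    print(character, "=", format(ord(character), '08b'))
```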
*Computer programming is the process of giving a computer instructions in human readable language so a computer will know what to do in computer language.*
# 3. An introduction to Python
### Let's get to know Python. The following code is an example of a Python Progam. Run the code by clicking on the 'play' button and you will see the result of your program beneath the code.
```
## Your first computer progam can be to say hello!
print ("Hello, World")
# We will need to learn some syntax! Syntax are the words used in a Python program
# the '#' sign tells Python to ignore a line. We use it for notes that we want humans to read
# print() is a function built into the core of Python
# For more sophisticed operations we'll load libraries which come with additional functions that we can use
# Famous ones are numpy, pandas, matplotlib, seaborn, and scikitlearn
# Now, let's write some programs!
# Edit the line below to add your first name between the ""
## Here we assign the letters between "" to an object called "my_name" - it is now stored and you can call it later
## Like saving a number in your phone versus just typing it in and calling it
my_name = ""
# Let's see what we've created
my_name
greeting = "Hello, world, my name is "
# Let's look at it
greeting
# The = sign is what we call an 'assignment operator' and it assigns things
# See how we use the '+' sign
print(greeting + my_name)
# Asking for input, using simple function and printing it
def say_hello():
username = input("What is your name?\n")
print("Hello " + username)
# Lets call the function
say_hello()
# Creating an 'If else' conditional block inside the function. Here we are validating the response entered.
# If the person simply hits "Enter" without entering any value in the field,
# then the if statement prints "You can't introduce yourself if you don't add your name!"
# the == operator is used to test if something is equal to something else
def say_hello():
username = input("What is your name?\n")
if username == "":
print("You can't introduce yourself if you don't add your name!")
else:
print("Hello " + username)
# While calling the function, try leaving the field blank
say_hello()
# Dealing with a blank
def say_hello(name):
if name == "":
print("You can't introduce yourself if you don't add your name!")
else:
print(greeting + name)
# Click the "play" button to execute this code.
say_hello(my_name)
# In programming there are often many ways to do things, for example
print("Hello world, my name is " + my_name + ".")
```
# **We can do simple calculations in Python**
```
5 + 5
# Some actions already programmed in:
x = 5
print(x + 7)
# What happens when we say "X=5"
# x 'points' at the number 5
x = 5
print("Initial x is:", x)
# y now 'points' at 'x' which 'points' at 5, so then y points at 5
y = x
print("Initial y is:", y)
x = 6
# What happens when we now change what x is?
print("Current x is:", x)
print("Current y is:", y)
```
------------------------------------------------------------------------
**We can do complex calculations in Python** - Remember we said Netflix users stream 404,444 hours of movies every minute? Let's calculate how many days that is!
```
## In Python we create objects
## Converting from 404444 hours to days, we divide by___________?
days_watching_netflix = 404444/24
```
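Just like we did with `my_name` earlier, we can look at the object we created to see the answer (it works out to roughly 16,852 days):
```
## Let's look at the result of our calculation
days_watching_netflix
```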
How can we do a survey in Python? We type 'input' to let Python know to wait for a user response. Once you type in the name, Python will remember it!
Press 'enter' after your input.
```
response_1 = input("Response 1: What is your name?")
## We can now look at the response
response_1
response_2 = input("Response 2: What is your name?")
response_3 = input("Response 3: What is your name?")
response_4 = input("Response 4: What is your name?")
response_5 = input("Response 5: What is your name?")
```
Let's look at response_5
```
print(response_1,
response_2,
response_3,
response_4,
response_5)
```
We can also add the names one at a time by typing them.
```
## Let's create an object for the 5 names from question 1
survey_names = [response_1, response_2, response_3, response_4, response_5]
## Let's look at the object we've just created!
survey_names
print(survey_names)
```
# Let's make a simple bar chart in Python
```
import matplotlib.pyplot as plt
x = ['A', 'B', 'C', 'D', 'E']
y = [22, 9, 40, 27, 55]
plt.bar(x, y, color = 'red')
plt.title('Simple Bar Chart')
plt.xlabel('Width Names')
plt.ylabel('Height Values')
plt.show()
# Replot the same chart and change the color of the bars
```
## Here's a sample chart with some survey responses.
```
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
data = [3,2]
labels = ['yes', 'no']
plt.xticks(range(len(data)), labels)
plt.xlabel('Responses')
plt.ylabel('Number of People')
plt.title('Shingai - Woodgreen Data Science & Python Program: Survey Results for Questions 2: "Do you know how a computer works?"')
plt.bar(range(len(data)), data, color = 'blue')
plt.show()
```
# Practice what you have learned
* Enter the results for Question 2 of your survey data and produce a chart
* Add your name to your chart heading
* Change the labels and headings of your charts
# Conclusion
1. Computer programming is a set of instructions we give a computer.
2. Computers must process the instructions in 'binary', in ones and zeros.
3. Anything 'digital' is data.
# Contact Information
Congratulations, you have completed a tutorial in the Python Programming language!
Fireside Analytics Inc. |
Instructor: Shingai Manjengwa (Twitter: @tjido) |
Woodgreen Community Services Summer Camp |
Contact: [email protected] or [www.firesideanalytics.com](www.firesideanalytics.com)
Never stop learning!
| github_jupyter |
<h1>CREATING A SARIMA MODEL FOR THE SARDEGNA REGION
```
import pandas as pd
df = pd.read_csv('../../csv/regioni/sardegna.csv')
df.head()
df['DATA'] = pd.to_datetime(df['DATA'])
df.info()
df=df.set_index('DATA')
df.head()
```
<h3>Creating the time series of total deaths for the Sardegna region
```
ts = df.TOTALE
ts.head()
from datetime import datetime
from datetime import timedelta
start_date = datetime(2015,1,1)
end_date = datetime(2020,9,30)
lim_ts = ts[start_date:end_date]
# plot the time series
import matplotlib.pyplot as plt
plt.figure(figsize=(12,6))
plt.title('Monthly deaths in the Sardegna region from 2015 to September 2020', size=20)
plt.plot(lim_ts)
for year in range(start_date.year,end_date.year+1):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.5)
```
<h3>Decomposition
```
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative')
ts_trend = decomposition.trend # trend component
ts_seasonal = decomposition.seasonal # seasonal component
ts_residual = decomposition.resid # residual component
plt.subplot(411)
plt.plot(ts,label='original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(ts_trend,label='trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(ts_seasonal,label='seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(ts_residual,label='residual')
plt.legend(loc='best')
plt.tight_layout()
```
<h3>Stationarity test
```
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
critical_value = dftest[4]['5%']
test_statistic = dftest[0]
alpha = 1e-3
pvalue = dftest[1]
if pvalue < alpha and test_statistic < critical_value: # null hypothesis: x is non stationary
print("X is stationary")
return True
else:
print("X is not stationary")
return False
test_stationarity(ts)
```
<h3>Train/Test split
<b>Train</b>: from January 2015 to October 2019; <br />
<b>Test</b>: from November 2019 to December 2019.
```
from datetime import datetime
train_end = datetime(2019,10,31)
test_end = datetime (2019,12,31)
covid_end = datetime(2020,9,30)
from dateutil.relativedelta import *
tsb = ts[:test_end]
decomposition = seasonal_decompose(tsb, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative')
tsb_trend = decomposition.trend # trend component
tsb_seasonal = decomposition.seasonal # seasonality
tsb_residual = decomposition.resid # remaining (residual) component
tsb_diff = pd.Series(tsb_trend)
d = 0
while test_stationarity(tsb_diff) is False:
tsb_diff = tsb_diff.diff().dropna()
d = d + 1
print(d)
# TRAIN: from 2015-01-01 to 2019-10-31
train = tsb[:train_end]
# TEST: from 2019-11-01 to 2019-12-31
test = tsb[train_end + relativedelta(months=+1): test_end]
```
<h3>Autocorrelation and Partial Autocorrelation plots
```
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plot_acf(ts, lags =12)
plot_pacf(ts, lags =12)
plt.show()
```
<h2>Building the SARIMA model on the Train set
```
from statsmodels.tsa.statespace.sarimax import SARIMAX
model = SARIMAX(train, order=(6,1,8))
model_fit = model.fit()
print(model_fit.summary())
```
<h4>Checking the stationarity of the residuals of the fitted model
```
residuals = model_fit.resid
test_stationarity(residuals)
plt.figure(figsize=(12,6))
plt.title('Confronto valori previsti dal modello con valori reali del Train', size=20)
plt.plot (train.iloc[1:], color='red', label='train values')
plt.plot (model_fit.fittedvalues.iloc[1:], color = 'blue', label='model values')
plt.legend()
plt.show()
conf = model_fit.conf_int()
plt.figure(figsize=(12,6))
plt.title('Intervalli di confidenza del modello', size=20)
plt.plot(conf)
plt.xticks(rotation=45)
plt.show()
```
<h3>Model prediction on the Test set
```
# prediction start and end
pred_start = test.index[0]
pred_end = test.index[-1]
#pred_start= len(train)
#pred_end = len(tsb)
# model prediction on the test set
predictions_test= model_fit.predict(start=pred_start, end=pred_end)
plt.plot(test, color='red', label='actual')
plt.plot(predictions_test, label='prediction' )
plt.xticks(rotation=45)
plt.legend()
plt.show()
print(predictions_test)
# Accuracy metrics
import numpy as np
def forecast_accuracy(forecast, actual):
    mape = np.mean(np.abs(forecast - actual)/np.abs(actual))  # MAPE: mean absolute percentage error
    me = np.mean(forecast - actual)             # ME: mean error
    mae = np.mean(np.abs(forecast - actual))    # MAE: mean absolute error
    mpe = np.mean((forecast - actual)/actual)   # MPE: mean percentage error
    rmse = np.mean((forecast - actual)**2)**.5  # RMSE: root mean squared error
    corr = np.corrcoef(forecast, actual)[0,1]   # corr: correlation between forecast and actual values
    mins = np.amin(np.hstack([forecast[:,None],
                              actual[:,None]]), axis=1)
    maxs = np.amax(np.hstack([forecast[:,None],
                              actual[:,None]]), axis=1)
    minmax = 1 - np.mean(mins/maxs)             # minmax: min-max error
    return({'mape':mape, 'me':me, 'mae': mae,
            'mpe': mpe, 'rmse':rmse,
            'corr':corr, 'minmax':minmax})
forecast_accuracy(predictions_test, test)
import numpy as np
from statsmodels.tools.eval_measures import rmse
nrmse = rmse(predictions_test, test)/(np.max(test)-np.min(test))
print('NRMSE: %f'% nrmse)
```
<h2>Model prediction including the year 2020
```
# prediction start and end
start_prediction = ts.index[0]
end_prediction = ts.index[-1]
predictions_tot = model_fit.predict(start=start_prediction, end=end_prediction)
plt.figure(figsize=(12,6))
plt.title('Previsione modello su dati osservati - dal 2015 al 30 settembre 2020', size=20)
plt.plot(ts, color='blue', label='actual')
plt.plot(predictions_tot.iloc[1:], color='red', label='predict')
plt.xticks(rotation=45)
plt.legend(prop={'size': 12})
plt.show()
diff_predictions_tot = (ts - predictions_tot)
plt.figure(figsize=(12,6))
plt.title('Differenza tra i valori osservati e i valori stimati del modello', size=20)
plt.plot(diff_predictions_tot)
plt.show()
diff_predictions_tot['24-02-2020':].sum()
predictions_tot.to_csv('../../csv/pred/predictions_SARIMA_sardegna.csv')
```
<h2>Confidence intervals of the full prediction
```
forecast = model_fit.get_prediction(start=start_prediction, end=end_prediction)
in_c = forecast.conf_int()
print(forecast.predicted_mean)
print(in_c)
print(forecast.predicted_mean - in_c['lower TOTALE'])
plt.plot(in_c)
plt.show()
upper = in_c['upper TOTALE']
lower = in_c['lower TOTALE']
lower.to_csv('../../csv/lower/predictions_SARIMA_sardegna_lower.csv')
upper.to_csv('../../csv/upper/predictions_SARIMA_sardegna_upper.csv')
```
```
##%overwritefile
##%file:src/compile_out_file.py
##%noruncode
def getCompout_filename(self,cflags,outfileflag,defoutfile):
outfile=''
binary_filename=defoutfile
index=0
for s in cflags:
if s.startswith(outfileflag):
if(len(s)>len(outfileflag)):
outfile=s[len(outfileflag):]
del cflags[index]
else:
outfile=cflags[cflags.index(outfileflag)+1]
if outfile.startswith('-'):
outfile=binary_filename
del cflags[cflags.index(outfileflag)+1]
del cflags[cflags.index(outfileflag)]
binary_filename=outfile
index+=1
return binary_filename
##%overwritefile
##%file:src/compile_with_sc.py
##%noruncode
def compile_with_sc(self, source_filename, binary_filename, cflags=None, ldflags=None,env=None,magics=None):
outfile=binary_filename
orig_cflags=cflags
orig_ldflags=ldflags
ccmd=[]
clargs=[]
crargs=[]
outfileflag=[]
oft=''
if len(self.kernel_info['compiler']['outfileflag'])>0:
oft=self.kernel_info['compiler']['outfileflag']
outfileflag=[oft]
binary_filename=self.getCompout_filename(cflags,oft,outfile)
args=[]
if magics!=None and len(self.mymagics.addkey2dict(magics,'ccompiler'))>0:
        ## use the ccompiler label from the code line
args = magics['ccompiler'] + orig_cflags +[source_filename] + orig_ldflags
else:
## use kernel default compiler -> kernel_info['compiler']['cmd']
if len(self.kernel_info['compiler']['cmd'])>0:
ccmd+=[self.kernel_info['compiler']['cmd']]
if len(self.kernel_info['compiler']['clargs'])>0:
clargs+=self.kernel_info['compiler']['clargs']
if len(self.kernel_info['compiler']['crargs'])>0:
crargs+=self.kernel_info['compiler']['crargs']
args = ccmd+cflags+[source_filename] +clargs+outfileflag+ [binary_filename]+crargs+ ldflags
# self._log(''.join((' '+ str(s) for s in args))+"\n")
return self.mymagics.create_jupyter_subprocess(args,env=env,magics=magics),binary_filename,args
##%overwritefile
##%file:src/c_exec_sc_.py
##%noruncode
def _exec_sc_(self,source_filename,magics):
self.mymagics._logln('Generating executable file')
with self.mymagics.new_temp_file(suffix=self.kernel_info['execsuffix']) as binary_file:
magics['status']='compiling'
p,outfile,tsccmd = self.compile_with_sc(
source_filename,
binary_file.name,
self.mymagics.get_magicsSvalue(magics,'cflags'),
self.mymagics.get_magicsSvalue(magics,'ldflags'),
self.mymagics.get_magicsbykey(magics,'env'),
magics=magics)
returncode=p.wait_end(magics)
p.write_contents()
magics['status']=''
binary_file.name=os.path.join(os.path.abspath(''),outfile)
if returncode != 0:
## Compilation failed
self.mymagics._logln(' '.join((str(s) for s in tsccmd))+"\n",3)
self.mymagics._logln("compiler exited with code {}, the executable will not be executed".format(returncode),3)
## delete source files before exit
## os.remove(source_filename)
os.remove(binary_file.name)
return p.returncode,binary_file.name
```
# Sentiment analysis with support vector machines
In this notebook, we will revisit a learning task that we encountered earlier in the course: predicting the *sentiment* (positive or negative) of a single sentence taken from a review of a movie, restaurant, or product. The data set consists of 3000 labeled sentences, which we divide into a training set of size 2500 and a test set of size 500. Previously we used a logistic regression classifier; today we will use a support vector machine.
Before starting on this notebook, make sure the folder `sentiment_labelled_sentences` (containing the data file `full_set.txt`) is in the same directory. Recall that the data can be downloaded from https://archive.ics.uci.edu/ml/datasets/Sentiment+Labelled+Sentences.
## 1. Loading and preprocessing the data
Here we follow exactly the same steps as we did earlier.
```
%matplotlib inline
import string
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
from sklearn.feature_extraction.text import CountVectorizer
## Read in the data set.
with open("sentiment_labelled_sentences/full_set.txt") as f:
content = f.readlines()
## Remove leading and trailing white space
content = [x.strip() for x in content]
## Separate the sentences from the labels
sentences = [x.split("\t")[0] for x in content]
labels = [x.split("\t")[1] for x in content]
## Transform the labels from '0 v.s. 1' to '-1 v.s. 1'
y = np.array(labels, dtype='int8')
y = 2*y - 1
## full_remove takes a string x and a list of characters removal_list
## returns x with all the characters in removal_list replaced by ' '
def full_remove(x, removal_list):
for w in removal_list:
x = x.replace(w, ' ')
return x
## Remove digits
digits = [str(x) for x in range(10)]
digit_less = [full_remove(x, digits) for x in sentences]
## Remove punctuation
punc_less = [full_remove(x, list(string.punctuation)) for x in digit_less]
## Make everything lower-case
sents_lower = [x.lower() for x in punc_less]
## Define our stop words
stop_set = set(['the', 'a', 'an', 'i', 'he', 'she', 'they', 'to', 'of', 'it', 'from'])
## Remove stop words
sents_split = [x.split() for x in sents_lower]
sents_processed = [" ".join(list(filter(lambda a: a not in stop_set, x))) for x in sents_split]
## Transform to bag of words representation.
vectorizer = CountVectorizer(analyzer = "word", tokenizer = None, preprocessor = None, stop_words = None, max_features = 4500)
data_features = vectorizer.fit_transform(sents_processed)
## Convert the bag-of-words features to a dense array.
data_mat = data_features.toarray()
## Split the data into testing and training sets
np.random.seed(0)
test_inds = np.append(np.random.choice((np.where(y==-1))[0], 250, replace=False), np.random.choice((np.where(y==1))[0], 250, replace=False))
train_inds = list(set(range(len(labels))) - set(test_inds))
train_data = data_mat[train_inds,]
train_labels = y[train_inds]
test_data = data_mat[test_inds,]
test_labels = y[test_inds]
print("train data: ", train_data.shape)
print("test data: ", test_data.shape)
```
## 2. Fitting a support vector machine to the data
In support vector machines, we are given a set of examples $(x_1, y_1), \ldots, (x_n, y_n)$ and we want to find a weight vector $w \in \mathbb{R}^d$ that solves the following optimization problem:
$$ \min_{w \in \mathbb{R}^d} \| w \|^2 + C \sum_{i=1}^n \xi_i $$
$$ \text{subject to } y_i \langle w, x_i \rangle \geq 1 - \xi_i \text{ and } \xi_i \geq 0 \text{ for all } i=1,\ldots, n$$
`scikit-learn` provides an SVM solver that we will use. The following routine takes as input the constant `C` (from the above optimization problem) and returns the training and test error of the resulting SVM model. It is invoked as follows:
* `training_error, test_error = fit_classifier(C)`
The default value for parameter `C` is 1.0.
```
from sklearn import svm
def fit_classifier(C_value=1.0):
clf = svm.LinearSVC(C=C_value, loss='hinge')
clf.fit(train_data,train_labels)
## Get predictions on training data
train_preds = clf.predict(train_data)
train_error = float(np.sum((train_preds > 0.0) != (train_labels > 0.0)))/len(train_labels)
## Get predictions on test data
test_preds = clf.predict(test_data)
test_error = float(np.sum((test_preds > 0.0) != (test_labels > 0.0)))/len(test_labels)
##
return train_error, test_error
cvals = [0.01,0.1,1.0,10.0,100.0,1000.0,10000.0]
for c in cvals:
train_error, test_error = fit_classifier(c)
print ("Error rate for C = %0.2f: train %0.3f test %0.3f" % (c, train_error, test_error))
```
## 3. Evaluating C by k-fold cross-validation
As we can see, the choice of `C` has a very significant effect on the performance of the SVM classifier. We were able to assess this because we have a separate test set. In general, however, this is a luxury we won't possess. How can we choose `C` based only on the training set?
A reasonable way to estimate the error associated with a specific value of `C` is by **`k-fold cross validation`**:
* Partition the training set `S` into `k` equal-sized subsets `S_1, S_2, ..., S_k`.
* For `i=1,2,...,k`, train a classifier with parameter `C` on `S - S_i` (all the training data except `S_i`) and test it on `S_i` to get error estimate `e_i`.
* Average the errors: `(e_1 + ... + e_k)/k`
The following procedure, **cross_validation_error**, does exactly this. It takes as input:
* the training set `x,y`
* the value of `C` to be evaluated
* the integer `k`
and it returns the estimated error of the classifier for that particular setting of `C`. <font color="magenta">Look over the code carefully to understand exactly what it is doing.</font>
```
def cross_validation_error(x,y,C_value,k):
n = len(y)
## Randomly shuffle indices
indices = np.random.permutation(n)
## Initialize error
err = 0.0
## Iterate over partitions
for i in range(k):
## Partition indices
        test_indices = indices[int(i*(n/k)):int((i+1)*(n/k))]
train_indices = np.setdiff1d(indices, test_indices)
## Train classifier with parameter c
clf = svm.LinearSVC(C=C_value, loss='hinge')
clf.fit(x[train_indices], y[train_indices])
## Get predictions on test partition
preds = clf.predict(x[test_indices])
## Compute error
err += float(np.sum((preds > 0.0) != (y[test_indices] > 0.0)))/len(test_indices)
return err/k
```
## 4. Picking a value of C
The procedure **cross_validation_error** (above) evaluates a single candidate value of `C`. We need to use it repeatedly to identify a good `C`.
<font color="magenta">**For you to do:**</font> Write a function to choose `C`. It will be invoked as follows:
* `c, err = choose_parameter(x,y,k)`
where
* `x,y` is the training data
* `k` is the number of folds of cross-validation
* `c` is chosen value of the parameter `C`
* `err` is the cross-validation error estimate at `c`
<font color="magenta">Note:</font> This is a tricky business because a priori, even the order of magnitude of `C` is unknown. Should it be 0.0001 or 10000? You might want to think about trying multiple values that are arranged in a geometric progression (such as powers of ten). *In addition to returning a specific value of `C`, your function should **plot** the cross-validation errors for all the values of `C` it tried out (possibly using a log-scale for the `C`-axis).*
```
def choose_parameter(x,y,k):
C = [0.0001,0.001,0.01,0.1,1,10,100,1000,10000]
err=[]
for c in C:
err.append(cross_validation_error(x,y,c,k))
err_min,cc=min(list(zip(err,C))) #C value for minimum error
plt.plot(np.log(C),err)
plt.xlabel("Log(C)")
plt.ylabel("Corresponding error")
return cc,err_min
```
Now let's try out your routine!
```
c, err = choose_parameter(train_data, train_labels, 10)
print("Choice of C: ", c)
print("Cross-validation error estimate: ", err)
## Train it and test it
clf = svm.LinearSVC(C=c, loss='hinge')
clf.fit(train_data, train_labels)
preds = clf.predict(test_data)
error = float(np.sum((preds > 0.0) != (test_labels > 0.0)))/len(test_labels)
print("Test error: ", error)
```
<font color="magenta">**For you to ponder:**</font> How does the plot of cross-validation errors for different `C` look? Is there clearly a trough in which the returned value of `C` falls? Does the plot provide some reassurance that the choice is reasonable?
The plot is roughly U-shaped. Yes, there is a clear trough, and the returned value of `C` falls inside it, which provides some reassurance that the choice is reasonable.
# Transfer Learning Template
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Allowed Parameters
These are the allowed parameters, not defaults.
Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing).
Papermill uses the cell tag "parameters" to inject the real parameters below this cell.
Enable cell tags in Jupyter to see what I mean; a hedged sketch of the injection call follows below.
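As a rough, hedged illustration of that mechanism (the notebook file names below are hypothetical and this cell is not part of the template itself), injection is usually driven through papermill's Python API or its CLI:
```python
# A minimal sketch: papermill runs the template and writes the injected values
# into a new cell directly below the cell tagged "parameters".
import papermill as pm

pm.execute_notebook(
    "tl_ptn_template.ipynb",        # hypothetical path to this template notebook
    "tl_ptn_template.out.ipynb",    # hypothetical executed output notebook
    parameters={"experiment_name": "demo", "lr": 0.0001, "seed": 1337},
)
```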
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:cores-oracle.run1.framed",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "CORES_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
],
"dataset_seed": 154325,
"seed": 154325,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
# Logistic Regression on 'HEART DISEASE' Dataset
Elif Cansu YILDIZ
```
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql.functions import col, countDistinct
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler, MinMaxScaler, IndexToString
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator
spark = SparkSession\
.builder\
.appName("MachineLearningExample")\
.getOrCreate()
```
The dataset used is the 'Heart Disease' dataset from Kaggle. You can get it from this [link](https://www.kaggle.com/ronitf/heart-disease-uci).
```
df = spark.read.csv('datasets/heart.csv', header = True, inferSchema = True) #Kaggle Dataset
df.printSchema()
df.show(5)
```
__HOW MANY DISTINCT VALUES DOES EACH COLUMN HAVE?__
```
df.agg(*(countDistinct(col(c)).alias(c) for c in df.columns)).show()
```
__SET the Label Column and Input Columns__
```
labelColumn = "thal"
input_columns = [t[0] for t in df.dtypes if t[0]!=labelColumn]
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = df.randomSplit([0.7, 0.3])
print("total data count: ", df.count())
print("train data count: ", trainingData.count())
print("test data count: ", testData.count())
```
__TRAINING__
```
assembler = VectorAssembler(inputCols = input_columns, outputCol='features')
lr = LogisticRegression(featuresCol='features', labelCol=labelColumn,
maxIter=10, regParam=0.3, elasticNetParam=0.8)
stages = [assembler, lr]
partialPipeline = Pipeline().setStages(stages)
model = partialPipeline.fit(trainingData)
```
__MAKE PREDICTIONS__
```
predictions = model.transform(testData)
predictionss = predictions.select("probability", "rawPrediction", "prediction",
col(labelColumn).alias("label"))
predictionss[["probability", "prediction", "label"]].show(5, truncate=False)
```
__EVALUATION for Binary Classification__
```
evaluator = BinaryClassificationEvaluator(labelCol="label", rawPredictionCol="prediction", metricName="areaUnderROC")
areaUnderROC = evaluator.evaluate(predictionss)
print("Area under ROC = %g" % areaUnderROC)
evaluator = BinaryClassificationEvaluator(labelCol="label", rawPredictionCol="prediction", metricName="areaUnderPR")
areaUnderPR = evaluator.evaluate(predictionss)
print("areaUnderPR = %g" % areaUnderPR)
```
__EVALUATION for Multiclass Classification__
```
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictionss)
print("accuracy = %g" % accuracy)
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="f1")
f1 = evaluator.evaluate(predictionss)
print("f1 = %g" % f1)
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="weightedPrecision")
weightedPrecision = evaluator.evaluate(predictionss)
print("weightedPrecision = %g" % weightedPrecision)
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="weightedRecall")
weightedRecall = evaluator.evaluate(predictionss)
print("weightedRecall = %g" % weightedRecall)
```
# An End-to-End Machine Learning Project
```
import os
import tarfile
import urllib
import pandas as pd
import numpy as np
from CategoricalEncoder import CategoricalEncoder
```
# Downloading the dataset
```
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = "../datasets/housing"
HOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + "/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
if os.path.isfile(housing_path + "/housing.tgz"):
        return print("already downloaded")
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
```
# Loading the dataset
```
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing_data = load_housing_data()
housing_data.head()
housing_data.info()
housing_data["ocean_proximity"].value_counts()
housing_data.describe()
```
# Plotting
```
%matplotlib inline
import matplotlib.pyplot as plt
housing_data.hist(bins=50, figsize=(20, 15))
```
# Creating a test set
```
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing_data, test_size=0.2, random_state=42)
housing = train_set.copy()
housing.plot(kind="scatter" , x="longitude", y="latitude", alpha= 0.3, s=housing[ "population" ]/100, label= "population", c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True)
```
## Pearson correlation coefficient
Since the dataset is not too large, you can easily compute the standard correlation coefficient (also called the Pearson correlation coefficient) between every pair of attributes using the `corr()` method.
The correlation coefficient ranges from -1 to 1. When it is close to 1, there is a strong positive correlation; for example, the median house value tends to go up when the median income goes up. When it is close to -1, there is a strong negative correlation; you can see a small negative correlation between the latitude and the median house value (i.e., prices have a slight tendency to go down when you go north). Finally, coefficients close to 0 mean that there is no linear correlation.
> The correlation coefficient may completely miss nonlinear relationships.
```
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
```
## Creating some new features
```
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"] / housing["total_rooms"]
housing["population_per_household"] = housing["population"] / housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
```
# Preparing the data for machine learning
All data preparation must be performed __on the training set only__; the test set must not be used.
```
housing = train_set.drop("median_house_value", axis=1)
housing_labels = train_set["median_house_value"].copy()
```
## Data cleaning
Most machine learning algorithms cannot work with missing features, so let's first create a few functions to take care of them.
You may have noticed earlier that the total_bedrooms attribute has some missing values. There are three options to fix this:
* get rid of the corresponding districts;
* get rid of the whole attribute;
* set the missing values to some value (zero, the mean, the median, etc.).
You can accomplish these easily using the DataFrame's `dropna()`, `drop()`, and `fillna()` methods:
```python
housing.dropna(subset=["total_bedrooms"]) # option 1
housing.drop("total_bedrooms", axis=1) # option 2
median = housing["total_bedrooms"].median()
housing["total_bedrooms"].fillna(median) # option 3
```
Scikit-Learn provides a handy class to take care of missing values: `SimpleImputer`. Here is how to use it: first, create a `SimpleImputer` instance, specifying the strategy used to replace each attribute's missing values (the code below uses the mean):
```python
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
# Since the imputation statistic can only be computed on numerical attributes, create a copy of the data without the text attribute ocean_proximity:
housing_num = housing.drop("ocean_proximity", axis=1)
# Fit the imputer instance to the training data using the fit() method:
imputer.fit(housing_num)
# Use the "trained" imputer to transform the training set, replacing missing values with the learned statistic:
X = imputer.transform(housing_num)
```
```
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.info()
```
## Handling text and categorical attributes
Earlier we left out the categorical attribute ocean_proximity because it is a text attribute and we cannot compute its median. __Most machine learning algorithms prefer to work with numbers anyway, so let's convert these text labels to numbers__.
### LabelEncoder
Scikit-Learn provides a transformer for this task called `LabelEncoder`:
```
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
housing_cat = housing["ocean_proximity"]
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
encoder.classes_ # <1H OCEAN is mapped to 0, INLAND to 1, and so on
```
### OneHotEncoder
Note that the output is a SciPy sparse matrix rather than a NumPy array.
> This is very useful when the categorical attribute has thousands of categories. After one-hot encoding we get a matrix with thousands of columns, and each row holds a single 1 with the rest 0s. Using lots of memory to store all those zeros would be very wasteful, so the sparse matrix only stores the locations of the nonzero elements. You can use it mostly like a normal 2D array, but if you really want to convert it to a (dense) NumPy array, just call the `toarray()` method.
```
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape( -1 , 1 ))
housing_cat_1hot
housing_cat_1hot.toarray()
```
### LabelBinarizer
We can apply both transformations in one shot using the `LabelBinarizer` class.
> Pass `sparse_output=True` to the `LabelBinarizer` constructor to get a sparse matrix instead of a dense NumPy array.
```
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
housing_cat_1hot = encoder.fit_transform(housing_cat)
housing_cat_1hot
```
## Custom transformers
Although Scikit-Learn provides many useful transformers, you will need to write your own for tasks such as custom cleanup operations or combining specific attributes. You will want your transformer to work seamlessly with Scikit-Learn components (such as pipelines), and since Scikit-Learn relies on duck typing (not inheritance), all you need to do is create a class and implement three methods: `fit()` (returning `self`), `transform()`, and `fit_transform()`.
You get the last one for free by simply adding `TransformerMixin` as a base class. In addition, if you add `BaseEstimator` as a base class (and avoid `*args` and `**kargs` in your constructor), you get two extra methods (`get_params()` and `set_params()`) that are handy for automatic hyperparameter tuning.
```
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__ (self, add_bedrooms_per_room = True): # no *args or **kargs
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self # nothing else to do
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
population_per_household = X[:, population_ix] / X[:, household_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
```
## Feature scaling
There are two common ways to get all attributes onto the same scale: min-max scaling and standardization.
1. Min-max scaling (which many people call normalization) is quite simple: values are shifted and rescaled so that they end up ranging from 0 to 1. We do this by subtracting the minimum value and dividing by the maximum minus the minimum.
> Scikit-Learn provides a transformer called `MinMaxScaler` for this. It has a `feature_range` hyperparameter that lets you change the range if you don't want 0 to 1.
2. Standardization: first subtract the mean value (so standardized values always have a zero mean), then divide by the standard deviation so that the resulting distribution has unit variance. Standardization is much less affected by outliers. For example, suppose a district's median income were mistakenly set to 100: min-max scaling would crush all the other values from 0-15 down to `0-0.15`, whereas standardization would not be much affected.
> Scikit-Learn provides a transformer called `StandardScaler` for standardization. A minimal sketch comparing the two scalers is shown below.
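A minimal sketch of the two scalers, assuming the imputed `housing_tr` DataFrame built above (the result variable names are illustrative):
```python
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Min-max scaling: every column ends up in the 0-1 range (feature_range can change this).
minmax_scaler = MinMaxScaler(feature_range=(0, 1))
housing_minmax = minmax_scaler.fit_transform(housing_tr)

# Standardization: every column ends up with zero mean and unit variance.
std_scaler = StandardScaler()
housing_std = std_scaler.fit_transform(housing_tr)

print(housing_minmax.min(axis=0), housing_minmax.max(axis=0))   # roughly 0 and 1 per column
print(housing_std.mean(axis=0).round(3), housing_std.std(axis=0).round(3))  # roughly 0 and 1
```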
## Transformation pipelines
There are many data transformation steps that need to be executed in the right order, so Scikit-Learn provides the `Pipeline` class to help with such sequences of transformations.
The Pipeline constructor takes a list of name/estimator pairs defining a sequence of steps. __All but the last estimator must be transformers__ (i.e., they must have a `fit_transform()` method).
When you call the pipeline's `fit()` method, it calls `fit_transform()` sequentially on all transformers, passing the output of each call as the parameter to the next call, until it reaches the final estimator, for which it just calls the `fit()` method.
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
```
We now have a pipeline for the numerical values, and we still need to apply the `LabelBinarizer` to the categorical values: how can we join these transformations into a single pipeline?
Scikit-Learn provides the `FeatureUnion` class for this. You give it a list of transformers (which can be entire transformer pipelines); when its `transform()` method is called, it runs each transformer's `transform()` method __in parallel__, waits for their output, then concatenates them and returns the result (and of course calling its `fit()` method calls each transformer's `fit()` method).
```
from sklearn.pipeline import FeatureUnion
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', SimpleImputer(strategy="median")),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
# ('label_binarizer', LabelBinarizer()),
('label_binarizer', CategoricalEncoder()),
])
full_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
housing_prepared = full_pipeline.fit_transform(housing)
# d = DataFrameSelector(num_attribs)
# housing_d = d.fit_transform(housing)
# imputer = SimpleImputer(strategy="median")
# housing_i = imputer.fit_transform(housing_d)
# c = CombinedAttributesAdder()
# housing_c = c.fit_transform(housing_i)
# s = StandardScaler()
# housing_s = s.fit_transform(housing_c)
# d = DataFrameSelector(cat_attribs)
# housing_d = d.fit_transform(housing)
# l = LabelBinarizer()
# housing_l = l.fit_transform(housing_d)
housing_prepared.toarray()
```
# Selecting and training a model
## Linear regression
```
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
```
Done! You now have a working linear regression model. Let's try it out on a few instances from the training set:
```
some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
some_data_prepared = full_pipeline.transform(some_data)
print("Predictions:\t", lin_reg.predict(some_data_prepared))
print("Labels:\t\t", list(some_labels))
```
## RMSE
Let's measure this regression model's RMSE on the whole training set using Scikit-Learn's `mean_squared_error` function:
```
from sklearn.metrics import mean_squared_error
housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
```
Let's try a more complex model.
## DecisionTreeRegressor
```
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
```
RMSE evaluation:
```
housing_predictions = tree_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
```
We can see that this model badly overfits the training data.
## Cross-validation
One way to evaluate the model would be to use the `train_test_split` function to split the training set into a smaller training set and a __validation set__, then train the model on the smaller training set and evaluate it on the validation set.
A better alternative is to __use Scikit-Learn's cross-validation feature__.
The following code performs K-fold cross-validation: it randomly splits the training set into ten distinct subsets called "folds", then it trains and evaluates the decision tree model 10 times, picking a different fold for evaluation each time and training on the other 9 folds. The result is an array containing the 10 evaluation scores.
> Scikit-Learn's cross-validation features expect a utility function (greater is better) rather than a cost function (lower is better), so the scoring function is actually the opposite of the MSE (i.e., a negative value), which is why the code computes -scores before calculating the square root.
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
def display_scores(scores):
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard deviation:", scores.std())
display_scores(rmse_scores)
```
## RandomForestRegressor
A random forest works by training many decision trees on random subsets of the features. Building a model on top of many other models is called ensemble learning, and it is often a great way to push ML algorithms even further.
```
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10, n_jobs=-1)
rmse_scores = np.sqrt(-scores)
display_scores(rmse_scores)
```
# Saving the model
You can use Python's built-in pickle module, or the functions below:
```python
from sklearn.externals import joblib
joblib.dump(my_model, "my_model.pkl")
# load
my_model_loaded = joblib.load("my_model.pkl")
```
# Fine-tuning the model
Let's assume you now have a shortlist of promising models. The next step is to fine-tune them.
## Grid search
One way to fine-tune is to fiddle with the hyperparameters manually until you find a good combination. This would be very tedious work, and you may not have time to explore many combinations.
Instead, you should get Scikit-Learn's `GridSearchCV` to search for you. All you need to do is tell it which hyperparameters you want it to experiment with and what values to try out, and it will evaluate all the possible combinations of hyperparameter values, using cross-validation.
For example, the following code searches for the best combination of hyperparameter values for the `RandomForestRegressor`:
```
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]
forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring='neg_mean_squared_error', n_jobs=-1)
grid_search.fit(housing_prepared, housing_labels)
```
The `param_grid` tells Scikit-Learn to first evaluate all `3 × 4 = 12` combinations of the `n_estimators` and `max_features` hyperparameter values listed in the first `dict`, then try the `2 × 3 = 6` combinations of hyperparameter values in the second `dict`, this time with the `bootstrap` hyperparameter set to `False`.
All in all, the grid search will explore `12 + 6 = 18` combinations of `RandomForestRegressor` hyperparameter values, and it will train each model five times (since we are using five-fold cross-validation). In other words, there will be `18 × 5 = 90` rounds of training! It may take quite a while, but when it is done you can get the best combination of parameters, as shown below:
```
grid_search.best_params_ # best combination of parameters
grid_search.best_estimator_ # best estimator
```
You can treat some of the data preparation steps as hyperparameters as well. For example, __the grid search can automatically find out whether or not to add a feature you were not sure about__ (e.g., using the `add_bedrooms_per_room` hyperparameter of the `CombinedAttributesAdder` transformer). It can similarly be used to automatically find the best way to handle outliers, missing features, feature selection, and more. A minimal sketch of this idea is shown below.
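As a rough, hedged sketch of that idea (assuming the `full_pipeline`, `housing`, and `housing_labels` objects defined above, and that the custom transformers support cloning since they inherit from `BaseEstimator`):
```python
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Chain the data preparation and the model so both can be searched together.
prep_and_model = Pipeline([
    ("preparation", full_pipeline),        # the FeatureUnion built earlier
    ("forest", RandomForestRegressor()),
])

# Parameter names follow the <step>__<sub-step>__<param> convention.
param_grid = {
    "preparation__num_pipeline__attribs_adder__add_bedrooms_per_room": [True, False],
    "forest__n_estimators": [10, 30],
}

grid_search_prep = GridSearchCV(prep_and_model, param_grid, cv=5,
                                scoring="neg_mean_squared_error")
grid_search_prep.fit(housing, housing_labels)
grid_search_prep.best_params_
```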
## Randomized search
The grid search approach is fine when you are exploring relatively few combinations, but when the hyperparameter search space is large it is often preferable to use `RandomizedSearchCV` instead. This class can be used in much the same way as `GridSearchCV`, but instead of trying out all possible combinations it evaluates a given number of random combinations by selecting a random value for each hyperparameter at every iteration.
This approach has two main benefits (a minimal sketch follows the list):
* If you let the randomized search run for, say, 1,000 iterations, it will explore 1,000 different values for each hyperparameter (instead of just a few values per hyperparameter, as with grid search).
* You can control the computing budget you want to allocate to the hyperparameter search simply by setting the number of iterations.
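A minimal sketch of a randomized search, assuming the `housing_prepared` and `housing_labels` arrays from above (the sampling ranges are illustrative):
```python
from scipy.stats import randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

param_distribs = {
    "n_estimators": randint(low=1, high=200),   # sample a random integer at each iteration
    "max_features": randint(low=1, high=8),
}

forest_reg = RandomForestRegressor()
rnd_search = RandomizedSearchCV(forest_reg, param_distributions=param_distribs,
                                n_iter=10, cv=5, scoring="neg_mean_squared_error",
                                random_state=42)
rnd_search.fit(housing_prepared, housing_labels)
rnd_search.best_params_
```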
# Ensemble methods
Another way to fine-tune your system is to try to combine the models that perform best. The group (or "ensemble") will often perform better than the best individual model (just as a random forest performs better than the individual decision trees it relies on), especially if the individual models make very different types of errors. A minimal sketch is shown below.
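As a hedged sketch (assuming a scikit-learn version that includes `VotingRegressor`, added in 0.21, and the prepared data from above), one simple way to combine two quite different models is to average their predictions:
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Average the predictions of a linear model and a forest; their error types tend to differ.
voting_reg = VotingRegressor([
    ("lin", LinearRegression()),
    ("forest", RandomForestRegressor()),
])
voting_reg.fit(housing_prepared, housing_labels)
voting_rmse = np.sqrt(mean_squared_error(housing_labels,
                                         voting_reg.predict(housing_prepared)))
voting_rmse
```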
# Analyzing the best models and their errors
You will often gain good insights into the problem by inspecting the best models, for example by looking at how much each feature contributes to their predictions; a minimal sketch is shown below.
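A minimal sketch using the `grid_search` from above; the forest's `feature_importances_` attribute scores each input column. The attribute-name bookkeeping (the extra and one-hot column names) is an assumption about how the preparation pipeline ordered its outputs:
```python
feature_importances = grid_search.best_estimator_.feature_importances_

# Reconstruct (approximately) the column names produced by the preparation pipeline.
extra_attribs = ["rooms_per_household", "population_per_household", "bedrooms_per_room"]
cat_one_hot_attribs = ["<1H OCEAN", "INLAND", "ISLAND", "NEAR BAY", "NEAR OCEAN"]  # assumed category order
attributes = num_attribs + extra_attribs + cat_one_hot_attribs

# Show the most informative features first.
sorted(zip(feature_importances, attributes), reverse=True)
```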
# Evaluating the system on the test set
After tweaking the models for a while, you eventually have a system that performs sufficiently well. Now is the time to evaluate the final model on the test set.
__Note: if the model does not perform well on the test set, do not go back and tweak the hyperparameters to make the numbers look good on the test set, because those improvements would be unlikely to generalize to new data.__
```
final_model = grid_search.best_estimator_
X_test = test_set.drop("median_house_value", axis=1)
y_test = test_set["median_house_value"].copy()
# Prepare (clean) the data
X_test_prepared = full_pipeline.transform(X_test)
# Predict
final_predictions = final_model.predict(X_test_prepared)
# RMSE
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
```
# Recommending Movies: Retrieval
Real-world recommender systems are often composed of two stages:
1. The retrieval stage is responsible for selecting an initial set of hundreds of candidates from all possible candidates. The main objective of this model is to efficiently weed out all candidates that the user is not interested in. Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient.
2. The ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates.
In this tutorial, we're going to focus on the first stage, retrieval. If you are interested in the ranking stage, have a look at our [ranking](basic_ranking) tutorial.
Retrieval models are often composed of two sub-models:
1. A query model computing the query representation (normally a fixed-dimensionality embedding vector) using query features.
2. A candidate model computing the candidate representation (an equally-sized vector) using the candidate features
The outputs of the two models are then multiplied together to give a query-candidate affinity score, with higher scores expressing a better match between the candidate and the query.
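For intuition, here is a tiny sketch (with made-up embeddings, not part of the tutorial's code) of how such an affinity score is computed:
```python
import tensorflow as tf

# Hypothetical 4-dimensional embeddings: one query (user) and two candidates (movies).
query_embedding = tf.constant([[0.1, 0.3, -0.2, 0.5]])
candidate_embeddings = tf.constant([[0.2, 0.1, 0.0, 0.4],
                                    [-0.3, 0.2, 0.1, -0.1]])

# Affinity scores are the dot products between the query and every candidate.
scores = tf.matmul(query_embedding, candidate_embeddings, transpose_b=True)
print(scores)  # shape (1, 2); the higher score indicates the better match
```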
In this tutorial, we're going to build and train such a two-tower model using the Movielens dataset.
We're going to:
1. Get our data and split it into a training and test set.
2. Implement a retrieval model.
3. Fit and evaluate it.
4. Export it for efficient serving by building an approximate nearest neighbours (ANN) index.
## The dataset
The Movielens dataset is a classic dataset from the [GroupLens](https://grouplens.org/datasets/movielens/) research group at the University of Minnesota. It contains a set of ratings given to movies by a set of users, and is a workhorse of recommender system research.
The data can be treated in two ways:
1. It can be interpreted as expressing which movies the users watched (and rated), and which they did not. This is a form of implicit feedback, where users' watches tell us which things they prefer to see and which they'd rather not see.
2. It can also be seen as expressing how much the users liked the movies they did watch. This is a form of explicit feedback: given that a user watched a movie, we can tell roughly how much they liked it by looking at the rating they gave.
In this tutorial, we are focusing on a retrieval system: a model that predicts a set of movies from the catalogue that the user is likely to watch. Often, implicit data is more useful here, and so we are going to treat Movielens as an implicit system. This means that every movie a user watched is a positive example, and every movie they have not seen is an implicit negative example.
## Imports
Let's first get our imports out of the way.
```
import os
import pprint
import tempfile
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
```
## Preparing the dataset
Let's first have a look at the data.
We use the MovieLens dataset from [Tensorflow Datasets](https://www.tensorflow.org/datasets). Loading `movie_lens/100k_ratings` yields a `tf.data.Dataset` object containing the ratings data and loading `movie_lens/100k_movies` yields a `tf.data.Dataset` object containing only the movies data.
Note that since the MovieLens dataset does not have predefined splits, all data are under the `train` split.
```
# Ratings data.
ratings = tfds.load("movie_lens/100k-ratings", split="train")
# Features of all the available movies.
movies = tfds.load("movie_lens/100k-movies", split="train")
```
The ratings dataset returns a dictionary of movie id, user id, the assigned rating, timestamp, movie information, and user information:
```
for x in ratings.take(1).as_numpy_iterator():
pprint.pprint(x)
```
The movies dataset contains the movie id, movie title, and data on what genres it belongs to. Note that the genres are encoded with integer labels.
```
for x in movies.take(1).as_numpy_iterator():
pprint.pprint(x)
```
In this example, we're going to focus on the ratings data. Other tutorials explore how to use the movie information data as well to improve the model quality.
We keep only the `user_id`, and `movie_title` fields in the dataset.
```
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
})
movies = movies.map(lambda x: x["movie_title"])
```
To fit and evaluate the model, we need to split it into a training and evaluation set. In an industrial recommender system, this would most likely be done by time: the data up to time $T$ would be used to predict interactions after $T$.
In this simple example, however, let's use a random split, putting 80% of the ratings in the train set, and 20% in the test set.
```
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
```
Let's also figure out unique user ids and movie titles present in the data.
This is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables.
```
movie_titles = movies.batch(1_000)
user_ids = ratings.batch(1_000_000).map(lambda x: x["user_id"])
unique_movie_titles = np.unique(np.concatenate(list(movie_titles)))
unique_user_ids = np.unique(np.concatenate(list(user_ids)))
unique_movie_titles[:10]
```
## Implementing a model
Choosing the architecture of our model is a key part of modelling.
Because we are building a two-tower retrieval model, we can build each tower separately and then combine them in the final model.
### The query tower
Let's start with the query tower.
The first step is to decide on the dimensionality of the query and candidate representations:
```
embedding_dimension = 32
```
Higher values will correspond to models that may be more accurate, but will also be slower to fit and more prone to overfitting.
The second is to define the model itself. Here, we're going to use Keras preprocessing layers to first convert user ids to integers, and then convert those to user embeddings via an `Embedding` layer. Note that we use the list of unique user ids we computed earlier as a vocabulary:
_Note: Requires TF 2.3.0._
```
user_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
# We add an additional embedding to account for unknown tokens.
tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)
])
```
A simple model like this corresponds exactly to a classic [matrix factorization](https://ieeexplore.ieee.org/abstract/document/4781121) approach. While defining a subclass of `tf.keras.Model` for this simple model might be overkill, we can easily extend it to an arbitrarily complex model using standard Keras components, as long as we return an `embedding_dimension`-wide output at the end.
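As a quick sketch of that idea (not used in the rest of this tutorial; the layer sizes are arbitrary), a deeper query tower could stack a hidden `Dense` layer on top of the embedding, as long as the final layer stays `embedding_dimension`-wide so it matches the candidate tower:
```
deeper_user_model = tf.keras.Sequential([
  tf.keras.layers.experimental.preprocessing.StringLookup(
      vocabulary=unique_user_ids, mask_token=None),
  tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension),
  # An arbitrary hidden layer; any standard Keras components work here.
  tf.keras.layers.Dense(64, activation="relu"),
  # The output must remain embedding_dimension-wide to match the candidate tower.
  tf.keras.layers.Dense(embedding_dimension)
])
```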
### The candidate tower
We can do the same with the candidate tower.
```
movie_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)
])
```
### Metrics
In our training data we have positive (user, movie) pairs. To figure out how good our model is, we need to compare the affinity score that the model calculates for this pair to the scores of all the other possible candidates: if the score for the positive pair is higher than for all other candidates, our model is highly accurate.
To do this, we can use the `tfrs.metrics.FactorizedTopK` metric. The metric has one required argument: the dataset of candidates that are used as implicit negatives for evaluation.
In our case, that's the `movies` dataset, converted into embeddings via our movie model:
```
metrics = tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(movie_model)
)
```
### Loss
The next component is the loss used to train our model. TFRS has several loss layers and tasks to make this easy.
In this instance, we'll make use of the `Retrieval` task object: a convenience wrapper that bundles together the loss function and metric computation:
```
task = tfrs.tasks.Retrieval(
metrics=metrics
)
```
The task itself is a Keras layer that takes the query and candidate embeddings as arguments, and returns the computed loss: we'll use that to implement the model's training loop.
### The full model
We can now put it all together into a model. TFRS exposes a base model class (`tfrs.models.Model`) which streamlines building models: all we need to do is to set up the components in the `__init__` method, and implement the `compute_loss` method, taking in the raw features and returning a loss value.
The base model will then take care of creating the appropriate training loop to fit our model.
```
class MovielensModel(tfrs.Model):
def __init__(self, user_model, movie_model):
super().__init__()
self.movie_model: tf.keras.Model = movie_model
self.user_model: tf.keras.Model = user_model
self.task: tf.keras.layers.Layer = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# We pick out the user features and pass them into the user model.
user_embeddings = self.user_model(features["user_id"])
# And pick out the movie features and pass them into the movie model,
# getting embeddings back.
positive_movie_embeddings = self.movie_model(features["movie_title"])
# The task computes the loss and the metrics.
return self.task(user_embeddings, positive_movie_embeddings)
```
The `tfrs.Model` base class is simply a convenience class: it allows us to compute both training and test losses using the same method.
Under the hood, it's still a plain Keras model. You could achieve the same functionality by inheriting from `tf.keras.Model` and overriding the `train_step` and `test_step` functions (see [the guide](https://keras.io/guides/customizing_what_happens_in_fit/) for details):
```
class NoBaseClassMovielensModel(tf.keras.Model):
def __init__(self, user_model, movie_model):
super().__init__()
self.movie_model: tf.keras.Model = movie_model
self.user_model: tf.keras.Model = user_model
self.task: tf.keras.layers.Layer = task
def train_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
# Set up a gradient tape to record gradients.
with tf.GradientTape() as tape:
# Loss computation.
user_embeddings = self.user_model(features["user_id"])
positive_movie_embeddings = self.movie_model(features["movie_title"])
loss = self.task(user_embeddings, positive_movie_embeddings)
# Handle regularization losses as well.
regularization_loss = sum(self.losses)
total_loss = loss + regularization_loss
gradients = tape.gradient(total_loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
metrics = {metric.name: metric.result() for metric in self.metrics}
metrics["loss"] = loss
metrics["regularization_loss"] = regularization_loss
metrics["total_loss"] = total_loss
return metrics
def test_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:
# Loss computation.
user_embeddings = self.user_model(features["user_id"])
positive_movie_embeddings = self.movie_model(features["movie_title"])
loss = self.task(user_embeddings, positive_movie_embeddings)
# Handle regularization losses as well.
regularization_loss = sum(self.losses)
total_loss = loss + regularization_loss
metrics = {metric.name: metric.result() for metric in self.metrics}
metrics["loss"] = loss
metrics["regularization_loss"] = regularization_loss
metrics["total_loss"] = total_loss
return metrics
```
In these tutorials, however, we stick to using the `tfrs.Model` base class to keep our focus on modelling and abstract away some of the boilerplate.
## Fitting and evaluating
After defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.
Let's first instantiate the model.
```
model = MovielensModel(user_model, movie_model)
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
```
Then shuffle, batch, and cache the training and evaluation data.
```
cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()
```
Then train the model:
```
model.fit(cached_train, epochs=3)
```
As the model trains, the loss is falling and a set of top-k retrieval metrics is updated. These tell us whether the true positive is in the top-k retrieved items from the entire candidate set. For example, a top-5 categorical accuracy metric of 0.2 would tell us that, on average, the true positive is in the top 5 retrieved items 20% of the time.
Note that, in this example, we evaluate the metrics during training as well as evaluation. Because this can be quite slow with large candidate sets, it may be prudent to turn metric calculation off in training, and only run it in evaluation.
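One way to do that (a sketch, not used in the rest of this tutorial; it assumes the `Retrieval` task accepts a `compute_metrics` argument, which current TFRS versions do) is to skip metric computation whenever `training=True`:
```
class MovielensModelNoTrainMetrics(tfrs.Model):
  def __init__(self, user_model, movie_model):
    super().__init__()
    self.movie_model: tf.keras.Model = movie_model
    self.user_model: tf.keras.Model = user_model
    self.task: tf.keras.layers.Layer = task
  def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
    user_embeddings = self.user_model(features["user_id"])
    positive_movie_embeddings = self.movie_model(features["movie_title"])
    # Only compute the (expensive) factorized top-K metrics at evaluation time.
    return self.task(user_embeddings, positive_movie_embeddings,
                     compute_metrics=not training)
```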
Finally, we can evaluate our model on the test set:
```
model.evaluate(cached_test, return_dict=True)
```
Test set performance is much worse than training performance. This is due to two factors:
1. Our model is likely to perform better on the data that it has seen, simply because it can memorize it. This overfitting phenomenon is especially strong when models have many parameters. It can be mitigated by model regularization and by using user and movie features that help the model generalize better to unseen data.
2. The model is re-recommending some of users' already watched movies. These known-positive watches can crowd test movies out of the top-K recommendations.
The second phenomenon can be tackled by excluding previously seen movies from test recommendations. This approach is relatively common in the recommender systems literature, but we don't follow it in these tutorials. If not recommending past watches is important, we should expect appropriately specified models to learn this behaviour automatically from past user history and contextual information. Additionally, it is often appropriate to recommend the same item multiple times (say, an evergreen TV series or a regularly purchased item).
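If you did want to apply such a filter, a minimal post-processing sketch could drop a user's training-set watches from whatever list of recommended titles the model returns (`seen_titles` and `filter_seen` are illustrative names, not part of TFRS):
```
# Build a map from user id to the set of titles seen in the training data.
seen_titles = {}
for row in train.as_numpy_iterator():
  seen_titles.setdefault(row["user_id"], set()).add(row["movie_title"])

def filter_seen(user_id, recommended_titles):
  # Drop titles the user already watched in the training set.
  watched = seen_titles.get(user_id, set())
  return [title for title in recommended_titles if title not in watched]
```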
## Making predictions
Now that we have a model, we would like to be able to make predictions. We can use the `tfrs.layers.ann.BruteForce` layer to do this.
```
# Create a model that takes in raw query features and recommends movies
# out of the entire movies dataset.
index = tfrs.layers.ann.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get recommendations.
_, titles = index(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")
```
Of course, the `BruteForce` layer is going to be too slow to serve a model with many possible candidates. The following section shows how to speed this up by using an approximate retrieval index.
## Model serving
After the model is trained, we need a way to deploy it.
In a two-tower retrieval model, serving has two components:
- a serving query model, taking in features of the query and transforming them into a query embedding, and
- a serving candidate model. This most often takes the form of an approximate nearest neighbours (ANN) index which allows fast approximate lookup of candidates in response to a query produced by the query model.
### Exporting a query model to serving
Exporting the query model is easy: we can either serialize the Keras model directly, or export it to a `SavedModel` format to make it possible to serve using [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving).
To export to a `SavedModel` format, we can do the following:
```
model_dir = './models'
!mkdir $model_dir
# Export the query model.
path = '{}/query_model'.format(model_dir)
model.user_model.save(path)
# Load the query model
loaded = tf.keras.models.load_model(path, compile=False)
query_embedding = loaded(tf.constant(["10"]))
print(f"Query embedding: {query_embedding[0, :3]}")
```
### Building a candidate ANN index
Exporting candidate representations is more involved. Firstly, we want to pre-compute them to make sure serving is fast; this is especially important if the candidate model is computationally intensive (for example, if it has many or wide layers; or uses complex representations for text or images). Secondly, we would like to take the precomputed representations and use them to construct a fast approximate retrieval index.
We can use [Annoy](https://github.com/spotify/annoy) to build such an index.
Annoy isn't included in the base TFRS package. To install it, run:
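```
# Install the Annoy package from PyPI.
!pip install annoy
```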
We can now create the index object.
```
from annoy import AnnoyIndex
index = AnnoyIndex(embedding_dimension, "dot")
```
Then take the candidate dataset and transform its raw features into embeddings using the movie model:
```
print(movies)
movie_embeddings = movies.enumerate().map(lambda idx, title: (idx, title, model.movie_model(title)))
print(movie_embeddings.as_numpy_iterator().next())
```
And then index the (movie id, movie embedding) pairs into our Annoy index:
```
%%time
movie_id_to_title = dict((idx, title) for idx, title, _ in movie_embeddings.as_numpy_iterator())
# Annoy accepts only scalar (id, embedding) pairs, so we add them one element at a time.
for movie_id, _, movie_embedding in movie_embeddings.as_numpy_iterator():
index.add_item(movie_id, movie_embedding)
# Build a 10-tree ANN index.
index.build(10)
```
We can then retrieve nearest neighbours:
```
for row in test.batch(1).take(3):
query_embedding = model.user_model(row["user_id"])[0]
candidates = index.get_nns_by_vector(query_embedding, 3)
print(f"User ID: {row['user_id']}, Candidates: {[movie_id_to_title[x] for x in candidates]}.")
print(type(candidates))
```
## Next steps
This concludes the retrieval tutorial.
To expand on what is presented here, have a look at:
1. Learning multi-task models: jointly optimizing for ratings and clicks.
2. Using movie metadata: building a more complex movie model to alleviate cold-start.
## Instructions
Please make a copy and rename it with your name (ex: Proj6_Ilmi_Yoon). All grading points should be explored in the notebook but some can be done in a separate pdf file.
*Graded questions will be listed with "Q:" followed by the corresponding points.*
You will be submitting **a pdf** file containing **the url of your own proj6.**
---
**Hypothesis testing**
===
**Outline**
At the end of this week, you will be a pro at:
- **hypothesis testing**
* is there something interesting/meaningful going on in my data?
- one-sample t-test
- two-sample t-test
- **correcting for multiple testing**
* doing thousands of hypothesis tests at a time will increase your likelihood of incorrect conclusions
* you'll learn how to account for that
- **false discovery rates**
* you could be a perfectionist ("even one wrong conclusion is the worst"), aka family-wise error rate (FWER)
* or become a pragmatist ("of my significant discoveries, I expect x% of them to be false positives."), aka false discovery rate (FDR)
- **permutation tests**
* if your assumptions about your data are wrong, you may over/underestimate your confidence in your conclusions
* assume as little as possible about the data with a permutation test
**Examples**
In class, we will talk about 3 examples:
- confidence intervals
- how much time do Americans spend on average per day on Netflix?
- one-sample t-test
- do Americans spend more time on average per day on Netflix compared to before the pandemic?
- two-sample t-test
- does exercise affect baseline blood pressure?
**Your project**
- RNA sequencing: which genes differentiate the different immune cells in your blood?
- two-sample t-test
- multiple testing correction
**How do you make the best of this week?**
- start seeing all statistics reported around you, and think of how they relate to what we have learned.
- do rigorous statistics in your work from now on
**LET'S BEGIN!**
===============================================================
```
#import python packages
import numpy as np
import scipy as sp
import scipy.stats as st
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
rng=np.random.RandomState(1234) #this will ensure the reproducibility of the notebook
```
**EXAMPLE I:**
===
How much time do subscribers spend on average each day on Netflix?
--
Example discussed in class (Lecture 1). The data we are working with are simulated, but the mean time spent on Netflix is inspired by https://www.pcmag.com/news/us-netflix-subscribers-watch-32-hours-and-use-96-gb-of-data-per-day (average of 3.2 hours for subscribers).
```
#Summarizing data
#================
population=np.array([1,1.8,2,3.2,3.3,4,4,4.2])
our_sample=np.array([2,3.2,4])
#means
population_mean=np.mean(population)
print('Population mean',population_mean.round(2))
sample_mean=np.mean(our_sample)
print('- Sample mean',sample_mean.round(2))
#standard deviations
population_sd=np.std(population)
print('Population standard deviation',population_sd.round(2))
#biased sample sd
biased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/our_sample.shape[0])
print('- Biased sample standard deviation',biased_sample_sd.round(2))
#unbiased sample sd
unbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/(our_sample.shape[0]-1))
print('- Unbiased sample standard deviation',unbiased_sample_sd.round(2))
plt.hist(population,range(0,6),color='black')
plt.yticks([0,1,2])
plt.xlabel('Number of hours spent\nper day on Netflix')
plt.ylabel('Number of observations')
plt.show()
#larger example
MEAN_NETFLIX=3.2
SD_NETFLIX=1
population=rng.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=1000)
population[population<0]=0
our_sample=population[0:100]
#means
population_mean=np.mean(population)
print('Population mean',population_mean.round(2))
sample_mean=np.mean(our_sample)
print('- Sample mean',sample_mean.round(2))
#standard deviations
population_sd=np.std(population)
print('Population standard deviation',population_sd.round(2))
#biased sample sd
biased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/our_sample.shape[0])
print('- Biased sample standard deviation',biased_sample_sd.round(2))
#unbiased sample sd
unbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/(our_sample.shape[0]-1))
print('- Unbiased sample standard deviation',unbiased_sample_sd.round(2))
#representing sets of datapoints
#===============================
#histograms
plt.hist(population,[x*0.6 for x in range(10)],color='lightgray',edgecolor='black')
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Number of respondents',fontsize=15)
plt.xlim(0,6)
plt.show()
plt.hist(our_sample,[x*0.6 for x in range(10)],color='lightblue',edgecolor='black')
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Number of respondents',fontsize=15)
plt.xlim(0,6)
plt.show()
#densities
sns.distplot(population, hist=True, kde=True,
bins=[x*0.6 for x in range(10)], color = 'black',
hist_kws={'edgecolor':'black','color':'black'},
kde_kws={'linewidth': 4})
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Density',fontsize=15)
plt.xlim(0,6)
plt.show()
sns.distplot(our_sample, hist=True, kde=True,
bins=[x*0.6 for x in range(10)], color = 'blue',
hist_kws={'edgecolor':'black','color':'lightblue'},
kde_kws={'linewidth': 4})
plt.xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plt.ylabel('Density',fontsize=15)
plt.xlim(0,6)
plt.show()
#put both data in the same plot
fig,plots=plt.subplots(1)
sns.distplot(population, hist=False, kde=True,
bins=[x*0.6 for x in range(10)], color = 'black',
hist_kws={'edgecolor':'black','color':'black'},
kde_kws={'linewidth': 4},ax=plots)
plots.set_xlim(0,6)
sns.distplot(our_sample, hist=False, kde=True,
bins=[x*0.6 for x in range(10)], color = 'blue',
hist_kws={'edgecolor':'black','color':'black'},
kde_kws={'linewidth': 4},ax=plots)
plots.set_xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plots.set_ylabel('Density',fontsize=15)
x = plots.lines[-1].get_xdata()
y = plots.lines[-1].get_ydata()
plots.fill_between(x, 0, y, where=x < 2, color='lightblue', alpha=0.3)
plt.xlim(0,6)
plt.show()
#put both data in the same plot
fig,plots=plt.subplots(1)
sns.distplot(population, hist=False, kde=True,
bins=[x*0.6 for x in range(10)], color = 'black',
hist_kws={'edgecolor':'black','color':'black'},
kde_kws={'linewidth': 4},ax=plots)
plots.set_xlim(0,6)
x = plots.lines[-1].get_xdata()
y = plots.lines[-1].get_ydata()
plots.fill_between(x, 0, y, where=(x < 4) & (x>2), color='gray', alpha=0.3)
plt.xlim(0,6)
plots.set_xlabel('Number of hours spent on Netflix\nper day',fontsize=15)
plots.set_ylabel('Density',fontsize=15)
plt.show()
np.multiply((population<=4),(population>=2)).sum()/population.shape[0]
#brute force confidence interval
N_POPULATION=10000
N_SAMPLE=1000
population=np.random.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=N_POPULATION)
population[population<0]=0
sample_means=[]
for i in range(N_SAMPLE):
sample_i=np.random.choice(population,10)
mean_i=np.mean(sample_i)
sample_means.append(mean_i)
sample_means=np.array(sample_means)
#sd of the mean
means_mean=np.mean(sample_means)
means_sd=np.std(sample_means)
print('Mean of the means',means_mean)
print('SEM (SD of the means)',means_sd)
plt.hist(sample_means,100,color='red')
plt.xlabel('Number of hours spent on Netflix\nper day\nMEANS OF SAMPLES')
plt.xlim(0,6)
plt.axvline(x=means_mean,color='black')
plt.axvline(x=means_mean-means_sd,color='black',linestyle='--')
plt.axvline(x=means_mean+means_sd,color='black',linestyle='--')
plt.show()
#compute what fraction of points are within 1 means_sd from means_mean
within_1sd=0
within_2sd=0
for i in range(sample_means.shape[0]):
m=sample_means[i]
if m>=(means_mean-means_sd) and m<=(means_mean+means_sd):
within_1sd+=1
if m>=(means_mean-2*means_sd) and m<=(means_mean+2*means_sd):
within_2sd+=1
print('within 1 means SD:',within_1sd/sample_means.shape[0])
print('within 2 means SD:',within_2sd/sample_means.shape[0])
from scipy import stats
print('SEM (SD of the means), empirically calculated',means_sd.round(2))
print('SEM computed in python',stats.sem(sample_i).round(2))
#one sample t test in python
from scipy.stats import ttest_1samp
MEAN_NETFLIX=3.2
SD_NETFLIX=1
population=rng.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=1000)
population[population<0]=0
our_sample=population[0:10]
print(our_sample.round(2))
print(our_sample.mean())
print(our_sample.std())
TEST_VALUE=1.5
t, pvalue = ttest_1samp(our_sample, popmean=TEST_VALUE)
print('t', t.round(2))
print('p-value', pvalue.round(6))
#confidence intervals
#=====================
#take 100 samples
#compute their confidence intervals
#plot them
import scipy.stats as st
N_SAMPLE=200
for CONFIDENCE in [0.9,0.98,0.999999]:
population=rng.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=N_POPULATION)
population[population<0]=0
sample_means=[]
ci_lows=[]
ci_highs=[]
for i in range(N_SAMPLE):
sample_i=np.random.choice(population,10)
mean_i=np.mean(sample_i)
ci=st.t.interval(alpha=CONFIDENCE,
df=sample_i.shape[0]-1, loc=mean_i, scale=st.sem(sample_i))
ci_lows.append(ci[0])
ci_highs.append(ci[1])
sample_means.append(mean_i)
data=pd.DataFrame({'mean':sample_means,'ci_low':ci_lows,'ci_high':ci_highs})
data=data.sort_values(by='mean')
data.index=range(N_SAMPLE)
print(data)
for i in range(N_SAMPLE):
color='gray'
if MEAN_NETFLIX>data['ci_high'][i] or MEAN_NETFLIX<data['ci_low'][i]:
color='red'
plt.plot((data['ci_low'][i],data['ci_high'][i]),(i,i),color=color)
#plt.scatter(data['mean'],range(N_SAMPLE),color='black')
plt.axvline(x=MEAN_NETFLIX,color='black',linestyle='--')
plt.xlabel('Mean time spent on Netflix')
plt.ylabel('Sampling iteration')
plt.xlim(0,10)
plt.show()
#confidence intervals
#=====================
#take 100 samples
#compute their confidence intervals
#plot them
import scipy.stats as st
N_SAMPLE=200
for CONFIDENCE in [0.9,0.98,0.999999]:
population=rng.normal(loc=MEAN_NETFLIX,
scale=SD_NETFLIX,
size=N_POPULATION)
population[population<0]=0
sample_means=[]
ci_lows=[]
ci_highs=[]
for i in range(N_SAMPLE):
sample_i=np.random.choice(population,100)
mean_i=np.mean(sample_i)
ci=st.t.interval(alpha=CONFIDENCE,
df=sample_i.shape[0]-1, loc=mean_i, scale=st.sem(sample_i))
ci_lows.append(ci[0])
ci_highs.append(ci[1])
sample_means.append(mean_i)
data=pd.DataFrame({'mean':sample_means,'ci_low':ci_lows,'ci_high':ci_highs})
data=data.sort_values(by='mean')
data.index=range(N_SAMPLE)
print(data)
for i in range(N_SAMPLE):
color='gray'
if MEAN_NETFLIX>data['ci_high'][i] or MEAN_NETFLIX<data['ci_low'][i]:
color='red'
plt.plot((data['ci_low'][i],data['ci_high'][i]),(i,i),color=color)
#plt.scatter(data['mean'],range(N_SAMPLE),color='black')
plt.axvline(x=MEAN_NETFLIX,color='black',linestyle='--')
plt.xlabel('Mean time spent on Netflix')
plt.ylabel('Sampling iteration')
plt.xlim(0,10)
plt.show()
```
**EXAMPLE II:**
===
Is exercise associated with lower baseline blood pressure?
--
We will simulate data with control mean 120 mmHg, treatment mean 116 mmHg and population sd 5 for both conditions.
```
#simulate dataset
#=====================
def sample_condition_values(condition_mean,
condition_var,
condition_N,
condition=''):
condition_values=np.random.normal(loc = condition_mean,
scale=condition_var,
size = condition_N)
data_condition_here=pd.DataFrame({'BP':condition_values,
'condition':condition})
return(data_condition_here)
#=========================================================================
N_per_condition=10
ctrl_mean=120
test_mean=116
v=5
np.random.seed(1)
data_ctrl=sample_condition_values(condition_mean=ctrl_mean,
condition_N=N_per_condition,
condition_var=v,
condition='couch')
data_test=sample_condition_values(condition_mean=test_mean,
condition_N=N_per_condition,
condition_var=v,
condition='exercise')
data=pd.concat([data_ctrl,data_test],axis=0)
print(data)
#visualize data
#=====================
sns.catplot(x='condition',y='BP',data=data,height=2,aspect=1.5)
plt.ylabel('BP')
plt.show()
sns.catplot(data=data,x='condition',y='BP',
jitter=1,
)
plt.show()
sns.catplot(data=data,x='condition',y='BP',kind='box',)
plt.show()
sns.catplot(data=data,x='condition',y='BP',kind='violin',)
plt.show()
fig,plots=plt.subplots(1)
sns.boxplot(data=data,x='condition',y='BP',
ax=plots,
)
sns.stripplot(data=data,x='condition',y='BP',
jitter=1,
ax=plots,alpha=0.25,
)
plt.show()
```
In our hypothesis test, we ask if these two groups differ significantly from each other. It's a bit hard to say just from looking at the plot.
This is where statistics comes in. It's time to:
*3. Think about how much the data surprise you, given your null model*
We'll convert this step to some math, as follows:
**Step 1. summarize the difference between the groups with a number.**
This is called a **test statistic**
"How to define the test statistic?" you say?
The world is your oyster. You are free to choose anything you wish.
(Later, we'll see that some choices come with nice math, which is why they are typically used. But a test statistic could be anything)
To demonstrate this intuition, let's come up with a very basic test statistic. For example, let's compute the difference between the BP in the 2 groups.
```
mean_ctrl=np.mean(data[data['condition']=='couch']['BP'])
mean_test=np.mean(data[data['condition']=='exercise']['BP'])
test_stat=mean_test-mean_ctrl
print('test statistic =',test_stat)
```
What is this number telling us? Is the BP significantly different between the 2 conditions? It's impossible to say looking at only this number.
We have to ask ourselves, well, what did you expect?
This takes us to the next step.
**Step 2. think about what the test statistic would be if in reality there were no difference between the 2 groups. It will be a distribution, not just a single number, because you would expect to see some variation in the test statistic whenever you do an experiment, due to sampling noise, and due to variation in the population.**
Here is where the wasteful part comes in. You go and repeat the measurement on 1000 different couch groups. Then, for each of these, you compute the same test statistic: the difference between the mean of that sample and the mean of your original couch group.
```
np.random.seed(1)
data_exp2=sample_condition_values(condition_mean=ctrl_mean,
condition_N=N_per_condition,
condition_var=v,
condition='control_0')
for i in range(1,1001):
data_exp2=pd.concat([data_exp2,sample_condition_values(condition_mean=ctrl_mean,
condition_N=N_per_condition,
condition_var=v,
condition='control_'+str(i))])
print(data_exp2)
#now, let's plot the distribution of the test statistic under the null hypothesis
#get mean of each control
exp2_means=data_exp2.groupby('condition').mean()
print(exp2_means.head())
null_test_stats=exp2_means-ctrl_mean
plt.hist(np.array(null_test_stats).flatten(),20,color='black')
plt.xlabel('Test statistic')
plt.axvline(x=test_stat,color='red')
null_test_stats
for i in range(null_test_stats.shape[0]):
if null_test_stats['BP'][i] > 4:
print(null_test_stats.index[i], null_test_stats['BP'][i])
for i in range(null_test_stats.shape[0]):
if null_test_stats['BP'][i]<-4:
print(null_test_stats.index[i],null_test_stats['BP'][i])
sns.catplot(data=data_exp2,x='condition',y='BP',order=['control_0',
'control_1','control_2','control_3',
'control_4','control_5',#'control_6',
#'control_7','control_8','control_9','control_10',
'control_179','control_161',],
color='black',#kind='box',
aspect=2,height=2)
x=5
plt.hist(np.array(null_test_stats[1:2]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats[1:3]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats[1:4]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats[1:5]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats[1:6]).flatten(),range(-4,4),color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
plt.ylim(0,5)
plt.show()
#plt.axvline(x=t_stat,color='red')
plt.hist(np.array(null_test_stats).flatten(),20,color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
#plt.ylim(0,5)
plt.show()
plt.hist(np.array(null_test_stats).flatten(),20,color='black',)
plt.xlabel('Test statistic (t)')
plt.xlim(-x,x)
#plt.ylim(0,5)
plt.axvline(x=test_stat,color='red')
plt.show()
```
In black we have the distribution of test statistics we obtained from the 1000 experiments measuring couch participants. In other words, this is the distribution of the test statistic under the null hypothesis.
The red line shows the test statistic from our comparison of exercise group vs with couch group.
**Is our difference in blood pressure significant?**
If the null is true, in other words, if in reality there is no difference between couch and exercise, what is the probability of seeing such an extreme difference between their means (in other words, such an extreme test statistic)?
We can compute this from the plot above. We go to our null distribution, and count how many times we got a more extreme test statistic in our null experiment than the one we got for the couch vs exercise comparison.
```
count_more_extreme=int(np.sum(np.abs(null_test_stats)>=np.abs(test_stat)))
print(count_more_extreme,'times we got a more extreme test statistic under the null')
print(count_more_extreme / 1000,'fraction of the time we got a more extreme test statistic under the null')
```
What we computed above is called a **p-value**. Now, this is a very often misunderstood term, so let's think about it deeply.
Deeply.
Deeply.
About what it is, what it is not.
**P-values**
--
To remember what a p-value is, you decide to make a promise to me and more importantly yourself, that from now on, any sentence in which you mention a p-value will start with "if the null were true, ...".
**A p-value IS:**
- if the null were true, the probability of observing something as extreme or more extreme than your test statistic.
- it's the quantification of your "whoa!", given your null hypothesis. More "whoa!" = smaller p-value.
**A p-value is NOT:**
- the probability that the null hypothesis is wrong. We don't know the probability of that. That's sort of up to the universe.
- the probability that the null hypothesis is wrong. This is so important, that it's worth putting it on the list twice.
Why is this distinction so important?
First, because we can be very good at estimating what happens under the null. It's much more challenging to think about other scenarios. For instance, if you needed to make a model for the BP being different between 2 conditions, how different do you expect them to be? Is the average couch group at 120 and the exercise at 110? Or the couch at 125 and exercise at 130? Do you make a model for each option and grow old estimating all possible models?
Second, it's also a matter of being conservative. It's common courtesy to assume the 2 conditions are the same. I expect you to come to me and convince me that it would be REALLY unlikely to observe what we have just seen given the null, to make it worthwhile my time. It would be weird to just assume the BP is different between the 2 conditions and have to prove that they are the same. We'd be swimming in false positives.
**Statistical significance**
Now that we have a p-value, you need to ask yourself where you set a cutoff for something being unlikely enough to be "significant", or worth your attention. Usually, that's 0.05, or 0.01, or 0.1. Yes, essentially it's a somewhat arbitrary small number.
I reiterate: this does not mean that the exercise group is different from the couch group for sure. If you were to do the experiment 1000 times with groups of participants assigned to "couch", in a small subset of your experiments you'd get a test statistic as or more extreme than the one we found in our experiment comparing exercise vs couch. But given that it's unlikely to get this result under the null hypothesis, you call it a significant difference, one that makes you think.
In summary:
- you look at your p-value (and you think about the probability of getting your result under the null, since you need to include those words in any sentence with p-values)
- compare it with your significance threshold
- if it is less than that threshold, you call the difference in BP between the exercise and couch groups significant.
**Technical note: one-tailed vs two-tailed tests**
*Depending on what you believe would be the possible alternative to your null hypothesis (conveniently called the alternative hypothesis), you may compute the p-value differently.*
*Specifically, in our example above, we computed the p-value by asking:*
- *if the null were true, what is the probability of obtaining a test statistic as extreme or more extreme than the one we've seen. That means we asked whether there were test statistics larger than our test statistic, or lower than minus our test statistic. This is called a two-tailed test, because we looked at both sides (both tails) of the distribution under the null.*
*If your alternative hypothesis were that the treatment specifically decreases baseline blood pressure, you'd compute the p-value differently, as you'd look under the null at only what fraction of the time you've seen a test statistic lower than the one we've seen. This is a one-tailed test.*
*Of course, this is not an invitation to use one-tailed tests to try to get more significant p-values, since by definition the p-values from a one-tailed test will be smaller than those for a two-tailed test. You should define your alternative hypothesis based on deep thought. I personally like to be as conservative as possible, and as such strongly prefer two-tailed tests.*
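*As a concrete sketch (assuming a scipy version recent enough that `ttest_ind` accepts an `alternative` argument; with older versions you can halve the two-sided p-value when the sign of t matches your alternative), here is how the two choices differ on the blood pressure data:*
```
from scipy.stats import ttest_ind
# Two-tailed: the alternative is "the means differ in either direction".
t_two, p_two = ttest_ind(data[data['condition']=='exercise']['BP'],
                         data[data['condition']=='couch']['BP'])
# One-tailed: the alternative is "exercise has a LOWER mean BP than couch".
t_one, p_one = ttest_ind(data[data['condition']=='exercise']['BP'],
                         data[data['condition']=='couch']['BP'],
                         alternative='less')
print('two-tailed p-value', round(float(p_two), 4))
print('one-tailed p-value', round(float(p_one), 4))
```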
**Hypothesis testing in a nutshell**
- come up with a **null hypothesis**.
* In our case: baseline blood pressure does not differ between the couch and exercise groups.
- collect some data
* yay, we love data!
- define a **test statistic** to measure your quantity of interest.
* here we looked at the difference between means, but as we'll see below, there are more sophisticated ways to go about it.
- figure out the **distribution of the test statistic under the null** hypothesis
* here, we did this by repeating the measurement on 1000 groups of couch participants. Next, we'll learn that under certain conditions we can compute this distribution analytically, rather than having to do thousands of experiments.
- compute a **p-value**
* that tells you if the null were true, the probability of getting your test statistic or something even more outrageous
- decide if **significant**
* is p-value below a pre-defined threshold
If you deeply understand this, you're on a very good path to understand a LARGE fraction of all statistics you'll find in genomics.
**PART II. EXAMPLE HYPOTHESIS TESTING USING THE T-TEST**
---
Now, let's do a t-test.
```
from scipy.stats import ttest_ind
t_stat,pvalue=ttest_ind(data[data['condition']=='exercise']['BP'],
data[data['condition']=='couch']['BP'],
)
print(t_stat,pvalue)
#as before, compare to the distribution
null_test_stats=[]
for i in range(1000):
current_t,current_pvalue=ttest_ind(data_exp2[data_exp2['condition']=='control_'+str(i)]['BP'],
data_exp2[data_exp2['condition']=='control_0']['BP'],
)
null_test_stats.append(current_t)
plt.hist(np.array(null_test_stats).flatten(),color='black')
plt.xlabel('Test statistic (t)')
plt.axvline(x=t_stat,color='red')
count_more_extreme=int(np.sum(np.abs(null_test_stats)>=np.abs(t_stat)))
print(count_more_extreme,'times we got a more extreme test statistic under the null')
print(count_more_extreme/1000,'fraction of the time we got a more extreme test statistic under the null = p-value')
```
Now, the exciting thing is that we didn't have to perform the second experiment to get an empirical distribution of the test statistic under the null. Rather, we were able to estimate it analytically. And indeed, the p-value we obtained from the t-test is similar to the one we got from our big experiment!
Ok, so by now, you should be pros at hypothesis tests.
Remember: decide on the null, compute test statistic, get the distribution of the test statistic under the null, compute a p-value, decide if significant.
There are of course many other types of hypothesis tests that don't look at the difference between groups as we did here. For instance, in GWAS, you want to see if a mutation is enriched in a disease cohort compared to healthy samples, and you do a chi-square test.
Or maybe you have more than 2 conditions. Then you do ANOVA, rather than a t-test.
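To make those two alternatives concrete, here is a small sketch on made-up numbers (the contingency counts and the extra exercise groups below are invented purely for illustration):
```
from scipy.stats import chi2_contingency, f_oneway
# Chi-square test: is a mutation enriched in a disease cohort vs healthy controls?
# Rows: disease, healthy; columns: mutation carriers, non-carriers (made-up counts).
table = np.array([[30, 70],
                  [10, 90]])
chi2, chi2_pvalue, dof, expected = chi2_contingency(table)
print('chi-square p-value', round(float(chi2_pvalue), 4))
# One-way ANOVA: do more than 2 conditions differ in mean BP? (made-up samples)
couch_bp = rng.normal(120, 5, 10)
walking_bp = rng.normal(118, 5, 10)
running_bp = rng.normal(115, 5, 10)
f_stat, anova_pvalue = f_oneway(couch_bp, walking_bp, running_bp)
print('ANOVA p-value', round(float(anova_pvalue), 4))
```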
**PROJECT: EXAMPLE III:**
===
RNA sequencing: which genes are characteristic for different types of immune cells in your body?
--
Motivation
--
Although all cells in our body have the same DNA, they can have wildly different functions. That is because they activate different genes, for example your brain cells turn on genes that lead to production of neurotransmitters while liver cells activate genes encoding enzymes.
Here, you will compare different types of immune cells (e.g. B-cells that make your antibodies, and T-cells which fight infections), and identify which genes are specifically active in each type of cell.
```
#install scanpy
!pip install scanpy
```
RNA sequencing
--
RNA sequencing allows us to quantify the extent to which each gene is active in a sample. When a gene is active, its DNA is transcribed into mRNA and then translated into protein. With RNA sequencing, we are counting how frequent mRNAs for each gene occur in a sample. Genes that are more active will have higher counts, while genes that are not made into mRNA will have 0 counts.
Data
--
The code below will download the data for you, and organize it into a data frame, where:
- every row is a different gene
- every column is a different sample.
- We have 6 samples: 3 of T cells (called "CD4 T cells") and 3 of B cells ("B cells").
- every value is the number of reads from each gene in each sample.
- Note: the values have been normalized to be comparable between samples.
```
import scanpy as sc
def prep_data():
adata=sc.datasets.pbmc3k_processed()
counts=pd.DataFrame(np.expm1(adata.raw.X.toarray()),
index=adata.raw.obs_names,
columns=adata.raw.var_names)
#make 3 reps T-cells and 3 reps B-cells
cells_per_bulk=100
celltype='CD4 T cells'
cells=adata.obs_names[adata.obs['louvain']==celltype]
bulks=pd.DataFrame(columns=[celltype+'.rep1',celltype+'.rep2',celltype+'.rep3'],
index=adata.raw.var_names)
for i in range(3):
cells_here=cells[(i*100):((i+1)*100)]
bulks[celltype+'.rep'+str(i+1)]=list(counts.loc[cells_here,:].sum(axis=0))
bulk_t=bulks
celltype='B cells'
cells=adata.obs_names[adata.obs['louvain']==celltype]
bulks=pd.DataFrame(columns=[celltype+'.rep1',celltype+'.rep2',celltype+'.rep3'],
index=adata.raw.var_names)
for i in range(3):
cells_here=cells[(i*100):((i+1)*100)]
bulks[celltype+'.rep'+str(i+1)]=list(counts.loc[cells_here,:].sum(axis=0))
bulks=pd.concat([bulk_t,bulks],axis=1)
bulks=bulks.sort_values(by=bulks.columns[0],ascending=False)
return(bulks)
data=prep_data()
print(data.head())
print("min: ", data.min())
print("max: ", data.max())
```
**Let's explore the dataset**
**(1 pt)** What are the names of the samples?
**(2 pts)** What is the highest recorded value? What is the lowest?
#write code to answer the questions here
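One possible sketch for these two questions, using the `data` frame built above:
```
# Sample names are the columns of the data frame.
print('Samples:', list(data.columns))
# Highest and lowest recorded values across all samples.
print('Highest value:', data.values.max())
print('Lowest value:', data.values.min())
```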
1)
Sample names are
- CD4 T cells.rep1, CD4 T cells.rep2, CD4 T cells.rep3,
- B cells.rep1, B cells.rep2, B cells.rep3
2)
- The highest recorded value: **8303.0** (in sample CD4 T cells.rep1)
- The lowest recorded value: **0.0**
**Exploring the data**
One gene that is different between our 2 cell types is IL7R.
**(1 pt)** Plot the distribution of the IL7R gene in the 2 conditions. Which cell type (CD4 T cells or B cells) has the higher level of this gene?
**(1 pt)** How many samples do we have for each condition?
# Answers
3)
- CD4 T cells have a higher level of this gene, as can be seen in the plot produced below
4)
- Three samples for each condition
For CD4 T Cells:
- CD4 T cells rep1
- CD4 T cells rep2
- CD4 T cells rep3
For B Cells:
- B cells rep1
- B cells rep2
- B cells rep3
```
#inspect the data
GENE='IL7R'
long_data=pd.DataFrame({GENE:data.loc[GENE,:],
'condition':[x.split('.')[0] for x in data.columns]})
print(long_data)
sns.catplot(data=long_data,x='condition', y=GENE)
```
**Two-sample t-test for one gene across 2 conditions**
We are now going to check whether the gene IL7R is differentially active in CD4 T cells vs B cells.
**(1 pt)** What is the null hypothesis?
**(1 pt)** Based on your plot of the gene in the two conditions, and the fact that there looks like there might be a difference, what do you expect the sign of the t-statistic to be (CD4 T cells vs B cells)?
We are going to use the function ttest_ind to perform our t-test. You can read about it here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html.
**(1 pt)** What is the t-statistic?
**(1 pt)** What is the p-value?
**(1 pt)** Describe in your own words what the p-value means.
**(1 pt)** Is the p-value significant at alpha = 0.05?
# Answers
5)
- The null hypothesis: IL7R is not differentially active in CD4 T cells vs B cells
- i.e., the two cell types have the same mean level of this gene, and any observed difference is due to sampling noise
6)
- Since the plot suggests IL7R is higher in CD4 T cells, we expect the t-statistic for CD4 T cells vs B cells to be positive
---
7)
- t-statistic: 9.66
- The t-statistic tells us how far the difference in sample means is from the value expected under the null (zero difference), measured in units of its estimated standard error
- It is calculated using the sample standard deviation in place of the unknown population standard deviation
8)
- p-value: 0.00064
9)
- If the null were true (no difference in IL7R between the two cell types), the p-value is the probability of observing a test statistic as extreme or more extreme than the one we got. It is not the probability that the null hypothesis is true; the smaller the p-value, the more surprising our data are under the null
10)
- p-value < alpha
- 0.00064 < 0.05
- Because the p-value is below the significance threshold, we reject the null hypothesis: the result is significant
```
#pick 1 gene, do 1 t-test
GENE='IL7R'
COND1=['CD4 T cells.rep' + str(x+1) for x in range(3)]
COND2=['B cells.rep' + str(x+1) for x in range(3)]
#plot gene across samples
#t-test
from scipy.stats import ttest_ind
t_stat,pvalue=ttest_ind(data.loc[GENE,COND1],data.loc[GENE,COND2])
print('t statistic',t_stat.round(2))
print('p-value',pvalue.round(5))
```
**Two-sample t-tests for each gene across 2 conditions**
We are now going to repeat our analysis from before for all genes in our dataset.
**(1 pt)** How many genes are present in our dataset?
#Answers
11)
- 13714 genes are present in the dataset, as shown with `display(results)`
```
from IPython.display import display
#all genes t-tests
PSEUDOCOUNT=1
results=pd.DataFrame(index=data.index,
columns=['t','p','lfc'])
for gene in data.index:
t_stat,pvalue=ttest_ind(data.loc[gene,COND1],data.loc[gene,COND2])
lfc=np.log2((data.loc[gene,COND1].mean()+PSEUDOCOUNT)/(data.loc[gene,COND2].mean()+PSEUDOCOUNT))
results.loc[gene,'t']=t_stat
results.loc[gene,'p']=pvalue
results.loc[gene,'lfc']=lfc
```
**Ranking discoveries by either significance or fold change**
For each gene, we have obtained:
- a t-statistic
- a p-value for the difference between the 2 conditions
- a log2 fold change between CD4 T cells and B cells
We can inspect how fold changes relate to the significance of the differences.
**(1 pt)** What do you expect the relationship to be between significance/p-values and fold changes?
#Answers
12) Fold change and p-value are related: the further the log fold change is from 0, the smaller the p-value tends to be, i.e. the more significant the difference
```
#volcano plot
######
results['p']=results['p'].fillna(1)
PS2=1e-7
plt.scatter(results['lfc'],-np.log10(results['p']+PS2),s=5,alpha=0.5,color='black')
plt.xlabel('Log2 fold change (CD4 T cells/B cells)')
plt.ylabel('-log10(p-value)')
plt.show()
display(results)
```
**Multiple testing correction**
Now, we will explore how the number of differentially active genes differs depending on how we correct for multiple tests.
**(1 pt)** How many genes pass the significance level of 0.05, without performing any correction for multiple testing?
#Answers
13)
- there are 1607 genes that pass the significance level of 0.05
```
ALPHA=0.05
print((results['p']<=ALPHA).sum())
```
We will use a function that adjusts our p-values using different methods, called "multipletests". You can read about it here: https://www.statsmodels.org/dev/generated/statsmodels.stats.multitest.multipletests.html
We will use the following settings:
- for Bonferroni correction, we set method='bonferroni'. This will multiply our p-values by the number of tests we did. If the resulting values are greater than 1 they will be clipped to 1.
- for Benjamini-Hochberg correction, we set method='fdr_bh'
**(2 pts)** How many genes pass the significance level of 0.05, after correcting for multiple testing using the Bonferroni method? What is the revised p-value threshold?
**(1 pt)** Would the gene we tested before, IL7R, pass this threshold?
#Answers
14)
- 63 genes pass the significance level of 0.05 after correcting for multiple testing using the Bonferroni method
- revised p-value threshold: alpha / k = 0.05 / 13714 ≈ 3.6 * 10^-6, where k = 13714 is the number of tests
15)
- No: its unadjusted p-value (0.00064) is above the revised threshold of ~3.6 * 10^-6; equivalently, its Bonferroni-corrected p-value is clipped to 1, which is well above 0.05
```
#multiple testing correction
#bonferroni
from statsmodels.stats.multitest import multipletests
results['p.adj.bonferroni']=multipletests(results['p'], method='bonferroni')[1]
FDR=ALPHA
plt.hist(results['p'],100)
plt.axvline(x=FDR,color='red',linestyle='--')
plt.xlabel('Unadjusted p-values')
plt.ylabel('Number of genes')
plt.show()
plt.hist(results['p.adj.bonferroni'],100)
#plt.ylim(0,200)
plt.axvline(x=FDR,color='red',linestyle='--')
plt.xlabel('P-values (Bonferroni corrected)')
plt.ylabel('Number of genes')
plt.show()
plt.show()
print('DE Bonferroni',(results['p.adj.bonferroni']<=FDR).sum())
```
**(1 pt)** How many genes pass the significance level of 0.05, after correcting for multiple testing using the Benjamini-Hochberg method?
#Answers
16)
- 220
```
results['p.adj.bh']=multipletests(results['p'], method='fdr_bh')[1]
FDR=0.05
plt.hist(results['p'],100)
plt.axvline(x=FDR,color='red',linestyle='--')
plt.xlabel('Unadjusted p-values')
plt.ylabel('Number of genes')
plt.show()
plt.hist(results['p.adj.bh'],100)
plt.ylim(0,2000)
plt.axvline(x=FDR,color='red',linestyle='--')
plt.xlabel('P-values (Benjamini-Hochberg corrected)')
plt.ylabel('Number of genes')
plt.show()
print('DE BH',(results['p.adj.bh']<=FDR).sum())
```
**(1 pt)** Which multiple testing correction is the most stringent?
Finally, let's look at our results. Print the significant differential genes and look up a few on the internet.
#Answers
17)
- Bonferroni is the most stringent: it multiplies each p-value by the number of tests (clipping values above 1 to 1), which leaves far fewer significant genes (63) than the Benjamini-Hochberg correction (220)
```
results.loc[results['p.adj.bonferroni']<=FDR,:].sort_values(by='lfc')
```
For example, CD7 is a gene found on T cells, whereas HLA genes are found on B cells.
##### Copyright 2018 The TF-Agents Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# REINFORCE agent
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/6_reinforce_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/6_reinforce_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/6_reinforce_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/6_reinforce_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Introduction
This example shows how to train a [REINFORCE](http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf) agent on the Cartpole environment using the TF-Agents library, similar to the [DQN tutorial](1_dqn_tutorial.ipynb).

We will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection.
## Setup
If you haven't installed the following dependencies, run:
```
!sudo apt-get install -y xvfb ffmpeg
!pip install gym
!pip install 'imageio==2.4.0'
!pip install PILLOW
!pip install 'pyglet==1.3.2'
!pip install pyvirtualdisplay
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.reinforce import reinforce_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import actor_distribution_network
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tf.compat.v1.enable_v2_behavior()
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
```
## Hyperparameters
```
env_name = "CartPole-v0" # @param {type:"string"}
num_iterations = 250 # @param {type:"integer"}
collect_episodes_per_iteration = 2 # @param {type:"integer"}
replay_buffer_capacity = 2000 # @param {type:"integer"}
fc_layer_params = (100,)
learning_rate = 1e-3 # @param {type:"number"}
log_interval = 25 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 50 # @param {type:"integer"}
```
## Environment
Environments in RL represent the task or problem that we are trying to solve. Standard environments can be easily created in TF-Agents using `suites`. We have different `suites` for loading environments from sources such as the OpenAI Gym, Atari, DM Control, etc., given a string environment name.
Now let us load the CartPole environment from the OpenAI Gym suite.
```
env = suite_gym.load(env_name)
```
We can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up.
```
#@test {"skip": true}
env.reset()
PIL.Image.fromarray(env.render())
```
The `time_step = environment.step(action)` statement takes `action` in the environment. The `TimeStep` tuple returned contains the environment's next observation and reward for that action. The `time_step_spec()` and `action_spec()` methods in the environment return the specifications (types, shapes, bounds) of the `time_step` and `action` respectively.
```
print('Observation Spec:')
print(env.time_step_spec().observation)
print('Action Spec:')
print(env.action_spec())
```
So, we see that observation is an array of 4 floats: the position and velocity of the cart, and the angular position and velocity of the pole. Since only two actions are possible (move left or move right), the `action_spec` is a scalar where 0 means "move left" and 1 means "move right."
```
time_step = env.reset()
print('Time step:')
print(time_step)
action = np.array(1, dtype=np.int32)
next_time_step = env.step(action)
print('Next time step:')
print(next_time_step)
```
Usually we create two environments: one for training and one for evaluation. Most environments are written in pure python, but they can be easily converted to TensorFlow using the `TFPyEnvironment` wrapper. The original environment's API uses numpy arrays, the `TFPyEnvironment` converts these to/from `Tensors` for you to more easily interact with TensorFlow policies and agents.
```
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
```
## Agent
The algorithm that we use to solve an RL problem is represented as an `Agent`. In addition to the REINFORCE agent, TF-Agents provides standard implementations of a variety of `Agents` such as [DQN](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf), [DDPG](https://arxiv.org/pdf/1509.02971.pdf), [TD3](https://arxiv.org/pdf/1802.09477.pdf), [PPO](https://arxiv.org/abs/1707.06347) and [SAC](https://arxiv.org/abs/1801.01290).
To create a REINFORCE Agent, we first need an `Actor Network` that can learn to predict the action given an observation from the environment.
We can easily create an `Actor Network` using the specs of the observations and actions. We can specify the layers in the network which, in this example, is the `fc_layer_params` argument set to a tuple of `ints` representing the sizes of each hidden layer (see the Hyperparameters section above).
```
actor_net = actor_distribution_network.ActorDistributionNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params)
```
We also need an `optimizer` to train the network we just created, and a `train_step_counter` variable to keep track of how many times the network was updated.
```
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.compat.v2.Variable(0)
tf_agent = reinforce_agent.ReinforceAgent(
train_env.time_step_spec(),
train_env.action_spec(),
actor_network=actor_net,
optimizer=optimizer,
normalize_returns=True,
train_step_counter=train_step_counter)
tf_agent.initialize()
```
## Policies
In TF-Agents, policies represent the standard notion of policies in RL: given a `time_step` produce an action or a distribution over actions. The main method is `policy_step = policy.action(time_step)` where `policy_step` is a named tuple `PolicyStep(action, state, info)`. The `policy_step.action` is the `action` to be applied to the environment, `state` represents the state for stateful (RNN) policies and `info` may contain auxiliary information such as log probabilities of the actions.
Agents contain two policies: the main policy that is used for evaluation/deployment (agent.policy) and another policy that is used for data collection (agent.collect_policy).
```
eval_policy = tf_agent.policy
collect_policy = tf_agent.collect_policy
```
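As a quick sketch of how a policy is used (the variable names here are only for illustration), you can feed an environment's current `time_step` to the policy's `action` method and get back a `PolicyStep`:
```
example_time_step = eval_env.reset()
example_policy_step = eval_policy.action(example_time_step)
print('Action:', example_policy_step.action)
```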
## Metrics and Evaluation
The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows.
```
#@test {"skip": true}
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# Please also see the metrics module for standard implementations of different
# metrics.
```
## Replay Buffer
In order to keep track of the data collected from the environment, we will use the TFUniformReplayBuffer. This replay buffer is constructed using specs describing the tensors that are to be stored, which can be obtained from the agent using `tf_agent.collect_data_spec`.
```
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=tf_agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_capacity)
```
For most agents, the `collect_data_spec` is a `Trajectory` named tuple containing the observation, action, reward etc.
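A quick way to see this is to print the spec itself; the output lists the `Trajectory` fields (step_type, observation, action, policy_info, next_step_type, reward, discount) and their shapes/dtypes:
```
# Inspect the structure the replay buffer expects.
print(tf_agent.collect_data_spec)
```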
## Data Collection
As REINFORCE learns from whole episodes, we define a function to collect an episode using the given data collection policy and save the data (observations, actions, rewards etc.) as trajectories in the replay buffer.
```
#@test {"skip": true}
def collect_episode(environment, policy, num_episodes):
episode_counter = 0
environment.reset()
while episode_counter < num_episodes:
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
replay_buffer.add_batch(traj)
if traj.is_boundary():
episode_counter += 1
# This loop is so common in RL, that we provide standard implementations of
# these. For more details see the drivers module.
```
## Training the agent
The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.
The following will take ~3 minutes to run.
```
#@test {"skip": true}
try:
%%time
except:
pass
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
tf_agent.train = common.function(tf_agent.train)
# Reset the train step
tf_agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few episodes using collect_policy and save to the replay buffer.
collect_episode(
train_env, tf_agent.collect_policy, collect_episodes_per_iteration)
# Use data from the buffer and update the agent's network.
experience = replay_buffer.gather_all()
train_loss = tf_agent.train(experience)
replay_buffer.clear()
step = tf_agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss.loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
```
## Visualization
### Plots
We can plot return vs. global step to see the performance of our agent. In `CartPole-v0`, the environment gives a reward of +1 for every time step the pole stays up; since the maximum number of steps is 200, the maximum possible return is also 200.
```
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim(top=250)
```
### Videos
It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
```
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
```
The following code visualizes the agent's policy for a few episodes:
```
num_episodes = 3
video_filename = 'imageio.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = tf_agent.policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
embed_mp4(video_filename)
```
| github_jupyter |
## Querying Nexus knowledge graph using SPARQL
The goal of this notebook is to learn the basics of SPARQL. Only the read (query) part of SPARQL will be covered.
## Prerequisites
This notebook assumes you've created a project within the AWS deployment of Nexus. If not follow the Blue Brain Nexus [Quick Start tutorial](https://bluebrain.github.io/nexus/docs/tutorial/getting-started/quick-start/index.html).
## Overview
You'll work through the following steps:
1. Create a sparql wrapper around your project's SparqlView
2. Explore and navigate data using the SPARQL query language
## Step 1: Create a sparql wrapper around your project's SparqlView
Every project in Blue Brain Nexus comes with a SparqlView, enabling you to navigate the data as a graph and to query it using the W3C SPARQL language. For a project with the label \$PROJECTLABEL, the address of its SparqlView is https://nexus-sandbox.io/v1/views/tutorialnexus/\$PROJECTLABEL/graph/sparql. The address of a SparqlView is also called a **SPARQL endpoint**.
```
#Configuration for the Nexus deployment
nexus_deployment = "https://nexus-sandbox.io/v1"
token= "your token here"
org ="tutorialnexus"
project ="$PROJECTLABEL"
headers = {}
# Let's install SPARQLWrapper, a Python wrapper around SPARQL endpoints
!pip install git+https://github.com/RDFLib/sparqlwrapper
# Utility functions to create sparql wrapper around a sparql endpoint
from SPARQLWrapper import SPARQLWrapper, JSON, POST, GET, POSTDIRECTLY, CSV
import requests
def create_sparql_client(sparql_endpoint, http_query_method=POST, result_format= JSON, token=None):
sparql_client = SPARQLWrapper(sparql_endpoint)
#sparql_client.addCustomHttpHeader("Content-Type", "application/sparql-query")
if token:
sparql_client.addCustomHttpHeader("Authorization","Bearer {}".format(token))
sparql_client.setMethod(http_query_method)
sparql_client.setReturnFormat(result_format)
if http_query_method == POST:
sparql_client.setRequestMethod(POSTDIRECTLY)
return sparql_client
# Utility functions
import pandas as pd
pd.set_option('display.max_colwidth', -1)
# Convert SPARQL results into a Pandas data frame
def sparql2dataframe(json_sparql_results):
cols = json_sparql_results['head']['vars']
out = []
for row in json_sparql_results['results']['bindings']:
item = []
for c in cols:
item.append(row.get(c, {}).get('value'))
out.append(item)
return pd.DataFrame(out, columns=cols)
# Send a query using a sparql wrapper
def query_sparql(query, sparql_client):
sparql_client.setQuery(query)
result_object = sparql_client.query()
if sparql_client.returnFormat == JSON:
return result_object._convertJSON()
return result_object.convert()
# Let's create a sparql wrapper around the project's SparqlView
sparqlview_endpoint = nexus_deployment+"/views/"+org+"/"+project+"/graph/sparql"
sparqlview_wrapper = create_sparql_client(sparql_endpoint=sparqlview_endpoint, token=token,http_query_method= POST, result_format=JSON)
```
## Step 2: Explore and navigate data using the SPARQL query language
Let's write our first query.
```
select_all_query = """
SELECT ?s ?p ?o
WHERE
{
?s ?p ?o
}
OFFSET 0
LIMIT 5
"""
nexus_results = query_sparql(select_all_query,sparqlview_wrapper)
nexus_df =sparql2dataframe(nexus_results)
nexus_df.head()
```
Most SPARQL queries you'll see have the anatomy above, with:
* a **SELECT** clause that lets you choose the variables you want to retrieve
* a **WHERE** clause defining a set of constraints that the variables should satisfy to be retrieved
* **LIMIT** and **OFFSET** clauses to enable pagination
* constraints that are usually graph patterns in the form of a **triple** (?s for subject, ?p for property and ?o for object)
Multiple triples can be provided as a graph pattern to match, but each triple should end with a period. As an example, let's retrieve 5 movies (?movie) along with their titles (?title).
```
movie_with_title = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>
Select ?movie ?title
WHERE {
?movie a vocab:Movie.
?movie vocab:title ?title.
} LIMIT 5
"""%(org,project)
nexus_results = query_sparql(movie_with_title,sparqlview_wrapper)
nexus_df =sparql2dataframe(nexus_results)
nexus_df.head()
```
Note the PREFIX clauses. They are a way to shorten URIs within a SPARQL query; without them we would have to use the full URI for every property.
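For illustration, here is a sketch of the same title query written with full URIs instead of prefixes (the vocabulary URIs follow the same pattern as the PREFIX declarations above):
```
movie_with_title_full_uris = """
Select ?movie ?title
WHERE {
?movie a <https://nexus-sandbox.io/v1/vocabs/%s/%s/Movie>.
?movie <https://nexus-sandbox.io/v1/vocabs/%s/%s/title> ?title.
} LIMIT 5
"""%(org, project, org, project)
nexus_results = query_sparql(movie_with_title_full_uris, sparqlview_wrapper)
sparql2dataframe(nexus_results).head()
```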
The ?movie variable is bound to a URI (the internal Nexus id). Let's retrieve the movieId just as it appears in the MovieLens csv files, for simplicity.
```
movie_with_title = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>
Select ?movieId ?title
WHERE {
# Select movies
?movie a vocab:Movie.
# Select their movieId value
?movie vocab:movieId ?movieId.
#
?movie vocab:title ?title.
} LIMIT 5
"""%(org,project)
nexus_results = query_sparql(movie_with_title,sparqlview_wrapper)
nexus_df =sparql2dataframe(nexus_results)
nexus_df.head()
```
In the above query, movies are things (or entities) of type vocab:Movie.
This is a typical instance query, where entities are filtered by their type(s) and then some of their properties are retrieved (here ?title).
Let's retrieve everything that is linked (outgoing) to the movies.
The * character in the SELECT clause indicates that all variables should be retrieved: ?movie, ?p, ?o.
```
movie_with_properties = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>
Select *
WHERE {
?movie a vocab:Movie.
?movie ?p ?o.
} LIMIT 20
"""%(org,project)
nexus_results = query_sparql(movie_with_properties,sparqlview_wrapper)
nexus_df =sparql2dataframe(nexus_results)
nexus_df.head(20)
```
As a little exercise, write a query retrieving entities with incoming links to movies. You can copy and paste the query above and modify it.
Hint: ?s ?p ?o can be read as: ?o is linked to ?s with an outgoing link.
Do you have results?
```
#Your query here
```
Let's retrieve the movie ratings.
```
movie_with_properties = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>
Select ?userId ?movieId ?rating ?timestamp
WHERE {
?movie a vocab:Movie.
?movie vocab:movieId ?movieId.
?ratingNode vocab:movieId ?ratingmovieId.
?ratingNode vocab:rating ?rating.
?ratingNode vocab:userId ?userId.
?ratingNode vocab:timestamp ?timestamp.
# movieId comes through as a double in the rating data, so cast it before comparing
FILTER(xsd:integer(?ratingmovieId) = ?movieId)
} LIMIT 20
"""%(org,project)
nexus_results = query_sparql(movie_with_properties,sparqlview_wrapper)
nexus_df =sparql2dataframe(nexus_results)
nexus_df.head(20)
```
As a little exercise, write a query retrieving the movie tags along with the user id and timestamp. You can copy and paste the query above and modify it.
```
#Your query here
```
### Aggregate queries
[Aggregates](https://www.w3.org/TR/sparql11-query/#aggregates) apply some operations over a group of solutions.
Available aggregates are: COUNT, SUM, MIN, MAX, AVG, GROUP_CONCAT, and SAMPLE.
We will not cover them all, but we'll look at some examples.
The next query will compute the average rating score for 'funny' movies.
```
tag_value = "funny"
movie_avg_ratings = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>
Select ( AVG(?ratingvalue) AS ?score)
WHERE {
# Select movies
?movie a vocab:Movie.
# Select their movieId value
?movie vocab:movieId ?movieId.
?tag vocab:movieId ?movieId.
?tag vocab:tag ?tagvalue.
FILTER(?tagvalue = "%s").
# Keep movies with ratings
?rating vocab:movieId ?ratingmovidId.
FILTER(xsd:integer(?ratingmovidId) = xsd:integer(?movieId))
?rating vocab:rating ?ratingvalue.
}
""" %(org,project,tag_value)
nexus_results = query_sparql(movie_avg_ratings,sparqlview_wrapper)
nexus_df =sparql2dataframe(nexus_results)
display(nexus_df.head(20))
nexus_df=nexus_df.astype(float)
```
Retrieve the number of tags per movie. This can be a little slow, depending on the size of your data.
```
nbr_tags_per_movie = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>
Select ?title (COUNT(?tagvalue) as ?tagnumber)
WHERE {
# Select movies
?movie a vocab:Movie.
# Select their movieId value
?movie vocab:movieId ?movieId.
?tag a vocab:Tag.
?tag vocab:movieId ?tagmovieId.
FILTER(?tagmovieId = ?movieId)
?movie vocab:title ?title.
?tag vocab:tag ?tagvalue.
}
GROUP BY ?title
ORDER BY DESC(?tagnumber)
LIMIT 10
""" %(org,project)
nexus_results = query_sparql(nbr_tags_per_movie,sparqlview_wrapper)
nexus_df =sparql2dataframe(nexus_results)
display(nexus_df.head(20))
#Let plot the result
nexus_df.tagnumber = pd.to_numeric(nexus_df.tagnumber)
nexus_df.plot(x="title",y="tagnumber",kind="bar")
```
The next query will retrieve movies along with the users that tagged them, separated by a comma.
```
# Group Concat
movie_tag_users = """
PREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>
PREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>
Select ?movieId (group_concat(DISTINCT ?userId;separator=",") as ?users)
WHERE {
# Select movies
?movie a vocab:Movie.
# Select their movieId value
?movie vocab:movieId ?movieId.
?tag vocab:movieId ?movieId.
?tag vocab:userId ?userId.
}
GROUP BY ?movieId
LIMIT 10
"""%(org,project)
nexus_results = query_sparql(movie_tag_users,sparqlview_wrapper)
nexus_df =sparql2dataframe(nexus_results)
nexus_df.head(20)
```
| github_jupyter |
```
import pandas as pd
import numpy as np
from tqdm import tqdm
tqdm.pandas()
import os, time, datetime
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score, roc_curve, auc
import lightgbm as lgb
import xgboost as xgb
def format_time(elapsed):
'''
Takes a time in seconds and returns a string hh:mm:ss
'''
# Round to the nearest second.
elapsed_rounded = int(round((elapsed)))
# Format as hh:mm:ss
return str(datetime.timedelta(seconds=elapsed_rounded))
class SigirPreprocess():
def __init__(self, text_data_path):
self.text_data_path = text_data_path
self.train = None
self.dict_code_to_id = {}
self.dict_id_to_code = {}
self.list_tags = {}
self.sentences = []
self.labels = []
self.text_col = None
self.X_test = None
def prepare_data(self ):
catalog_eng= pd.read_csv(self.text_data_path+"data/catalog_english_taxonomy.tsv",sep="\t")
X_train= pd.read_csv(self.text_data_path+"data/X_train.tsv",sep="\t")
Y_train= pd.read_csv(self.text_data_path+"data/Y_train.tsv",sep="\t")
self.list_tags = list(Y_train['Prdtypecode'].unique())
for i,tag in enumerate(self.list_tags):
self.dict_code_to_id[tag] = i
self.dict_id_to_code[i]=tag
print(self.dict_code_to_id)
Y_train['labels']=Y_train['Prdtypecode'].map(self.dict_code_to_id)
train=pd.merge(left=X_train,right=Y_train,
how='left',left_on=['Integer_id','Image_id','Product_id'],
right_on=['Integer_id','Image_id','Product_id'])
prod_map=pd.Series(catalog_eng['Top level category'].values,
index=catalog_eng['Prdtypecode']).to_dict()
train['product'] = train['Prdtypecode'].map(prod_map)
train['title_len']=train['Title'].progress_apply(lambda x : len(x.split()) if pd.notna(x) else 0)
train['desc_len']=train['Description'].progress_apply(lambda x : len(x.split()) if pd.notna(x) else 0)
train['title_desc_len']=train['title_len'] + train['desc_len']
train.loc[train['Description'].isnull(), 'Description'] = " "
train['title_desc'] = train['Title'] + " " + train['Description']
self.train = train
def get_sentences(self, text_col, remove_null_rows=False):
self.text_col = text_col
if remove_null_rows==True:
new_train = self.train[self.train[text_col].notnull()]
else:
new_train = self.train.copy()
self.sentences = new_train[text_col].values
self.labels = new_train['labels'].values
def prepare_test(self, text_col, test_data_path, phase=1):
X_test=pd.read_csv(test_data_path+f"data/x_test_task1_phase{phase}.tsv",sep="\t")
X_test.loc[X_test['Description'].isnull(), 'Description'] = " "
X_test['title_desc'] = X_test['Title'] + " " + X_test['Description']
self.X_test = X_test
self.test_sentences = X_test[text_col].values
text_col = 'title_desc'
val_size = 0.1
random_state=2020
num_class = 27
do_gridsearch = False
kwargs = {'add_logits':['cam', 'fla']}
cam_path = '/../input/camembert-vec-256m768-10ep/'
flau_path = '/../input/flaubertlogits2107/'
res_path = '/../input/resnextfinal/'
cms_path = '/../input/crossmodal-v0/'
vca_path = '/../input/vec-concat-9093/'
vca_path_phase2 = '/../input/predictions-test-phase2-vec-fusion/'
aem_path = '/../input/addition-ensemble-latest/'
val_logits_path = {'cam':cam_path + 'validation_set_softmax_logits.npy',
'fla':flau_path + 'validation_set_softmax_logits.npy',
'res':res_path + 'Valid_resnext50_32x4d_phase1_softmax_logits.npy',
'vca':vca_path + 'softmax_logits_val_9093.npy',
'aem':aem_path + 'softmax_logits_val_add.npy'}
test_logits_path_phase1 = {'cam':cam_path+f'X_test_phase1_softmax_logits.npy',
'fla':flau_path + f'X_test_phase1_softmax_logits.npy',
'res':res_path + f'Test_resnext50_32x4d_phase1_softmax_logits.npy',
'vca':vca_path + f'softmax_logits_test_9093.npy'}
test_logits_path_phase2 = {'cam':cam_path+f'X_test_phase2_softmax_logits.npy',
'fla':flau_path + f'X_test_phase2_softmax_logits.npy',
'res':res_path + f'Test_resnext50_32x4d_phase2_softmax_logits.npy',
'vca':vca_path_phase2 + f'softmax_logits_test_phase2_9093.npy'}
## Get validation dataset from original train dataset
Preprocess = SigirPreprocess("/../input/textphase1/")
Preprocess.prepare_data()
Preprocess.get_sentences(text_col, True)
full_data = Preprocess.train
labels = Preprocess.labels
index = full_data.Integer_id
tr_index, val_index, tr_labels, val_labels = train_test_split(index, labels,
stratify=labels,
random_state=random_state,
test_size=val_size)
train_data = full_data.loc[tr_index, :]
train_data.reset_index(inplace=True, drop=True)
val_data = full_data.loc[val_index, :]
val_data.reset_index(inplace=True, drop=True)
full_data.loc[val_index, 'sample'] = 'val'
full_data['sample'].fillna('train', inplace=True)
def preparelogits_df(logit_paths, df=None, val_labels=None, **kwargs):
### Prepare and combine Logits data with original validation dataset
logits_dict = {}
dfs_dict = {}
for key, logit_path in logit_paths.items():
logits_dict[key] = np.load(logit_path)
dfs_dict[key] = pd.DataFrame(logits_dict[key],
columns=[key + "_" + str(i) for i in range(1,28)])
print("Shape of logit arrays: {}", logits_dict[key].shape)
if kwargs['add_logits']:
if len(kwargs['add_logits'])>0:
add_str = '_'.join(kwargs['add_logits'])
logits_dict[add_str] = logits_dict[kwargs['add_logits'][0]]
for k in kwargs['add_logits'][1:]:
logits_dict[add_str] += logits_dict[k]
logits_dict[add_str] = logits_dict[add_str]/len(kwargs['add_logits'])
dfs_dict[add_str] = pd.DataFrame(logits_dict[add_str],
columns=[add_str + "_" + str(i) for i in range(1,28)])
print("Shape of logit arrays: {}", logits_dict[add_str].shape)
if type(val_labels) == np.ndarray:
for key,logits in logits_dict.items():
print("""Validation F1 scores for {} logits: {} """.format(key,
f1_score(val_labels, np.argmax(logits, axis=1), average='macro')))
df = pd.concat([df] + list(dfs_dict.values()), axis=1)
return df
val_data = preparelogits_df(val_logits_path, df=val_data,
val_labels=val_labels, **kwargs)
```
# Model Data Prep
```
df_log = val_data.copy()
probas_cols = ["fla_" + str(i) for i in range(1,28)] + ["cam_" + str(i) for i in range(1,28)] +\
["res_" + str(i) for i in range(1,28)] \
+ ["vca_" + str(i) for i in range(1,28)] \
X = df_log[probas_cols]
y = df_log['labels'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2, random_state=random_state)
from scipy.stats import randint as sp_randint
from scipy.stats import uniform as sp_uniform
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
n_HP_points_to_test = 100
param_test ={'num_leaves': sp_randint(6, 50),
'min_child_samples': sp_randint(100, 500),
'min_child_weight': [1e-5, 1e-3, 1e-2, 1e-1, 1, 1e1, 1e2, 1e3, 1e4],
'subsample': sp_uniform(loc=0.2, scale=0.8),
'colsample_bytree': sp_uniform(loc=0.4, scale=0.6),
'reg_alpha': [0, 1e-1, 1, 2, 5, 7, 10, 50, 100],
'reg_lambda': [0, 1e-1, 1, 5, 10, 20, 50, 100],
# "bagging_fraction" : [0.5, 0.6, 0.7, 0.8, 0.9],
# "feature_fraction":[0.5, 0.6, 0.7, 0.8, 0.9]
}
fit_params={
"early_stopping_rounds":100,
"eval_metric" : 'multi_logloss',
"eval_set" : [(X_test,y_test)],
'eval_names': ['valid'],
#'callbacks': [lgb.reset_parameter(learning_rate=learning_rate_010_decay_power_099)],
'verbose': 100,
'categorical_feature': 'auto'}
clf = lgb.LGBMClassifier(num_iteration=1000, max_depth=-1, random_state=314, silent=True,
metric='multi_logloss', n_jobs=4, early_stopping_rounds=100,
num_class=num_class, objective= "multiclass")
gs = RandomizedSearchCV(
estimator=clf, param_distributions=param_test,
n_iter=n_HP_points_to_test,
cv=3,
refit=True,
random_state=314,
verbose=True)
if do_gridsearch==True:
gs.fit(X_train, y_train, **fit_params)
print('Best score reached: {} with params: {} '.format(gs.best_score_, gs.best_params_))
# opt_parameters = gs.best_params_
opt_parameters = {'colsample_bytree': 0.5284213741879101, 'min_child_samples': 125,
'min_child_weight': 10.0, 'num_leaves': 22,
'reg_alpha': 0.1, 'reg_lambda': 20, 'subsample': 0.3080033455431848}
```
# Model Training
```
### Run lightgbm to get weights for different class logits
t0 = time.time()
model_met = 'fit' #'xgb'#'train' #fit
params = {
"objective" : "multiclass",
"num_class" : num_class,
"num_leaves" : 60,
"max_depth": -1,
"learning_rate" : 0.01,
"bagging_fraction" : 0.9, # subsample
"feature_fraction" : 0.9, # colsample_bytree
"bagging_freq" : 5, # subsample_freq
"bagging_seed" : 2018,
"verbosity" : -1 }
lgtrain, lgval = lgb.Dataset(X_train, y_train), lgb.Dataset(X_test, y_test)
if model_met == 'train':
params.update(opt_parameters)
params.update(fit_params)
lgbmodel = lgb.train(params, lgtrain, valid_sets=[lgtrain, lgval],
num_iterations = 1000, metric= 'multi_logloss')
train_logits = lgbmodel.predict(X_train)
test_logits = lgbmodel.predict(X_test)
train_pred = np.argmax(train_logits, axis=1)
test_pred = np.argmax(test_logits, axis=1)
elif model_met == 'xgb':
dtrain = xgb.DMatrix(X_train, label=y_train)
dtrain.save_binary('xgb_train.buffer')
dtest = xgb.DMatrix(X_test, label=y_test)
num_round = 200
xgb_param = {'max_depth': 5, 'eta': 0.1, 'seed':2020, 'verbosity':1,
'objective': 'multi:softmax', 'num_class':num_class}
xgb_param['nthread'] = 4
xgb_param['eval_metric'] = 'mlogloss'
evallist = [(dtest, 'eval'), (dtrain, 'train')]
bst = xgb.train(xgb_param, dtrain, num_round, evallist
, early_stopping_rounds=10
)
train_logits = bst.predict(xgb.DMatrix(X_train), ntree_limit=bst.best_ntree_limit)
test_logits = bst.predict(xgb.DMatrix(X_test), ntree_limit=bst.best_ntree_limit)
train_pred = train_logits
test_pred = test_logits
else:
lgbmodel = lgb.LGBMClassifier(**clf.get_params())
#set optimal parameters
lgbmodel.set_params(**opt_parameters)
lgbmodel.fit(X_train, y_train, **fit_params)
train_logits = lgbmodel.predict(X_train)
test_logits = lgbmodel.predict(X_test)
train_pred = train_logits
test_pred = test_logits
print("Validation F1: {} and Training F1: {} ".format(
f1_score(y_test, test_pred, average='macro'),
f1_score(y_train, train_pred, average='macro')))
if model_met == 'train':
feat_imp = pd.DataFrame({'feature':probas_cols,
'logit_kind': [i.split('_')[0] for i in probas_cols],
'imp':lgbmodel.feature_importance()/sum(lgbmodel.feature_importance())})
lgbmodel.save_model('lgb_classifier_81feats.txt', num_iteration=lgbmodel.best_iteration)
print("""Feature Importances by logits group:
""", feat_imp.groupby(['logit_kind'])['imp'].sum())
else:
feat_imp = pd.DataFrame({'feature':probas_cols,
'logit_kind': [i.split('_')[0] for i in probas_cols],
'imp':lgbmodel.feature_importances_/sum(lgbmodel.feature_importances_)})
print("""Feature Importances by logits group:
""", feat_imp.groupby(['logit_kind'])['imp'].sum())
import shap
explainer = shap.TreeExplainer(lgbmodel)
shap_values = explainer.shap_values(X)
print("Time Elapsed: {:}.".format(format_time(time.time() - t0)))
for n, path in enumerate(['/kaggle/input/textphase1/',
'/kaggle/input/testphase2/']):
phase = n+1
if phase==1:
test_logits_path = test_logits_path_phase1
else:
test_logits_path = test_logits_path_phase2
Preprocess.prepare_test(text_col, path, phase)
X_test_phase1= Preprocess.X_test
test_phase1 = preparelogits_df(test_logits_path,
df=X_test_phase1, val_labels=None, **kwargs)
phase1_logits = lgbmodel.predict(test_phase1[probas_cols].values)
if model_met == 'train':
predictions = np.argmax(phase1_logits, axis=1)
elif model_met == 'xgb':
phase1_logits = bst.predict(xgb.DMatrix(test_phase1[probas_cols]),
ntree_limit=bst.best_ntree_limit)
predictions = phase1_logits
else:
predictions = phase1_logits
X_test_phase1['prediction_model']= predictions
X_test_phase1['Prdtypecode']=X_test_phase1['prediction_model'].map(Preprocess.dict_id_to_code)
print(X_test_phase1['Prdtypecode'].value_counts())
X_test_phase1=X_test_phase1.drop(['prediction_model','Title','Description'],axis=1)
X_test_phase1.to_csv(f'y_test_task1_phase{phase}_pred_.tsv',sep='\t',index=False)
```
| github_jupyter |
# Example usage of the O-C tools
## This example shows how to construct and fit with MCMC the O-C diagram of the RR Lyrae star OGLE-BLG-RRLYR-02950
### We start with importing some libraries
```
import numpy as np
import oc_tools as octs
```
### We read in the data, set the period used to construct the O-C diagram (and to fold the light curve to construct the template curves, etc.), and the orders of the Fourier series we will fit to the light curve in the first and second iterations in the process
```
who = "06498"
period = 0.589490
order1 = 10
order2 = 15
jd3, mag3 = np.loadtxt('data/{:s}.o3'.format(who), usecols=[0,1], unpack=True)
jd4, mag4 = np.loadtxt('data/{:s}.o4'.format(who), usecols=[0,1], unpack=True)
```
### We correct for possible average magnitude and amplitude differences between the OGLE-III and IV photometries by moving the intensity average of the former to the intensity average measured for the latter
### The variables "jd" and "mag" contain the merged timings and magnitudes of the OGLE-III + IV photometry, which are used from here on to calculate the O-C values
```
mag3_shift=octs.shift_int(jd3, mag3, jd4, mag4, order1, period, plot=True)
jd = np.hstack((jd3,jd4))
mag = np.hstack((mag3_shift, mag4))
```
### Calling the split_lc_seasons() function provides us with an array containing masks splitting the combined light curve into short sections, depending on the number of points
### Optionally, the default splitting can be overridden by using the optional parameters "limits" and "into". For example, calling the function as:
octs.split_lc_seasons(jd, plot=True, mag = mag, limits = np.array((0, 8, np.inf)), into = np.array((0, 2)))
### will always split seasons with at least nine points into two separate segments
```
splits = octs.split_lc_seasons(jd, plot=True, mag = mag)
```
### The function calc_oc_points() fits the light curve of the variable to produce a template, and uses it to determine the O-C points of the individual segments
```
oc_jd, oc_oc = octs.calc_oc_points(jd, mag, period, order1, splits, figure=True)
```
### We make a guess at the binary parameters
```
e = 0.37
P_orb = 2800.
T_peri = 6040
a_sini = 0.011
omega = -0.7
a= -8e-03
b= 3e-06
c= -3.5e-10
params = np.asarray((e, P_orb, T_peri, a_sini, omega, a, b, c))
lower_bounds = np.array((0., 100., -np.inf, 0.0, -np.inf, -np.inf, -np.inf, -np.inf))
upper_bounds = np.array((0.99, 6000., np.inf, 1.0, np.inf, np.inf, np.inf, np.inf))
```
### We use the above guesses as the starting point (dashed grey line on the plot below) to find the O-C LTTE solution of the first iteration of our procedure. The yellow line on the plot shows the fit. The vertical blue bar shows the timing of the periastron passage
### Note that this function also provides the timings of the individual observations corrected for this initial O-C solution
```
params2, jd2 = octs.fit_oc1(oc_jd, oc_oc, jd, params, lower_bounds, upper_bounds)
```
### We use the initial solution as the starting point for the MCMC fit, therefore we prepare it first by transforming $e$ and $\omega$ to $\sqrt{e}\sin{\omega}$ and $\sqrt{e}\cos{\omega}$
### For each parameter, we also have a lower and an upper limit in its prior, but the values given for $\sqrt{e}\sin{\omega}$ and $\sqrt{e}\cos{\omega}$ are ignored, as these are handled separately within the function checking the priors
```
start = np.zeros_like(params2)
start[0:3] = params2[1:4]
start[3] = np.sqrt(params2[0]) * np.sin(params2[4])
start[4] = np.sqrt(params2[0]) * np.cos(params2[4])
start[5:] = params2[5:]
prior_ranges = np.asanyarray([[start[0]*0.9, start[0]*1.1],
[start[1]-start[0]/4., start[1]+start[0]/4.],
[0., 0.057754266],
[0., 0.],
[0., 0.],
[-1., 1.],
[-1e-4, 1e-4],
[-1e-8, 1e-8]])
```
### We set a random seed to get reproducible results, then prepare the initial positions of the 200 walkers we are using during the fitting. During this, we check explicitly that these correspond to a position with a finite prior (i.e., they are not outside of the prior ranges defined above)
```
np.random.seed(0)
walkers = 200
random_scales = np.array((1e+1, 1e+1, 1e-4, 1e-2, 1e-2, 1e-3, 2e-7, 5e-11))
pos = np.zeros((walkers, start.size))
for i in range(walkers):
pos[i,:] = start + random_scales * np.random.normal(size=8)
while np.isinf(octs.log_prior(pos[i,:], prior_ranges)):
pos[i,:] = start + random_scales * np.random.normal(size=8)
```
### We recalculate the O-C points, but this time we use a higher-order Fourier series to fit the light curve with the modified timings, and we also calculate errors using bootstrapping
```
oc_jd, oc_oc, oc_sd = octs.calc_oc_points(jd, mag, period, order2, splits,
bootstrap_times = 500, jd_mod = jd2,
figure=True)
```
### We fit the O-C points measured above using MCMC by calling the run_mcmc() function
### We plot both the fit, as well as the triangle plot showing the two- (and one-)dimensional posterior distributions (these can be suppressed by setting the optional parameters "plot_oc" and "plot_triangle" to False)
```
sampler, fit_mcmc, oc_sigmas, param_means, param_sigmas, fit_at_points, K =\
octs.run_mcmc(oc_jd, oc_oc, oc_sd,
prior_ranges, pos,
nsteps = 31000, discard = 1000,
thin = 300, processes=1)
```
## The estimated LTTE parameters are:
```
print("Orbital period: {:d} +- {:d} [d]".format(int(param_means[0]),
int(param_sigmas[0])))
print("Projected semi-major axis: {:.3f} +- {:.3f} [AU]".format(param_means[2]*173.144633,
param_sigmas[2]*173.144633))
print("Eccentricity: {:.3f} +- {:.3f}".format(param_means[3],
param_sigmas[3]))
print("Argumen of periastron: {:+4d} +- {:d} [deg]".format(int(param_means[4]*180/np.pi),
int(param_sigmas[4]*180/np.pi)))
print("Periastron passage time: {:d} +- {:d} [HJD-2450000]".format(int(param_means[1]),
int(param_sigmas[1])))
print("Period-change rate: {:+.3f} +- {:.3f} [d/Myr] ".format(param_means[7]*365.2422*2e6*period,
param_sigmas[7]*365.2422*2e6*period))
print("RV semi-amplitude: {:5.2f} +- {:.2f} [km/s]".format(K[0], K[1]))
print("Mass function: {:.5f} +- {:.5f} [M_Sun]".format(K[2], K[3]))
```
| github_jupyter |
```
# load libraries
import xarray as xr
import numpy as np
from argopy import DataFetcher as ArgoDataFetcher
from datetime import datetime, timedelta
import pandas as pd
# User defined functions:
def get_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_in,time_f):
"""Function to get argo data for a given lat,lon box (using Argopy),
and return a 2D array collection of vertical profile for the given region
Parameters
----------
llon : int
left longitude
rlon : int
right longtidue
ulat : int
upper latitude
llat : int
lower latitude
time_in : str/datetime object
the start time of desired range, formatted Y-m-d
time_f : str/datetime object
the end time of desired range, formatted Y-m-d
Returns
---------
xarray
The result is a xarray of the vertical profile for the given range and region.
"""
ds_points = ArgoDataFetcher(src='erddap').region([llon,rlon, llat,ulat, depthmin, depthmax,time_in,time_f]).to_xarray()
ds_profiles = ds_points.argo.point2profile()
return ds_profiles
def spliced_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,time_end):
"""Function that gets the argo data for given latitude and longitude bounding box
(using Argopy), and given start and end time range to return a 2D array collection of vertical
profile for the given region and time frame
Parameters
----------
llon : int
left longitude
rlon : int
right longtidue
ulat : int
upper latitude
llat : int
lower latitude
time_start : str/datetime object
the start time of desired range, formatted Y-m-d
time_end : str/datetime object
the end time of desired range, formatted Y-m-d
Returns
---------
xarray
The result is a xarray of the vertical profile for the given range and region.
"""
# Fetch the data in chunks of at most 10 days per request
max_dt = timedelta(days = 10)
if isinstance(time_start, str):
time_start = datetime.strptime(time_start,"%Y-%m-%d")
if isinstance(time_end, str):
time_end = datetime.strptime(time_end,"%Y-%m-%d")
if time_end - time_start <= max_dt:
ds = get_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,time_end)
return ds
else:
early_end = time_start+max_dt
ds = get_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,early_end)
print("Retrived data from " + str(time_start) + " to " + str(early_end) + ", retreived " + str(len(ds.N_PROF)) + " profiles")
ds2 = spliced_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax, early_end,time_end)
return xr.concat([ds,ds2],dim='N_PROF')
llon=-90;rlon=0
ulat=70;llat=0
depthmin=0;depthmax=1400
time_start='2014-01-01'
time_end='2020-01-01'
ds=spliced_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,time_end)
ds
ds.to_netcdf(str('/home/jovyan/ohw20-proj-pyxpcm/data/2014-Jan_to_oct.nc'))
llon=-90;rlon=0
ulat=70;llat=0
depthmin=0;depthmax=1400
time_start='2014-11-01'
time_end='2020-01-01'
ds=spliced_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,time_end)
```
| github_jupyter |
# Consensus Optimization
This notebook contains the code for the toy experiment in the paper [The Numerics of GANs](https://arxiv.org/abs/1705.10461).
```
%load_ext autoreload
%autoreload 2
import tensorflow as tf
from tensorflow.contrib import slim
import numpy as np
import scipy as sp
from scipy import stats
from matplotlib import pyplot as plt
import sys, os
from tqdm import tqdm_notebook
tf.reset_default_graph()
def kde(mu, tau, bbox=[-5, 5, -5, 5], save_file="", xlabel="", ylabel="", cmap='Blues'):
values = np.vstack([mu, tau])
kernel = sp.stats.gaussian_kde(values)
fig, ax = plt.subplots()
ax.axis(bbox)
ax.set_aspect(abs(bbox[1]-bbox[0])/abs(bbox[3]-bbox[2]))
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
labelbottom='off') # labels along the bottom edge are off
plt.tick_params(
axis='y', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
left='off', # ticks along the bottom edge are off
right='off', # ticks along the top edge are off
labelleft='off') # labels along the bottom edge are off
xx, yy = np.mgrid[bbox[0]:bbox[1]:300j, bbox[2]:bbox[3]:300j]
positions = np.vstack([xx.ravel(), yy.ravel()])
f = np.reshape(kernel(positions).T, xx.shape)
cfset = ax.contourf(xx, yy, f, cmap=cmap)
if save_file != "":
plt.savefig(save_file, bbox_inches='tight')
plt.close(fig)
else:
plt.show()
def complex_scatter(points, bbox=None, save_file="", xlabel="real part", ylabel="imaginary part", cmap='Blues'):
fig, ax = plt.subplots()
if bbox is not None:
ax.axis(bbox)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
xx = [p.real for p in points]
yy = [p.imag for p in points]
plt.plot(xx, yy, 'X')
plt.grid()
if save_file != "":
plt.savefig(save_file, bbox_inches='tight')
plt.close(fig)
else:
plt.show()
# Parameters
learning_rate = 1e-4
reg_param = 10.
batch_size = 512
z_dim = 16
sigma = 0.01
method = 'conopt'
divergence = 'standard'
outdir = os.path.join('gifs', method)
niter = 50000
n_save = 500
bbox = [-1.6, 1.6, -1.6, 1.6]
do_eigen = True
# Target distribution
mus = np.vstack([[np.cos(2*np.pi*k/8), np.sin(2*np.pi*k/8)] for k in range(batch_size)])
x_real = mus + sigma*tf.random_normal([batch_size, 2])
# Model
def generator_func(z):
net = slim.fully_connected(z, 16)
net = slim.fully_connected(net, 16)
net = slim.fully_connected(net, 16)
net = slim.fully_connected(net, 16)
x = slim.fully_connected(net, 2, activation_fn=None)
return x
def discriminator_func(x):
# Network
net = slim.fully_connected(x, 16)
net = slim.fully_connected(net, 16)
net = slim.fully_connected(net, 16)
net = slim.fully_connected(net, 16)
logits = slim.fully_connected(net, 1, activation_fn=None)
out = tf.squeeze(logits, -1)
return out
generator = tf.make_template('generator', generator_func)
discriminator = tf.make_template('discriminator', discriminator_func)
z = tf.random_normal([batch_size, z_dim])
x_fake = generator(z)
d_out_real = discriminator(x_real)
d_out_fake = discriminator(x_fake)
# Loss
if divergence == 'standard':
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_out_real, labels=tf.ones_like(d_out_real)
))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_out_fake, labels=tf.zeros_like(d_out_fake)
))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_out_fake, labels=tf.ones_like(d_out_fake)
))
elif divergence == 'JS':
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_out_real, labels=tf.ones_like(d_out_real)
))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_out_fake, labels=tf.zeros_like(d_out_fake)
))
d_loss = d_loss_real + d_loss_fake
g_loss = -d_loss
elif divergence == 'indicator':
d_loss = tf.reduce_mean(d_out_real - d_out_fake)
g_loss = -d_loss
else:
raise NotImplementedError
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')
d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='discriminator')
optimizer = tf.train.RMSPropOptimizer(learning_rate, use_locking=True)
# optimizer = tf.train.GradientDescentOptimizer(learning_rate, use_locking=True)
# Compute gradients
d_grads = tf.gradients(d_loss, d_vars)
g_grads = tf.gradients(g_loss, g_vars)
# Merge variable and gradient lists
variables = d_vars + g_vars
grads = d_grads + g_grads
if method == 'simga':
apply_vec = list(zip(grads, variables))
elif method == 'conopt':
# Reguliarizer
reg = 0.5 * sum(
tf.reduce_sum(tf.square(g)) for g in grads
)
# Jacobian times gradient
Jgrads = tf.gradients(reg, variables)
apply_vec = [
(g + reg_param * Jg, v)
for (g, Jg, v) in zip(grads, Jgrads, variables) if Jg is not None
]
else:
raise NotImplementedError
with tf.control_dependencies([g for (g, v) in apply_vec]):
train_op = optimizer.apply_gradients(apply_vec)
if do_eigen:
jacobian_rows = []
g_grads = tf.gradients(g_loss, g_vars)
g_grads = [-g for g in g_grads]
d_grads = tf.gradients(d_loss, d_vars)
d_grads = [-g for g in d_grads]
for g in tqdm_notebook(g_grads + d_grads):
g = tf.reshape(g, [-1])
len_g = int(g.get_shape()[0])
for i in tqdm_notebook(range(len_g)):
g_row = tf.gradients(g[i], g_vars)
d_row = tf.gradients(g[i], d_vars)
jacobian_rows.append(g_row + d_row)
def get_J(J_rows):
J_rows_linear = [np.concatenate([g.flatten() for g in row]) for row in J_rows]
J = np.array(J_rows_linear)
return J
def process_J(J, save_file, bbox=None):
eig, eigv = np.linalg.eig(J)
eig_real = np.array([p.real for p in eig])
complex_scatter(eig, save_file=save_file, bbox=bbox)
def process_J_conopt(J, reg, save_file, bbox=None):
J2 = J - reg * np.dot(J.T, J)
eig, eigv = np.linalg.eig(J2)
eig_real = np.array([p.real for p in eig])
complex_scatter(eig, save_file=save_file, bbox=bbox)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# Real distribution
x_out = np.concatenate([sess.run(x_real) for i in range(5)], axis=0)
kde(x_out[:, 0], x_out[:, 1], bbox=bbox, cmap='Reds', save_file='gt.png')
if not os.path.exists(outdir):
os.makedirs(outdir)
eigrawdir = os.path.join(outdir, 'eigs_raw')
if not os.path.exists(eigrawdir):
os.makedirs(eigrawdir)
eigdir = os.path.join(outdir, 'eigs')
if not os.path.exists(eigdir):
os.makedirs(eigdir)
eigdir_conopt = os.path.join(outdir, 'eigs_conopt')
if not os.path.exists(eigdir_conopt):
os.makedirs(eigdir_conopt)
ztest = [np.random.randn(batch_size, z_dim) for i in range(5)]
progress = tqdm_notebook(range(niter))
if do_eigen:
J_rows = sess.run(jacobian_rows)
J = get_J(J_rows)
for i in progress:
sess.run(train_op)
d_loss_out, g_loss_out = sess.run([d_loss, g_loss])
if do_eigen and i % 500 == 0:
J[:, :] = 0.
for k in range(10):
J_rows = sess.run(jacobian_rows)
J += get_J(J_rows)/10.
with open(os.path.join(eigrawdir, 'J_%d.npz' % i), 'wb') as f:
np.save(f, J)
progress.set_description('d_loss = %.4f, g_loss =%.4f' % (d_loss_out, g_loss_out))
if i % n_save == 0:
x_out = np.concatenate([sess.run(x_fake, feed_dict={z: zt}) for zt in ztest], axis=0)
kde(x_out[:, 0], x_out[:, 1], bbox=bbox, save_file=os.path.join(outdir,'%d.png' % i))
import re
import glob
import matplotlib
matplotlib.rcParams.update({'font.size': 16})
pattern = r'J_(?P<it>0).npz'
bbox = [-3.5, 0.75, -1.2, 1.2]
eigrawdir = os.path.join(outdir, 'eigs_raw')
if not os.path.exists(eigrawdir):
os.makedirs(eigrawdir)
eigdir = os.path.join(outdir, 'eigs')
if not os.path.exists(eigdir):
os.makedirs(eigdir)
eigdir_conopt = os.path.join(outdir, 'eigs_conopt')
if not os.path.exists(eigdir_conopt):
os.makedirs(eigdir_conopt)
out_files = glob.glob(os.path.join(eigrawdir, '*.npz'))
matches = [re.fullmatch(pattern, os.path.basename(s)) for s in out_files]
matches = [m for m in matches if m is not None]
for m in tqdm_notebook(matches):
it = int(m.group('it'))
J = np.load(os.path.join(eigrawdir, m.group()))
process_J(J, save_file=os.path.join(eigdir, '%d.png' % it), bbox=bbox)
process_J_conopt(J, reg=reg_param, save_file=os.path.join(eigdir_conopt, '%d.png' % it), bbox=bbox)
```
| github_jupyter |
# Getting Started with Tensorflow
```
import tensorflow as tf
# Create TensorFlow object called tensor
hello_constant = tf.constant('Hello World!')
with tf.Session() as sess:
# Run the tf.constant operation in the session
output = sess.run(hello_constant)
print(output);
A = tf.constant(1234)
B = tf.constant([123, 456, 789])
C = tf.constant([ [123, 145, 789], [222, 333, 444] ])
print(A)
# A "TensorFlow Session", as shown above, is an environment for running a graph.
# The session is in charge of allocating the operations to GPU(s) and/or CPU(s), including remote machines.
# Let’s see how you use it.
with tf.Session() as sess:
output = sess.run(A)
print(output)
# Sadly you can’t just set x to your dataset and put it in TensorFlow,
# because over time you'll want your TensorFlow model to take in different datasets with different parameters.
# You need tf.placeholder()!
# tf.placeholder() returns a tensor that gets its value from data passed to the tf.session.run() function,
# allowing you to set the input right before the session runs.
# Use the feed_dict parameter in tf.session.run() to set the placeholder tensor.
# The example below shows the tensor x being set to the string "Hello World".
# It's also possible to set more than one tensor using feed_dict as shown below.
x = tf.placeholder(tf.string)
with tf.Session() as sess:
output = sess.run(x, feed_dict={x: "Hello World"})
x = tf.placeholder(tf.string)
y = tf.placeholder(tf.int32)
z = tf.placeholder(tf.float32)
with tf.Session() as sess:
output = sess.run(x, feed_dict={x: 'Test String', y: 123, z: 45.67})
# Applying math
tf.multiply()
tf.subtract()
tf.add()
# Sometimes the inputs have to be cast accordingly
tf.cast(tf.constant(1), tf.float64)
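# A small sketch combining these ops; like everything else in TF1, they only
# produce values when run in a session (the numbers here are just illustrative).
a = tf.add(5, 2)        # 7
b = tf.subtract(10, 4)  # 6
c = tf.multiply(2, 5)   # 10
with tf.Session() as sess:
    print(sess.run([a, b, c]))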
# constants and placeholder are not mutable!!!
# --> variables: tf.Variable()
# --> needs to be initialized by tf.global_variables_initializer()
# --> good practice is to randomly initialize weights: tf.truncated_normal()
# Example for classification:
weights = tf.Variable(tf.truncated_normal((n_features, n_labels)))
# Zero method to initialize any variable with zeros (e.g the bias terms)
bias = tf.Variable(tf.zeros(n_labels))
# Multiplication for matrices: tf.matmul()
# Softmax function
tf.nn.softmax()
# Arbitrary dimension placeholders
# Features and Labels (e.g. for Neural Networks)
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
# Relu function (Activation function)
tf.nn.relu()
# Sticking hidden layers together
# Hidden Layer with ReLU activation function
hidden_layer = tf.add(tf.matmul(features, hidden_weights), hidden_biases)
hidden_layer = tf.nn.relu(hidden_layer)
output = tf.add(tf.matmul(hidden_layer, output_weights), output_biases)
# Variables have to be initialized as well in order to use them in the session
tf.global_variables_initializer()
```
# Build a Neural Network with Tensorflow
```
# Coding example for building a neural network with tensorflow
# Quiz Solution
import tensorflow as tf
output = None
hidden_layer_weights = [
[0.1, 0.2, 0.4],
[0.4, 0.6, 0.6],
[0.5, 0.9, 0.1],
[0.8, 0.2, 0.8]]
out_weights = [
[0.1, 0.6],
[0.2, 0.1],
[0.7, 0.9]]
# Weights and biases
weights = [
tf.Variable(hidden_layer_weights),
tf.Variable(out_weights)]
biases = [
tf.Variable(tf.zeros(3)),
tf.Variable(tf.zeros(2))]
# Input
features = tf.Variable([[1.0, 2.0, 3.0, 4.0], [-1.0, -2.0, -3.0, -4.0], [11.0, 12.0, 13.0, 14.0]])
# TODO: Create Model
hidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])
hidden_layer = tf.nn.relu(hidden_layer)
logits = tf.add(tf.matmul(hidden_layer, weights[1]), biases[1])
# TODO: save and print session results on variable output
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(logits)
print(output)
```
# Deep Neural Networks in Tensorflow
```
# For stacking muliple layers --> Deep NN
# Store layers weight & bias
weights = {
'hidden_layer': tf.Variable(tf.random_normal([n_input, n_hidden_layer])),
'out': tf.Variable(tf.random_normal([n_hidden_layer, n_classes]))
}
biases = {
'hidden_layer': tf.Variable(tf.random_normal([n_hidden_layer])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Example for input an image
#The MNIST data is made up of 28px by 28px images with a single channel.
# The tf.reshape() call below reshapes the 28px by 28px matrices in x into row vectors of 784px.
# tf Graph input
x = tf.placeholder("float", [None, 28, 28, 1])
y = tf.placeholder("float", [None, n_classes])
x_flat = tf.reshape(x, [-1, n_input])
# Builidng the model
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x_flat, weights['hidden_layer']),\
biases['hidden_layer'])
layer_1 = tf.nn.relu(layer_1)
# Output layer with linear activation
logits = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])
# Define the optimizer and the cost function
# Define loss and optimizer
cost = tf.reduce_mean(\
tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\
.minimize(cost)
# How to run the actual session in TF
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
```
# Saving Variables and trained Models and load them back
You save the particular **session** in a file
```
import tensorflow as tf
# The file path to save the data
save_file = './model.ckpt'
# Two Tensor Variables: weights and bias
weights = tf.Variable(tf.truncated_normal([2, 3]))
bias = tf.Variable(tf.truncated_normal([3]))
# Class used to save and/or restore Tensor Variables
saver = tf.train.Saver()
with tf.Session() as sess:
# Initialize all the Variables
sess.run(tf.global_variables_initializer())
# Show the values of weights and bias
print('Weights:')
print(sess.run(weights))
print('Bias:')
print(sess.run(bias))
# Save the model
saver.save(sess, save_file)
# Loading the variables back
# Remove the previous weights and bias
tf.reset_default_graph()
# Two Variables: weights and bias
weights = tf.Variable(tf.truncated_normal([2, 3]))
bias = tf.Variable(tf.truncated_normal([3]))
# Class used to save and/or restore Tensor Variables
saver = tf.train.Saver()
with tf.Session() as sess:
# Load the weights and bias
saver.restore(sess, save_file)
# Show the values of weights and bias
print('Weight:')
print(sess.run(weights))
print('Bias:')
print(sess.run(bias))
```
### The same works for trained models: just train a network as shown above and save the session afterwards
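A minimal sketch of that workflow, assuming the MNIST network, `optimizer`, placeholders `x`/`y`, and the training parameters from the deep-network example above are already defined:
```
save_file = './trained_model.ckpt'
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training cycle, exactly as before
    for epoch in range(training_epochs):
        total_batch = int(mnist.train.num_examples/batch_size)
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
    # Save the trained model (all tf.Variable weights and biases)
    saver.save(sess, save_file)

# To reuse it later: rebuild the same graph, create a Saver, and call
# saver.restore(sess, save_file) instead of running the initializer.
```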
# Dropout for regularization in Tensorflow
```
# In tensorflow, dropout is just another "layer" in the model
#During training, a good starting value for keep_prob is 0.5.
#During testing, use a keep_prob value of 1.0 to keep all units and maximize the power of the model.
keep_prob = tf.placeholder(tf.float32) # probability to keep units
hidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])
hidden_layer = tf.nn.relu(hidden_layer)
hidden_layer = tf.nn.dropout(hidden_layer, keep_prob)
logits = tf.add(tf.matmul(hidden_layer, weights[1]), biases[1])
```
# Convolutional Neural Network (CNN)
```
# Note the output shape of conv will be [1, 16, 16, 20].
# It's 4D to account for batch size, but more importantly, it's not [1, 14, 14, 20].
# This is because the padding algorithm TensorFlow uses is not exactly the same as the one above.
# An alternative algorithm is to switch padding from 'SAME' to 'VALID'
input = tf.placeholder(tf.float32, (None, 32, 32, 3))
filter_weights = tf.Variable(tf.truncated_normal((8, 8, 3, 20))) # (height, width, input_depth, output_depth)
filter_bias = tf.Variable(tf.zeros(20))
strides = [1, 2, 2, 1] # (batch, height, width, depth)
padding = 'SAME'
conv = tf.nn.conv2d(input, filter_weights, strides, padding) + filter_bias
```
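To see where the 16 comes from, here is a small sketch of the standard output-size rules TensorFlow applies for 'SAME' and 'VALID' padding, evaluated for the snippet above:
```
import math

def conv_output_size(input_size, filter_size, stride, padding):
    # TensorFlow's spatial output-size rules for 2D convolutions
    if padding == 'SAME':
        return math.ceil(input_size / stride)
    else:  # 'VALID'
        return math.ceil((input_size - filter_size + 1) / stride)

# 32x32 input, 8x8 filter, stride 2 (as above):
print(conv_output_size(32, 8, 2, 'SAME'))   # 16 -> output shape [1, 16, 16, 20]
print(conv_output_size(32, 8, 2, 'VALID'))  # 13
```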
## Example code for constructing a CNN
```
# Load data set
# Batch, scale and one-hot-encode it
# Set Parameters
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(".", one_hot=True, reshape=False)
import tensorflow as tf
# Parameters
learning_rate = 0.00001
epochs = 10
batch_size = 128
# Number of samples to calculate validation and accuracy
# Decrease this if you're running out of memory to calculate accuracy
test_valid_size = 256
# Network Parameters
n_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
# Define and store layers and biases
# Store layers weight & bias
weights = {
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
'out': tf.Variable(tf.random_normal([1024, n_classes]))}
biases = {
'bc1': tf.Variable(tf.random_normal([32])),
'bc2': tf.Variable(tf.random_normal([64])),
'bd1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([n_classes]))}
# Apply Convolution (i.e. create a convolution layer)
# The tf.nn.conv2d() function computes the convolution against weight W
def conv2d(x, W, b, strides=1):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
#In TensorFlow, strides is an array of 4 elements; the first element in this array indicates the stride for batch
#and last element indicates stride for features.
#It's good practice to remove the batches or features you want to skip from the data set rather than use a stride to skip them.
#You can always set the first and last element to 1 in strides in order to use all batches and features.
#The middle two elements are the strides for height and width respectively.
#I've mentioned stride as one number because you usually have a square stride where height = width.
#When someone says they are using a stride of 3, they usually mean tf.nn.conv2d(x, W, strides=[1, 3, 3, 1])
# Max Pooling
def maxpool2d(x, k=2):
return tf.nn.max_pool(
x,
ksize=[1, k, k, 1],
strides=[1, k, k, 1],
padding='SAME')
# The tf.nn.max_pool() function does exactly what you would expect,
# it performs max pooling with the ksize parameter as the size of the filter.
# Sticking the model together
def conv_net(x, weights, biases, dropout):
# Layer 1 - 28*28*1 to 14*14*32
conv1 = conv2d(x, weights['wc1'], biases['bc1'])
conv1 = maxpool2d(conv1, k=2)
# Layer 2 - 14*14*32 to 7*7*64
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
conv2 = maxpool2d(conv2, k=2)
# Fully connected layer - 7*7*64 to 1024
fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]]) # The reshape step is to flatten the filter layers
fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, dropout)
# Output Layer - class prediction - 1024 to 10
out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return out
# Run the session in tensorflow
# tf Graph input
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)
# Model
logits = conv_net(x, weights, biases, keep_prob)
# Define loss and optimizer
cost = tf.reduce_mean(\
tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\
.minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf. global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
for batch in range(mnist.train.num_examples//batch_size):
batch_x, batch_y = mnist.train.next_batch(batch_size)
sess.run(optimizer, feed_dict={
x: batch_x,
y: batch_y,
keep_prob: dropout})
# Calculate batch loss and accuracy
loss = sess.run(cost, feed_dict={
x: batch_x,
y: batch_y,
keep_prob: 1.})
valid_acc = sess.run(accuracy, feed_dict={
x: mnist.validation.images[:test_valid_size],
y: mnist.validation.labels[:test_valid_size],
keep_prob: 1.})
print('Epoch {:>2}, Batch {:>3} -'
'Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
epoch + 1,
batch + 1,
loss,
valid_acc))
# Calculate Test Accuracy
test_acc = sess.run(accuracy, feed_dict={
x: mnist.test.images[:test_valid_size],
y: mnist.test.labels[:test_valid_size],
keep_prob: 1.})
print('Testing Accuracy: {}'.format(test_acc))
```
# LeNet Architecture
## Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
```
## Pad the images to 32x32, the input size the LeNet architecture expects
```
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
```
## Visualize Data
View a sample from the dataset.
You do not need to modify this section.
```
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train) - 1)  # randint is inclusive on both ends
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
```
## Preprocess Data
Shuffle the training data.
You do not need to modify this section.
```
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
```
## Setup TensorFlow
The `EPOCHS` and `BATCH_SIZE` values affect the training speed and model accuracy.
```
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
```
### Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
### Architecture
**Layer 1: Convolutional.** The output shape should be 28x28x6.
**Activation.** Your choice of activation function.
**Pooling.** The output shape should be 14x14x6.
**Layer 2: Convolutional.** The output shape should be 10x10x16.
**Activation.** Your choice of activation function.
**Pooling.** The output shape should be 5x5x16.
**Flatten.** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using `tf.contrib.layers.flatten`, which is already imported for you.
**Layer 3: Fully Connected.** This should have 120 outputs.
**Activation.** Your choice of activation function.
**Layer 4: Fully Connected.** This should have 84 outputs.
**Activation.** Your choice of activation function.
**Layer 5: Fully Connected (Logits).** This should have 10 outputs.
### Output
Return the logits from the final (third) fully connected layer.
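For reference, these output shapes follow from the VALID-padding convolution formula
$$ out = \frac{in - filter}{stride} + 1 $$
A 5x5 filter over the 32x32 input with stride 1 gives (32 - 5)/1 + 1 = 28, and 2x2 max pooling with stride 2 halves that to 14. The same arithmetic gives 10x10 after the second convolution and 5x5 after its pooling, which is why the flattened vector has 5·5·16 = 400 elements.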
```
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
    weights = {
        'wc1': tf.Variable(tf.truncated_normal([5, 5, 1, 6], mean=mu, stddev=sigma)),
        'wc2': tf.Variable(tf.truncated_normal([5, 5, 6, 16], mean=mu, stddev=sigma)),
        'wd1': tf.Variable(tf.truncated_normal([400, 120], mean=mu, stddev=sigma)),
        'wd2': tf.Variable(tf.truncated_normal([120, 84], mean=mu, stddev=sigma)),
        'out': tf.Variable(tf.truncated_normal([84, 10], mean=mu, stddev=sigma))}
    biases = {
        'bc1': tf.Variable(tf.truncated_normal([6], mean=mu, stddev=sigma)),
        'bc2': tf.Variable(tf.truncated_normal([16], mean=mu, stddev=sigma)),
        'bd1': tf.Variable(tf.truncated_normal([120], mean=mu, stddev=sigma)),
        'bd2': tf.Variable(tf.truncated_normal([84], mean=mu, stddev=sigma)),
        'out': tf.Variable(tf.truncated_normal([10], mean=mu, stddev=sigma))}
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
layer1 = tf.nn.conv2d(x, weights['wc1'], strides=[1, 1, 1, 1], padding="VALID")
layer1 = tf.nn.bias_add(layer1, biases['bc1'])
# TODO: Activation.
layer1 = tf.nn.relu(layer1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
layer1 = tf.nn.max_pool(layer1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
# TODO: Layer 2: Convolutional. Output = 10x10x16.
layer2 = tf.nn.conv2d(layer1, weights['wc2'], strides=[1, 1, 1, 1], padding="VALID")
layer2 = tf.nn.bias_add(layer2, biases['bc2'])
# TODO: Activation.
layer2 = tf.nn.relu(layer2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
layer2 = tf.nn.max_pool(layer2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
# TODO: Flatten. Input = 5x5x16. Output = 400.
flattenedLayer2 = tf.contrib.layers.flatten(layer2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
layer3 = tf.add(tf.matmul(flattenedLayer2, weights['wd1']), biases['bd1'])
# TODO: Activation.
layer3 = tf.nn.relu(layer3)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
layer4 = tf.add(tf.matmul(layer3, weights['wd2']), biases['bd2'])
# TODO: Activation.
layer4 = tf.nn.relu(layer4)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
logits = tf.add(tf.matmul(layer4, weights['out']), biases['out'])
return logits
```
## Features and Labels
Train LeNet to classify [MNIST](http://yann.lecun.com/exdb/mnist/) data.
`x` is a placeholder for a batch of input images.
`y` is a placeholder for a batch of output labels.
```
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
```
## Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
```
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
```
## Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
```
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
```
## Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
```
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
```
## Evaluate the Model (on the test set)
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
```
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
```
| github_jupyter |
<a href="https://colab.research.google.com/github/bhuwanupadhyay/codes/blob/main/ipynbs/reshape_demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
pip install pydicom
# Import tensorflow
import logging
import tensorflow as tf
import keras.backend as K
# Helper libraries
import math
import numpy as np
import pandas as pd
import pydicom
import os
import sys
import time
# Imports for dataset manipulation
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
# Improve progress bar display
import tqdm
import tqdm.auto
tqdm.tqdm = tqdm.auto.tqdm
#tf.enable_eager_execution() #comment this out if causing errors
logger = tf.get_logger()
logger.setLevel(logging.DEBUG)
### SET MODEL CONFIGURATIONS ###
# Data Loading
CSV_PATH = 'label_data/CCC_clean.csv'
IMAGE_BASE_PATH = './data/'
test_size_percent = 0.15 # percent of total data reserved for testing
print(IMAGE_BASE_PATH)
# Data Augmentation
mirror_im = False
# Loss
lambda_coord = 5
epsilon = 0.00001
# Learning
step_size = 0.00001
BATCH_SIZE = 5
num_epochs = 1
# Saving
shape_path = 'trained_model/model_shape.json'
weight_path = 'trained_model/model_weights.h5'
# TensorBoard
tb_graph = False
tb_update_freq = 'batch'
### GET THE DATASET AND PREPROCESS IT ###
print("Loading and processing data\n")
data_frame = pd.read_csv(CSV_PATH)
"""
Construct numpy ndarrays from the loaded csv to use as training
and testing datasets.
"""
# zip all points for each image label together into a tuple
points = zip(data_frame['start_x'], data_frame['start_y'],
data_frame['end_x'], data_frame['end_y'])
img_paths = data_frame['imgPath']
def path_to_image(path):
"""
Load a matrix of pixel values from the DICOM image stored at the
input path.
@param path - string, relative path (from IMAGE_BASE_PATH) to
a DICOM file
@return image - numpy ndarray (int), 2D matrix of pixel
values of the image loaded from path
"""
# load image from path as numpy array
image = pydicom.dcmread(os.path.join(IMAGE_BASE_PATH, path)).pixel_array
return image
# normalize dicom image pixel values to 0-1 range
def normalize_image(img):
"""
    Normalize the pixel values in img to be within the range
of 0 to 1.
@param img - numpy ndarray, 2D matrix of pixel values
@return img - numpy ndarray (float), 2D matrix of pixel values, every
element is valued between 0 and 1 (inclusive)
"""
img = img.astype(np.float32)
img += abs(np.amin(img)) # account for negatives
img /= np.amax(img)
return img
# normalize the ground truth bounding box labels wrt image dimensions
def normalize_points(points):
"""
Normalize values in points to be within the range of 0 to 1.
    @param points - 1x4 tuple, elements valued in the range of 0 to
512 (inclusive). This is known from the nature
of the dataset used in this program
@return - 1x4 numpy ndarray (float), elements valued in range
0 to 1 (inclusive)
"""
imDims = 512.0 # each image in our dataset is 512x512
points = list(points)
for i in range(len(points)):
points[i] /= imDims
return np.array(points).astype(np.float32)
"""
Convert the numpy array of paths to the DICOM images to pixel
matrices that have been normalized to a 0-1 range.
Also normalize the bounding box labels to make it easier for
the model to predict on them.
"""
# apply preprocessing functions
# materialize the lazy map/zip objects into lists so they can be reused below
points = list(map(normalize_points, points))
imgs = list(map(path_to_image, img_paths))
imgs = list(map(normalize_image, imgs))
print("Loaded {} images".format(len(imgs)))
# reshape input image data to 4D shape (as expected by the model)
# and cast all data to np arrays (just in case)
imgs = np.array(imgs)
points = np.array(points)
imgs = imgs.reshape((-1, 512, 512, 1))
```
| github_jupyter |
## 20 Sept 2019
<strong>RULES</strong><br>
<strong>Date:</strong> Level 2 heading ## <br>
<strong>Example Heading:</strong> Level 3 heading ###<br>
<strong>Method Heading:</strong> Level 4 heading ####
### References
1. Forester W. Isen; J. Moura, *DSP for MATLAB and LabVIEW*, Volume II
2. H. K. Dass, *Advanced Engineering Mathematics*
3. Forester W. Isen; J. Moura, *DSP for MATLAB and LabVIEW*, Volume I
4. John G. Proakis; Dimitris G. Manolakis, *Digital Signal Processing*
### Imports
```
import numpy as np
from sympy import oo
import math
import sympy as sp
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import patches
from mpl_toolkits.mplot3d import Axes3D
from IPython.display import display
from IPython.display import display_latex
from sympy import latex
from scipy import signal
from datetime import datetime
```
### Setup
```
sp.init_printing(use_latex = True)
z, f, i = sp.symbols('z f i')
x, k = sp.symbols('x k')
```
### Methods
```
# Usage: display_equation('u_x', x)
def display_equation(idx, symObj):
if(isinstance(idx, str)):
eqn = '\\[' + idx + ' = ' + latex(symObj) + '\\]'
display_latex(eqn, raw=True)
else:
eqn = '\\[' + latex(idx) + ' = ' + latex(symObj) + '\\]'
display_latex(eqn, raw=True)
return
# Usage: display_full_latex('u_x')
def display_full_latex(idx):
if(isinstance(idx, str)):
eqn = '\\[' + idx + '\\]'
display_latex(eqn, raw=True)
else:
eqn = '\\[' + latex(idx) + '\\]'
display_latex(eqn, raw=True)
return
def ztrans(a, b):
F = sp.summation(f/z**k, ( k, a, b ))
return F
def display_ztrans(f, k, limits = (-4, 4)):
F = sp.summation(f/z**k, ( k, -oo, oo ))
display_equation('f(k)', f)
display_equation('F(k)_{\infty}', F)
F = sp.summation(f/z**k, (k, limits[0], limits[1]))
display_equation('F(k)_{'+ str(limits[0]) + ',' + str(limits[1]) + '}', F)
return
def sum_of_GP(a, r):
return sp.simplify(a/(1-r))
# Credit: https://www.dsprelated.com/showcode/244.php
def zplane(b,a,filename=None):
"""Plot the complex z-plane given a transfer function.
"""
# get a figure/plot
ax = plt.subplot(111)
# create the unit circle
uc = patches.Circle((0,0), radius=1, fill=False,
color='black', ls='dashed')
ax.add_patch(uc)
    # If any coefficient is greater than 1, normalize the coefficients
if np.max(b) > 1:
kn = np.max(b)
b = b/float(kn)
else:
kn = 1
if np.max(a) > 1:
kd = np.max(a)
a = a/float(kd)
else:
kd = 1
# Get the poles and zeros
p = np.roots(a)
z = np.roots(b)
k = kn/float(kd)
# Plot the zeros and set marker properties
t1 = plt.plot(z.real, z.imag, 'go', ms=10)
plt.setp( t1, markersize=10.0, markeredgewidth=1.0,
markeredgecolor='k', markerfacecolor='g')
# Plot the poles and set marker properties
t2 = plt.plot(p.real, p.imag, 'rx', ms=10)
plt.setp( t2, markersize=12.0, markeredgewidth=3.0,
markeredgecolor='b', markerfacecolor='b')
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# set the ticks
r = 1.5; plt.axis('scaled'); plt.axis([-r, r, -r, r])
ticks = [-1, -.5, .5, 1]; plt.xticks(ticks); plt.yticks(ticks)
if filename is None:
plt.show()
else:
plt.savefig(filename)
return z, p, k
```
### Z Transform
```
display_full_latex('X(z) = \sum_{-\infty}^{\infty} x[n]z^{-n}')
```
### Tests
#### Convert Symbolic to Numeric
```
f = x**2
f = sp.lambdify(x, f, 'numpy')
f(2)
display_equation('f(x)', sp.summation(3**k, ( k, -oo, oo )))
display_equation('F(z)', sp.summation(3**k/z**k, ( k, -oo, oo )))
```
#### Partial Fractions
```
f = 1/(x**2 + x - 6)
display_equation('f(x)', f)
f = sp.apart(f)
display_equation('f(x)_{canonical}', f)
```
#### Piecewise
```
f1 = 5**k
f2 = 3**k
f = sp.Piecewise((f1, k < 0), (f2, k >= 0))
display_equation('f(k)', f)
```
## 21 Sept 2019
#### Positive Time / Causal
```
f1 = k**2
f2 = 3**k
f = f1 * sp.Heaviside(k)
# or
#f = sp.Piecewise((0, k < 0), (f1, k >= 0))
display_equation('f(k)', f)
sp.plot(f, (k, -10, 10))
```
#### Stem Plot
```
x = np.linspace(0.1, 2 * np.pi, 41)
y = np.sin(x)
plt.stem(x, y)
plt.show()
```
#### zplane Plot
```
b = np.array([1, 1, 0, 0])
a = np.array([1, 1, 1])
zplane(b,a)
```
### Filter
```
g = (1 + z**-2)/(1-1.2*z**-1+0.81*z**-2)
display_equation('F(z)', g)
b = np.array([1,1])
a = np.array([1,-1.2,0.81])
x = np.ones((1, 8))
# Response
y = signal.lfilter(b, a, x)
# Reverse
signal.lfilter(a, b, y)
```
### [1] Example 2.2
```
radFreq = np.arange(0, 2*np.pi, 2*np.pi/499)
g = np.exp(1j*radFreq)
Zxform= 1/(1-0.7*g**(-1))
plt.plot(radFreq/np.pi,abs(Zxform))
plt.title('Graph')
plt.xlabel('Frequency, Units of π')
plt.ylabel('H(x)')
plt.grid(True)
plt.show()
```
### [2] Chapter 19, Example 5
```
f = 3**(-k)
display_ztrans(f, k, (-4, 3))
```
### [2] Example 9
```
f1 = 5**k
f2 = 3**k
f = sp.Piecewise((f1, k < 0), (f2, k >= 0))
display_ztrans(f, k, (-3, 3))
p = sum_of_GP(z/5, z/5)
q = sum_of_GP(1, 3/z)
display_equation('F(z)', sp.ratsimp(q + p))
```
## 28 Sept, 2019
### [3] Folding formula
fperceived = [ f - fsampling * NINT( f / fsampling ) ]
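A quick numeric sketch of the folding formula (the helper name and the 900 Hz / 1000 Hz sample values below are just illustrative):
```
import numpy as np

def f_perceived(f, f_sampling):
    # NINT = nearest integer; the spectrum folds around multiples of f_sampling
    return f - f_sampling * np.rint(f / f_sampling)

# e.g. a 900 Hz tone sampled at 1000 Hz is perceived as a 100 Hz alias
print(f_perceived(900, 1000))  # -100.0, i.e. 100 Hz with inverted phase
```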
## 9 Oct, 2019
### [3] Section 4.3
### Equations
```
display_full_latex('F \\rightarrow analog')
display_full_latex('f \\rightarrow discrete')
display_full_latex('Nyquist frequency = F_s')
display_full_latex('Folding frequency = \\frac{F_s}{2}')
display_full_latex('F_{max} = \\frac{F_s}{2}')
display_full_latex('T = \\frac{1}{F_s}')
display_full_latex('f = \\frac{F}{F_s}')
display_full_latex('f_k = \\frac{k}{N}')
display_full_latex('F_k = F_0 + kF_s, k = \\pm 1, \\pm 2, ...')
display_full_latex('x_a(t) = Asin(2\\pi Ft + \\theta)')
display_full_latex('x(n) = Asin(\\frac{2\\pi nk}{N} + \\theta)')
display_full_latex('x(n) = Asin(2\\pi fn + \\theta)')
display_full_latex('x(n) = x_a (nT) = Acos(2\\pi \\frac{F_0 + kF_s}{F_s} n + \\theta)')
display_full_latex('t = nT')
display_full_latex('\\Omega = 2\\pi F')
display_full_latex('\\omega = 2\\pi f')
display_full_latex('\\omega = \\Omega T')
display_full_latex('x_q(n) = Q[x(n)]')
display_full_latex('e_q(n) = x_q(n) - x(n)')
display_full_latex('Interpolation function, g(t) = \\frac{sin2\\pi Bt}{2\\pi Bt}')
display_full_latex('x_a(t) = \\sum^\\infty _{n = - \\infty} x_a(\\frac{n}{F_s}).g(t - \\frac{n}{F_s})')
display_full_latex('\\Delta = \\frac{x_{max} - x_{min}}{L-1}, where L = Number of quantization levels')
display_full_latex('-\\frac{\\Delta}{2} \\leq e_q(n) \\leq \\frac{\\Delta}{2}')
display_full_latex('b \\geq log_2 L')
display_full_latex('SQNR = \\frac{3}{2}.2^{2b}')
display_full_latex('SQNR(dB) = 10log_{10}SQNR = 1.76 + 6.02b')
x = np.arange(0, 10, 1)
y = np.power(0.9, x) * np.heaviside(np.power(0.9, x), 1)
display_full_latex('x_a(t) = 0.9^t')
display_full_latex('x(n) = 0.9^n')
plt.stem(x, y)
plt.plot(x, y, 'g-')
plt.xticks(np.arange(0, 10, 1))
plt.yticks(np.arange(0, 1.2, 0.1))
plt.xlabel('n')
plt.ylabel('x(n)')
plt.grid(True)
plt.show()
```
## 14 Oct, 2019
```
n = sp.symbols('n')
x = np.arange(0, 10, 1)
y = x * np.heaviside(x, 1)
f = sp.Piecewise((0, n < 0), (n, n >= 0))
display_equation('u_r(n)', f)
plt.stem(x, y)
plt.plot(x, y, 'g-')
plt.xticks(np.arange(0, 10, 1))
plt.yticks(np.arange(0, 10, 1))
plt.xlabel('n')
plt.ylabel('x(n)')
plt.grid(True)
plt.show()
display_full_latex('E = \\sum^\\infty _{n = -\\infty} |x(n)|^2')
display_full_latex('P = \\lim_{N \\rightarrow \\infty} \\frac{1}{2N + 1} \\sum^N _{n = -N} |x(n)|^2')
```
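As a small numeric illustration of these two formulas (a sketch using the truncated ramp signal plotted above; the 10-sample window is an assumption):
```
import numpy as np

n = np.arange(0.0, 10.0)
x = n * np.heaviside(n, 1)        # the ramp x(n) = n for n >= 0

energy = np.sum(np.abs(x) ** 2)   # E = sum |x(n)|^2 over the samples shown
power = energy / len(n)           # average power over this finite window
print(energy, power)              # 285.0 28.5
```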
## 16 Oct, 2019
#### General form of the input-output relationships
```
display_full_latex('y(n) = -\\sum^N _{k = 1}a_k y(n-k) + \\sum^M _{k = 0}b_k x(n-k)')
```
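As a quick numeric check (a sketch reusing the coefficients `b = [1, 1]`, `a = [1, -1.2, 0.81]` and the unit-step input from the Filter section above), the recursion can be implemented directly and compared with `signal.lfilter`:
```
import numpy as np
from scipy import signal

b = np.array([1.0, 1.0])          # feed-forward coefficients b_k
a = np.array([1.0, -1.2, 0.81])   # feedback coefficients a_k, with a_0 = 1
x = np.ones(8)

# direct implementation of y(n) = -sum_{k>=1} a_k y(n-k) + sum_{k>=0} b_k x(n-k)
y = np.zeros_like(x)
for n in range(len(x)):
    ff = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
    fb = sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
    y[n] = ff - fb

print(np.allclose(y, signal.lfilter(b, a, x)))  # True
```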
### [4] Example 3.2
```
h = np.array([1, 2, 1, -1])
x = np.array([1, 2, 3, 1])
y = np.convolve(h, x, mode='full')
#y = signal.convolve(h, x, mode='full', method='auto')
print(y)
fig, (ax_orig, ax_h, ax_x) = plt.subplots(3, 1, sharex=True)
ax_orig.plot(h)
ax_orig.set_title('Impulse Response')
ax_orig.margins(0, 0.1)
ax_h.plot(x)
ax_h.set_title('Input Signal')
ax_h.margins(0, 0.1)
ax_x.plot(y)
ax_x.set_title('Output')
ax_x.margins(0, 0.1)
fig.tight_layout()
fig.show()
```
## 17 Oct, 2019
### Sum of a GP (geometric progression) with common ratio r and first term a, starting from the zeroth term
```
a, r = sp.symbols('a r')
s = sp.summation(a*r**k, ( k, 0, n ))
display_equation('S_n', s)
```
### Sum of positive powers of a
```
a = sp.symbols('a')
s = sp.summation(a**k, ( k, 0, n ))
display_equation('S_n', s)
```
### [3] 4.12.3 Single Pole IIR
```
SR = 24
b = 1
p = 0.8
y = np.zeros((1, SR)).ravel()
x = np.zeros((1, SR + 1)).ravel()
x[0] = 1
y[0] = b * x[0]
for n in range(1, SR):
y[n] = b * x[n] + p * y[n - 1]
plt.stem(y)
```
### Copying the method above for [4] 4.1 Averaging
```
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
b = 1
y = np.zeros(len(x))
y[0] = b * x[0]
for n in range(1, len(x)):
y[n] = (n/(n + 1)) * y[n - 1] + (1/(n + 1)) * x[n]
print(y[n], '\n')
```
### My Recursive Averaging Implementation
```
def avg(x, n):
if (n < 0):
return 0
else:
return (n/(n + 1)) * avg(x, n - 1) + (1/(n + 1)) * x[n]
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
average = avg(x, len(x) - 1)
print(average)
```
### Performance Comparism
```
from timeit import timeit
code_rec = '''
import numpy as np
def avg(x, n):
if (n < 0):
return 0
else:
return (n/(n + 1)) * avg(x, n - 1) + (1/(n + 1)) * x[n]
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
average = avg(x, len(x) - 1)
'''
code_py = '''
import numpy as np
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
average = sum(x) / len(x)
'''
code_loop = '''
import numpy as np
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
sum = 0
for i in x:
sum += i
average = sum/len(x)
'''
running_time_rec = timeit(code_rec, number = 100) / 100
running_time_py = timeit(code_py, number = 100) / 100
running_time_loop = timeit(code_loop, number = 100) / 100
print("Running time using my recursive average function: \n",running_time_rec, '\n')
print("Running time using python sum function: \n",running_time_py)
print("Running time using loop python function: \n",running_time_loop)
```
### [4] Example 4.1
```
def rec_sqrt(x, n):
if (n == -1):
return 1
else:
return (1/2) * (rec_sqrt(x, n - 1) + (x[n]/rec_sqrt(x, n - 1)))
A = 2
x = np.ones((1, 5)).ravel() * A
print(rec_sqrt(x, len(x) - 1))
b = np.array([1, 1, 1, 1, 1])
a = np.array([1, 0, 0])
zplane(b,a)
```
| github_jupyter |
# Scripting languages – Python
## Modules and packages
### M1 Multilingual Engineering – INaLCO
[email protected]
Modules and packages are how you add functionality to Python.
A module is a file (```.py```) that contains functions and/or classes.
<small>And documentation, of course</small>
A package is a directory containing modules and subdirectories.
It's as simple as that. Of course, once you get into the details it's a bit more complicated.
## A module
```
%%file operations.py
# -*- coding: utf-8 -*-
"""
Module for the lesson on modules
Arithmetic operations
"""
def addition(a, b):
""" Ben une addition quoi : a + b """
return a + b
def soustraction(a, b):
""" Une soustraction : a - b """
return a - b
```
To use it, you can:
* import it by its name
```
import operations
operations.addition(2, 4)
```
* import it and change its name
```
import operations as op
op.addition(2, 4)
```
* import part of the module
```
from operations import addition
addition(2, 4)
```
* import the whole module
```
from operations import *
addition(2, 4)
soustraction(4, 2)
```
In reality, only the functions and/or classes whose names do not start with '_' are imported.
Using `import *` is not recommended. Because, as you know, "*explicit is better than implicit*". And by adding the functions to the script's namespace you risk overwriting existing functions.
Add a `print` call to your module to see for yourself (careful: a module is only loaded once, so you will have to restart the kernel or go through the console).
Another definition of a module: it is an object of type ``module``.
```
import operations
type(operations)
```
``import`` adds attributes to the module
```
import operations
print(f"name : {operations.__name__}")
print(f"file : {operations.__file__}")
print(f"doc : {operations.__doc__}")
```
## A package
```
! tree operations_pack
```
A Python package can contain modules, directories and subdirectories, and quite often non-Python material: HTML docs, test data, etc.
The main directory and any directory containing Python modules must contain an `__init__.py` file.
`__init__.py` can be empty, contain initialization code, or define the `__all__` variable.
```
import operations_pack.simple
operations_pack.simple.addition(2, 4)
from operations_pack import simple
simple.soustraction(4, 2)
```
``__all__`` in ``__init__.py`` defines which modules are imported with ``import *`` (see the sketch after the next cell).
```
from operations_pack.avance import *
multi.multiplication(2,4)
```
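For example (a minimal sketch; the layout of `operations_pack/avance/` is inferred from the calls above), the subpackage's `__init__.py` could expose only the `multi` module:
```
# operations_pack/avance/__init__.py
# only the names listed in __all__ are pulled in by "from operations_pack.avance import *"
__all__ = ["multi"]
```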
## Where are modules and packages found?
For ``import`` to work, the modules must be on Python's path.
```
import sys
sys.path
```
``sys.path`` is a list, so you can modify it
```
sys.path.append("[...]") # le chemin vers le dossier operations_pack
sys.path
```
## Installing modules and packages
Recent Python distributions come with `pip` installed, which is good news.
With `pip` you can:
* install a module: `pip install module` or `pip install --user module`
`pip` will find the module on PyPI and install it in the right place if it exists. It will also install its dependencies.
* uninstall a module: `pip uninstall module`
* upgrade: `pip install module --upgrade`
* downgrade to a specific version: `pip install module==0.9 --upgrade`
* save your development environment, i.e. the list of your modules: `pip freeze > requirements.txt`
This lets you reinstall it on another machine: `pip install -r requirements.txt`
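For illustration, such a `requirements.txt` is just a list of pinned versions, one package per line (the names and versions below are arbitrary examples):
```
numpy==1.21.0
requests==2.26.0
sympy==1.8
```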
## Keeping versions under control
Python evolves from one version to the next, and so do packages. This can cause problems when you want to share your code, or even when you want to run code that needs a specific version.
There is a tool for isolating development environments: ``virtualenv``.
``virtualenv /path/mon_projet`` or ``python3 -m venv /path/mon_projet`` creates a folder with lots of things in it, including a Python interpreter.
You can specify the Python version with ``virtualenv /path/mon_projet -p /usr/bin/python3.6``.
To activate the environment: ``source /path/mon_projet/bin/activate`` (``/path/mon_projet/Scripts/activate.bat`` on Windows, I believe).
To leave it: ``deactivate``.
When you work in a venv, the modules you install with pip are isolated in the venv and nowhere else.
When you run ``python`` you get the venv's interpreter and the venv's modules.
With this tool you have to install the desired modules each time, but at least you don't get things mixed up. And you can hand a ``requirements.txt`` file to a colleague, who can reproduce the venv on their machine.
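A quick way to check from Python itself whether you are inside a venv (a minimal sketch; it applies to environments created by `python3 -m venv`):
```
import sys

# in an activated venv, sys.prefix points inside the environment,
# while sys.base_prefix still points to the base interpreter
print("Running inside a venv:", sys.prefix != sys.base_prefix)
```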
There is also ``pipenv``, a more recent tool that combines ``pip`` and ``virtualenv``.
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
EXPERIMENT = 'bivariate_power'
TAG = ''
df = pd.read_csv(f'./results/{EXPERIMENT}_results{TAG}.csv', sep=', ', engine='python')
plot_df = df
x_var_rename_dict = {
'sample_size': '# Samples',
'Number of environments': '# Environments',
'Fraction of shifting mechanisms': 'Shift fraction',
'dag_density': 'Edge density',
'n_variables': '# Variables',
}
plot_df = df.rename(
x_var_rename_dict, axis=1
).rename(
{'Method': 'Test', 'Soft': 'Score'}, axis=1
).replace(
{
'er': 'Erdos-Renyi',
'ba': 'Hub',
'PC (pool all)': 'Full PC (oracle)',
'Full PC (KCI)': r'Pooled PC (KCI) [25]',
'Min changes (oracle)': 'MSS (oracle)',
'Min changes (KCI)': 'MSS (KCI)',
'Min changes (GAM)': 'MSS (GAM)',
'Min changes (Linear)': 'MSS (Linear)',
'Min changes (FisherZ)': 'MSS (FisherZ)',
'MC': r'MC [11]',
False: 'Hard',
True: 'Soft',
}
)
plot_df = plot_df.loc[
(~plot_df['Test'].isin(['Full PC (oracle)', 'MSS (oracle)'])) &
(plot_df['# Environments'] == 2) &
(plot_df['Score'] == 'Hard')
]
plot_df = plot_df.replace({
'[[];[0]]': 'P(X1)',
'[[];[1]]': 'P(X2|X1)',
'[[];[]]': 'Neither',
'[[];[0;1]]': 'Both',
})
plot_df['Test'].unique()
intv_targets = ['P(X1)', 'P(X2|X1)', 'Neither', 'Both']
ax_var = 'intervention_targets'
for targets in intv_targets:
display(plot_df[plot_df[ax_var] == targets].groupby('Test').mean().reset_index().head(3))
sns.set_context('paper')
fig, axes = plt.subplots(1, 4, sharey=True, sharex=True, figsize=(7.5, 2.5))
intv_targets = ['P(X1)', 'P(X2|X1)', 'Neither', 'Both']
ax_var = 'intervention_targets'
x_var = 'Precision' # 'False orientation rate' #
y_var = 'Recall' # 'True orientation rate'#
hue = 'Test'
for targets, ax in zip(intv_targets, axes.flatten()):
mean_df = plot_df[plot_df[ax_var] == targets].groupby('Test').mean().reset_index()
std_df = plot_df[plot_df[ax_var] == targets].groupby('Test')[['Precision', 'Recall']].std().reset_index()
std_df.rename(
{'Precision': 'Precision std', 'Recall': 'Recall std'}, axis=1
)
g = sns.scatterplot(
data=plot_df[plot_df[ax_var] == targets].groupby('Test').mean().reset_index(),
x=x_var,
y=y_var,
hue=hue,
ax=ax,
# markers=['d', 'P', 's'],
palette=[
sns.color_palette("tab10")[i]
for i in [2, 3, 4, 5, 7, 6] # 3, 4, 5,
],
hue_order=[
'MSS (KCI)',
'MSS (GAM)',
'MSS (FisherZ)',
'MSS (Linear)',
'Pooled PC (KCI) [25]',
'MC [11]',
],
legend='full',
# alpha=1,
s=100
)
# ax.axvline(0.05, ls=':', c='grey')
ax.set_title(f'Shift in {targets}')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
for ax in axes[:-1]:
ax.get_legend().remove()
# ax.set_ylim([0, 1])
# ax.set_xlim([0, 1])
plt.tight_layout()
plt.savefig('./figures/bivariate_power_plots.pdf')
plt.show()
```
| github_jupyter |
```
### Python's Dir Function ###
def attributes_and_methods(inp):
print("The Attributes and Methods of a {} are:".format(type(inp)))
print(dir(inp))
# Change x to be any different type
# to get different results for that data type
x = 'abc'
attributes_and_methods(x)
### Contextmanager Example ###
# one way to write code that can use Python's "with" statement
# print statements have been added to show the order of operations
from contextlib import contextmanager
# add the decorator @contextmanager to a function of your own
@contextmanager
def managed_function():
print("2.) Inside function")
# put a "try: except: else: finally" block inside your function
try:
print("3.) Inside 'try' block")
# yield whatever it is you'd like to work with within your "with" statement block
var = "abc"
print("4.) Leaving function via 'yield'")
yield var
# by using the 'yield' command this function is technically a 'generator'
print("8.) Next line after yield is run")
except:
print("Inside except block")
# put any code here you want to run in the event there is an error in the 'try' block
pass
else:
print("9.) Inside else block")
# put any code here you want to run in the event there is NOT an error in the 'try' block
pass
finally:
print("10.) Inside finally block")
# put the clean up code. The reason you would want to use a contextmanager is to have something opened and closed for you just by using the 'with' statement in the rest of your code. That which goes in the 'finally' block will run no matter what happens in the 'try' block
del var
print("1.) Starting now")
with managed_function() as mf:
print("5.) Outside function")
print("6.)", mf)
print("7.) with block finished with no errors, going back into the function now at the line after 'yield'")
### Comprehension With Functions and Classes ###
def multiply_by_2(a):
return a * 2
class Simple(object):
def __init__(self, my_string):
self.my_string = str(my_string)
def result(self):
return f'Hi, {self.my_string}!'
list_comp1 = [multiply_by_2(x) for x in range(5)]
print(list_comp1)
name_list = ['Bill', 'Joe', 'Steve']
list_comp2 = [Simple(name).result() for name in name_list]
print(list_comp2)
### Extending Builtin Types ###
first = {'a': 1, 'b': 2, 'c': 3}
second = {'d': 4, 'e': 5, 'f': 6}
try:
result = first + second
print(result)
except TypeError:
print("Can't add dicts the normal way\n")
print('But you can inherit from builtin dict in order to extend it\n')
class my_dict(dict):
def __init__(self, d):
print('\tCreating new object now...')
self.d = d
pass
def __add__(self, other):
print('\tAdding now...')
result = {}
for entry in self.d:
result[entry] = self.d[entry]
for entry in other.d:
result[entry] = other.d[entry]
return result
print('Instantiate new objects:')
first = my_dict(first)
second = my_dict(second)
print('\nCalling new dunder operator method...')
result = first + second
print(result)
### Fuzzy Lookup ###
# cutoff defaults to 0.6 matches
from difflib import get_close_matches
input_list = [
'Happy',
'Sad',
'Angry',
'Elated',
'Upset'
]
check_for = 'Happiness'
result = get_close_matches(check_for,
input_list)
print(result, "# Can't find it?")
check_for = 'Happiness'
result = get_close_matches(check_for,
input_list,
cutoff=0.4)
print(result, "# Lower the cutoff")
check_for = 'Sadness'
result = get_close_matches(check_for,
input_list)
print()
print(result)
check_for = 'Anger'
result = get_close_matches(check_for,
input_list)
print(result)
check_for = 'Elation'
result = get_close_matches(check_for,
input_list)
print(result)
print()
check_for = 'Setup'
result = get_close_matches(check_for,
input_list)
print(result, "# Can't find it?")
check_for = 'Setup'
result = get_close_matches(check_for,
input_list,
cutoff=0.4)
print(result, "# Lower the cutoff")
### Logging Example ###
import logging
logging.basicConfig(
filename="program.log",
level=logging.DEBUG
)
logging.warning("testing 1213")
logging.debug("debug line here")
def function(a,b):
logging.info(f"{a}-{b}")
function(3,4)
with open("program.log") as p:
lines = p.readlines()
print(lines)
### Mix And Match ###
"""
This is meant to serve as a quick
example of some different ways provided
in the standard library to mix and match
your data
"""
from itertools import combinations, permutations, product
my_list = ["a", "b", "c", "d"]
print("Input list =", my_list)
print()
# Combinations
print("itertools.combinations with Length = 2")
for combo in combinations(my_list, 2):
print(combo)
print()
print("itertools.combinations - Length = 3")
for combo in combinations(my_list, 3):
print(combo)
print()
# Permutations
print("itertools.permutations with Length = 2")
for perm in permutations(my_list, 2):
print(perm)
print()
print("itertools.permutations with Length = 3")
for perm in permutations(my_list, 3):
print(perm)
print()
# Product
print("itertools.product with `repeat` = 2")
for prod in product(my_list, repeat=2):
print(prod)
print()
print("itertools.product with `repeat` = 3")
for prod in product(my_list, repeat=3):
print(prod)
print()
print(
"""
Bottom line:
combinations -> all variations where order doesn't matter
permutations -> all variations where order matters
product -> all variations where order matters AND you can replace each list item once for every additional item desired in the final iterables
"""
)
### Simple Regex With Comments ###
import re
pattern = (
"^" # at the start of the line
"[A-Z]+" # find 1 or more capital letters
"-" # followed by a dash
"[0-9]{2}" # and 2 numbers
)
checklist = [
"ERS-87", # match
"DJHDJJ-55", # match
"abbjd-44", # no match(undercase)
"DFT-1", # no match(not enough #s)
]
for item in checklist:
if re.match(pattern, item):
print('"{}"'.format(item), "Matched!")
else:
print('"{}"'.format(item), "Did not match...")
### Sorting Integers Stored As Strings ###
my_list = ['1', '5', '10', '15', '20']
# A straight sort of the list will give different results than you might have expected
sorted_list = sorted(my_list)
print("Without key:\n", sorted_list)
print()
# Using the 'key' argument in sorted() allows you to specify that even though this is a list of strings technically, there are integers inside the strings and to sort as if they were integers, in numerical order
other_sorted_list = sorted(my_list, key=int)
print("With key:\n", other_sorted_list)
### View The Bytecode You'Re Creating ###
from dis import dis
def test():
variable_1 = 1 + 1
variable_2 = 5
variable_3 = variable_1 + variable_2
print("Answer here", variable_3)
dis(test)
print('----------------------')
test()
```
| github_jupyter |
<img src="https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png" align="left" alt="banner">
# Working with Watson Machine Learning
This notebook should be run in a Watson Studio project, using **Default Python 3.7.x** runtime environment. **If you are viewing this in Watson Studio and do not see Python 3.7.x in the upper right corner of your screen, please update the runtime now.** It requires service credentials for the following Cloud services:
* Watson OpenScale
* Watson Machine Learning
If you have a paid Cloud account, you may also provision a **Databases for PostgreSQL** or **Db2 Warehouse** service to take full advantage of integration with Watson Studio and continuous learning services. If you choose not to provision this paid service, you can use the free internal PostgreSQL storage with OpenScale, but will not be able to configure continuous learning for your model.
The notebook will train, create and deploy a House Price regression model, configure OpenScale to monitor that deployment in the OpenScale Insights dashboard.
### Contents
- [Setup](#setup)
- [Model building and deployment](#model)
- [OpenScale configuration](#openscale)
- [Quality monitor and feedback logging](#quality)
- [Fairness, drift monitoring and explanations](#fairness)
# Setup <a name="setup"></a>
## Package installation
```
import warnings
warnings.filterwarnings('ignore')
!rm -rf /home/spark/shared/user-libs/python3.7*
!pip install --upgrade pandas==1.2.3 --no-cache | tail -n 1
!pip install --upgrade requests==2.23 --no-cache | tail -n 1
!pip install --upgrade numpy==1.20.3 --user --no-cache | tail -n 1
!pip install SciPy --no-cache | tail -n 1
!pip install lime --no-cache | tail -n 1
!pip install --upgrade ibm-watson-machine-learning --user | tail -n 1
!pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1
!pip install --upgrade xgboost==1.3.3 --no-cache | tail -n 1
```
## Provision services and configure credentials
If you have not already, provision an instance of IBM Watson OpenScale using the [OpenScale link in the Cloud catalog](https://cloud.ibm.com/catalog/services/watson-openscale).
Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below.
**NOTE:** You can also get the OpenScale `API_KEY` using the IBM Cloud CLI.
How to install the IBM Cloud (Bluemix) CLI: [instructions](https://console.bluemix.net/docs/cli/reference/ibmcloud/download_cli.html#install_use)
How to get an API key using the CLI:
```
bx login --sso
bx iam api-key-create 'my_key'
```
```
CLOUD_API_KEY = "***"
IAM_URL="https://iam.ng.bluemix.net/oidc/token"
```
### WML credentials example with API key
```
WML_CREDENTIALS = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": CLOUD_API_KEY
}
```
### WML credentials example using IAM_token
**NOTE**: If IAM_TOKEN is used for authentication and you receive unauthorized/expired token error at any steps, please create a new token and reinitiate clients authentication.
```
# #uncomment this cell if want to use IAM_TOKEN
# import requests
# def generate_access_token():
# headers={}
# headers["Content-Type"] = "application/x-www-form-urlencoded"
# headers["Accept"] = "application/json"
# auth = HTTPBasicAuth("bx", "bx")
# data = {
# "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
# "apikey": CLOUD_API_KEY
# }
# response = requests.post(IAM_URL, data=data, headers=headers, auth=auth)
# json_data = response.json()
# iam_access_token = json_data['access_token']
# return iam_access_token
#uncomment this cell if want to use IAM_TOKEN
# IAM_TOKEN = generate_access_token()
# WML_CREDENTIALS = {
# "url": "https://us-south.ml.cloud.ibm.com",
# "token": IAM_TOKEN
# }
```
### Cloud object storage details
In next cells, you will need to paste some credentials to Cloud Object Storage. If you haven't worked with COS yet please visit [getting started with COS tutorial](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started).
You can find the `COS_API_KEY_ID` and `COS_RESOURCE_CRN` variables under **_Service Credentials_** in the menu of your COS instance. The COS Service Credentials you use must be created with the _Role_ parameter set to Writer. Later, the training data file will be loaded into the bucket of your instance and used as the training data reference in the subscription.
The `COS_ENDPOINT` variable can be found in the **_Endpoint_** field of the menu.
```
COS_API_KEY_ID = "***"
COS_RESOURCE_CRN = "***" # eg "crn:v1:bluemix:public:cloud-object-storage:global:a/3bf0d9003abfb5d29761c3e97696b71c:d6f04d83-6c4f-4a62-a165-696756d63903::"
COS_ENDPOINT = "***" # Current list available at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints
BUCKET_NAME = "***"
training_data_file_name="house_price_regression.csv"
```
This tutorial can use Databases for PostgreSQL, Db2 Warehouse, or a free internal version of PostgreSQL to create a datamart for OpenScale.
If you have previously configured OpenScale, it will use your existing datamart, and not interfere with any models you are currently monitoring. Do not update the cell below.
If you do not have a paid Cloud account or would prefer not to provision this paid service, you may use the free internal PostgreSQL service with OpenScale. Do not update the cell below.
To provision a new instance of Db2 Warehouse, locate [Db2 Warehouse in the Cloud catalog](https://cloud.ibm.com/catalog/services/db2-warehouse), give your service a name, and click **Create**. Once your instance is created, click the **Service Credentials** link on the left side of the screen. Click the **New credential** button, give your credentials a name, and click **Add**. Your new credentials can be accessed by clicking the **View credentials** button. Copy and paste your Db2 Warehouse credentials into the cell below.
To provision a new instance of Databases for PostgreSQL, locate [Databases for PostgreSQL in the Cloud catalog](https://cloud.ibm.com/catalog/services/databases-for-postgresql), give your service a name, and click **Create**. Once your instance is created, click the **Service Credentials** link on the left side of the screen. Click the **New credential** button, give your credentials a name, and click **Add**. Your new credentials can be accessed by clicking the **View credentials** button. Copy and paste your Databases for PostgreSQL credentials into the cell below.
```
DB_CREDENTIALS = None
#DB_CREDENTIALS= {"hostname":"","username":"","password":"","database":"","port":"","ssl":True,"sslmode":"","certificate_base64":""}
KEEP_MY_INTERNAL_POSTGRES = True
```
## Run the notebook
At this point, the notebook is ready to run. You can either run the cells one at a time, or click the **Kernel** option above and select **Restart and Run All** to run all the cells.
# Model building and deployment <a name="model"></a>
In this section you will learn how to train an XGBoost regression model and deploy it as a web service using the Watson Machine Learning service.
## Load the training data from github
```
!rm house_price_regression.csv
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/house_price/house_price_regression.csv
import pandas as pd
import numpy as np
pd_data = pd.read_csv("house_price_regression.csv")
pd_data.head()
```
## Explore data
## Save training data to Cloud Object Storage
```
import ibm_boto3
from ibm_botocore.client import Config, ClientError
cos_client = ibm_boto3.resource("s3",
ibm_api_key_id=COS_API_KEY_ID,
ibm_service_instance_id=COS_RESOURCE_CRN,
ibm_auth_endpoint="https://iam.bluemix.net/oidc/token",
config=Config(signature_version="oauth"),
endpoint_url=COS_ENDPOINT
)
with open(training_data_file_name, "rb") as file_data:
cos_client.Object(BUCKET_NAME, training_data_file_name).upload_fileobj(
Fileobj=file_data
)
```
## Create a model
```
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
pd_data.dropna(axis=0, subset=['SalePrice'], inplace=True)
label = pd_data.SalePrice
feature_data = pd_data.drop(['SalePrice'], axis=1).select_dtypes(exclude=['object'])
train_X, test_X, train_y, test_y = train_test_split(feature_data.values, label.values, test_size=0.25)
my_imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
train_X = my_imputer.fit_transform(train_X)
test_X = my_imputer.transform(test_X)
from xgboost import XGBRegressor
from sklearn.compose import ColumnTransformer
model=XGBRegressor()
model.fit(train_X, train_y, eval_metric=['error'],
eval_set=[(test_X, test_y)], verbose=False)
# make predictions
predictions = model.predict(test_X)
from sklearn.metrics import mean_absolute_error
print("Mean Absolute Error : " + str(mean_absolute_error(predictions, test_y)))
```
### wrap xgboost with scikit pipeline
```
from sklearn.pipeline import Pipeline
xgb_model_imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
pipeline = Pipeline(steps=[('Imputer', xgb_model_imputer), ('xgb', model)])
model_xgb=pipeline.fit(train_X, train_y)
# make predictions
predictions = model_xgb.predict(test_X)
from sklearn.metrics import mean_absolute_error
print("Mean Absolute Error : " + str(mean_absolute_error(predictions, test_y)))
```
## Publish the model
```
import json
from ibm_watson_machine_learning import APIClient
wml_client = APIClient(WML_CREDENTIALS)
wml_client.version
```
### Listing all the available spaces
```
wml_client.spaces.list(limit=10)
WML_SPACE_ID='***' # use space id here
wml_client.set.default_space(WML_SPACE_ID)
```
### Remove existing model and deployment
```
MODEL_NAME="house_price_xgbregression"
DEPLOYMENT_NAME="house_price_xgbregression_deployment"
deployments_list = wml_client.deployments.get_details()
for deployment in deployments_list["resources"]:
model_id = deployment["entity"]["asset"]["id"]
deployment_id = deployment["metadata"]["id"]
if deployment["metadata"]["name"] == DEPLOYMENT_NAME:
print("Deleting deployment id", deployment_id)
wml_client.deployments.delete(deployment_id)
print("Deleting model id", model_id)
wml_client.repository.delete(model_id)
wml_client.repository.list_models()
training_data_references = [
{
"id": "product line",
"type": "s3",
"connection": {
"access_key_id": COS_API_KEY_ID,
"endpoint_url": COS_ENDPOINT,
"resource_instance_id":COS_RESOURCE_CRN
},
"location": {
"bucket": BUCKET_NAME,
"path": training_data_file_name,
}
}
]
# Note: if there is a specification-related exception or the specification ID is None, use "default_py3.8" instead of "default_py3.7_opence"
software_spec_uid = wml_client.software_specifications.get_id_by_name("default_py3.7_opence")
print("Software Specification ID: {}".format(software_spec_uid))
model_props = {
wml_client._models.ConfigurationMetaNames.NAME:"{}".format(MODEL_NAME),
wml_client._models.ConfigurationMetaNames.TYPE: "scikit-learn_0.23",
wml_client._models.ConfigurationMetaNames.SOFTWARE_SPEC_UID: software_spec_uid,
wml_client._models.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: training_data_references,
wml_client._models.ConfigurationMetaNames.LABEL_FIELD: "SalePrice",
}
print("Storing model ...")
published_model_details = wml_client.repository.store_model(
model=model_xgb,
meta_props=model_props,
training_data=feature_data,
training_target=label
)
model_uid = wml_client.repository.get_model_uid(published_model_details)
print("Done")
print("Model ID: {}".format(model_uid))
```
## Deploy the model
The next section of the notebook deploys the model as a RESTful web service in Watson Machine Learning. The deployed model will have a scoring URL you can use to send data to the model for predictions.
```
deployment_details = wml_client.deployments.create(
model_uid,
meta_props={
wml_client.deployments.ConfigurationMetaNames.NAME: "{}".format(DEPLOYMENT_NAME),
wml_client.deployments.ConfigurationMetaNames.ONLINE: {}
}
)
scoring_url = wml_client.deployments.get_scoring_href(deployment_details)
deployment_uid=wml_client.deployments.get_uid(deployment_details)
print("Scoring URL:" + scoring_url)
print("Model id: {}".format(model_uid))
print("Deployment id: {}".format(deployment_uid))
```
## Sample scoring
```
fields = feature_data.columns.tolist()
values = [
test_X[0].tolist()
]
scoring_payload = {"input_data": [{"fields": fields, "values": values}]}
scoring_payload
scoring_response = wml_client.deployments.score(deployment_uid, scoring_payload)
scoring_response
```
# Configure OpenScale <a name="openscale"></a>
The notebook will now import the necessary libraries and set up a Python OpenScale client.
```
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator,BearerTokenAuthenticator
from ibm_watson_openscale import *
from ibm_watson_openscale.supporting_classes.enums import *
from ibm_watson_openscale.supporting_classes import *
authenticator = IAMAuthenticator(apikey=CLOUD_API_KEY)
wos_client = APIClient(authenticator=authenticator)
wos_client.version
```
## Create schema and datamart
### Set up datamart
Watson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were **not** supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there **unless** there is an existing datamart **and** the **KEEP_MY_INTERNAL_POSTGRES** variable is set to **True**. If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten.
Prior instances of the House price model will be removed from OpenScale monitoring.
```
wos_client.data_marts.show()
data_marts = wos_client.data_marts.list().result.data_marts
if len(data_marts) == 0:
if DB_CREDENTIALS is not None:
if SCHEMA_NAME is None:
print("Please specify the SCHEMA_NAME and rerun the cell")
print('Setting up external datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
database_configuration=DatabaseConfigurationRequest(
database_type=DatabaseType.POSTGRESQL,
credentials=PrimaryStorageCredentialsLong(
hostname=DB_CREDENTIALS['hostname'],
username=DB_CREDENTIALS['username'],
password=DB_CREDENTIALS['password'],
db=DB_CREDENTIALS['database'],
port=DB_CREDENTIALS['port'],
ssl=True,
sslmode=DB_CREDENTIALS['sslmode'],
certificate_base64=DB_CREDENTIALS['certificate_base64']
),
location=LocationSchemaName(
schema_name= SCHEMA_NAME
)
)
).result
else:
print('Setting up internal datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
internal_database = True).result
data_mart_id = added_data_mart_result.metadata.id
else:
data_mart_id=data_marts[0].metadata.id
print('Using existing datamart {}'.format(data_mart_id))
```
### Remove existing service providers connected with the WML instance used
Watson OpenScale allows multiple service providers for the same engine instance. To avoid duplicate service providers for the WML instance used in this tutorial, the following code deletes any existing service provider(s) and then adds a new one.
```
SERVICE_PROVIDER_NAME = "xgboost_WML V2"
SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WOS notebook."
service_providers = wos_client.service_providers.list().result.service_providers
for service_provider in service_providers:
service_instance_name = service_provider.entity.name
if service_instance_name == SERVICE_PROVIDER_NAME:
service_provider_id = service_provider.metadata.id
wos_client.service_providers.delete(service_provider_id)
print("Deleted existing service_provider for WML instance: {}".format(service_provider_id))
```
## Add service provider
Watson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model.
**Note:** You can bind more than one engine instance if needed by calling `wos_client.service_providers.add` method. Next, you can refer to particular service provider using `service_provider_id`.
```
added_service_provider_result = wos_client.service_providers.add(
name=SERVICE_PROVIDER_NAME,
description=SERVICE_PROVIDER_DESCRIPTION,
service_type=ServiceTypes.WATSON_MACHINE_LEARNING,
deployment_space_id = WML_SPACE_ID,
operational_space_id = "production",
credentials=WMLCredentialsCloud(
apikey=CLOUD_API_KEY, ## use `apikey=IAM_TOKEN` if using IAM_TOKEN to initiate client
url=WML_CREDENTIALS["url"],
instance_id=None
),
background_mode=False
).result
service_provider_id = added_service_provider_result.metadata.id
wos_client.service_providers.show()
asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id,deployment_id=deployment_uid, deployment_space_id = WML_SPACE_ID).result['resources'][0]
asset_deployment_details
model_asset_details_from_deployment=wos_client.service_providers.get_deployment_asset(data_mart_id=data_mart_id,service_provider_id=service_provider_id,deployment_id=deployment_uid,deployment_space_id=WML_SPACE_ID)
model_asset_details_from_deployment
```
## Subscriptions
### Remove existing House price model subscriptions
First, list the current subscriptions.
```
wos_client.subscriptions.show()
```
This code removes previous subscriptions to the House price model to refresh the monitors with the new model and new data.
```
subscriptions = wos_client.subscriptions.list().result.subscriptions
for subscription in subscriptions:
sub_model_id = subscription.entity.asset.asset_id
if sub_model_id == model_uid:
wos_client.subscriptions.delete(subscription.metadata.id)
print('Deleted existing subscription for model', sub_model_id)
```
### Create the model subscription
This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself.
```
feature_cols=feature_data.columns.tolist()
#categorical_cols=X.select_dtypes(include=['object']).columns
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import ScoringEndpointRequest
subscription_details = wos_client.subscriptions.add(
data_mart_id=data_mart_id,
service_provider_id=service_provider_id,
asset=Asset(
asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"],
name=model_asset_details_from_deployment["entity"]["asset"]["name"],
url=model_asset_details_from_deployment["entity"]["asset"]["url"],
asset_type=AssetTypes.MODEL,
input_data_type=InputDataType.STRUCTURED,
problem_type=ProblemType.REGRESSION
),
deployment=AssetDeploymentRequest(
deployment_id=asset_deployment_details['metadata']['guid'],
name=asset_deployment_details['entity']['name'],
deployment_type= DeploymentTypes.ONLINE,
url=asset_deployment_details['metadata']['url'],
scoring_endpoint=ScoringEndpointRequest(url=scoring_url) # scoring model without shadow deployment
),
asset_properties=AssetPropertiesRequest(
label_column='SalePrice',
prediction_field='prediction',
feature_fields = feature_cols,
#categorical_fields = categorical_cols,
training_data_reference=TrainingDataReference(type='cos',
location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME,
file_name = training_data_file_name),
connection=COSTrainingDataReferenceConnection.from_dict({
"resource_instance_id": COS_RESOURCE_CRN,
"url": COS_ENDPOINT,
"api_key": COS_API_KEY_ID,
"iam_url": IAM_URL}))
),background_mode = False
).result
subscription_id = subscription_details.metadata.id
subscription_id
import time
time.sleep(5)
payload_data_set_id = None
payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id
if payload_data_set_id is None:
print("Payload data set not found. Please check subscription status.")
else:
print("Payload data set id: ", payload_data_set_id)
wos_client.data_sets.show()
```
Get subscription list
```
wos_client.subscriptions.show()
```
### Score the model so we can configure monitors
```
import random
fields = feature_data.columns.tolist()
values = random.sample(test_X.tolist(), 2)
scoring_payload = {"input_data": [{"fields": fields, "values": values}]}
predictions = wml_client.deployments.score(deployment_uid, scoring_payload)
print("Single record scoring result:", "\n fields:", predictions["predictions"][0]["fields"], "\n values: ", predictions["predictions"][0]["values"][0])
```
## Check whether WML payload logging worked; otherwise, store payload records manually
```
import uuid
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
if pl_records_count == 0:
print("Payload logging did not happen, performing explicit payload logging.")
wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
scoring_id=str(uuid.uuid4()),
request=scoring_payload,
response={"fields": predictions['predictions'][0]['fields'], "values":predictions['predictions'][0]['values']},
response_time=460
)],background_mode=False)
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
wos_client.data_sets.show_records(payload_data_set_id)
```
# Quality monitoring and feedback logging <a name="quality"></a>
## Enable quality monitoring
```
import time
time.sleep(10)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"min_feedback_data_size": 50
}
quality_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID,
target=target,
parameters=parameters
).result
quality_monitor_instance_id = quality_monitor_details.metadata.id
quality_monitor_instance_id
```
## Feedback logging
The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface.
### Get feedback logging dataset ID
```
feedback_dataset_id = None
feedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result
print(feedback_dataset)
feedback_dataset_id = feedback_dataset.data_sets[0].metadata.id
if feedback_dataset_id is None:
print("Feedback data set not found. Please check quality monitor status.")
!rm custom_feedback_50_regression.json
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/house_price/custom_feedback_50_regression.json
with open ('custom_feedback_50_regression.json')as file:
feedback_data=json.load(file)
wos_client.data_sets.store_records(feedback_dataset_id, request_body=feedback_data, background_mode=False)
wos_client.data_sets.get_records_count(data_set_id=feedback_dataset_id)
run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result
wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id)
```
# Fairness, drift monitoring and explanations <a name="fairness"></a>
### Fairness configuration
The code below configures fairness monitoring for our model. It turns on monitoring for one feature, MSSubClass. For each monitored feature, we must specify:
* Which model feature to monitor
* One or more **majority** groups, which are values of that feature that we expect to receive a higher percentage of favorable outcomes
* One or more **minority** groups, which are values of that feature that we expect to receive a higher percentage of unfavorable outcomes
* The threshold below which OpenScale should display a fairness alert (in this case, 80%)
Additionally, we must specify which outcomes from the model are favourable, and which are unfavourable. We must also provide the number of records OpenScale will use to calculate the fairness score. In this case, OpenScale's fairness monitor will run hourly, but will not calculate a new fairness rating until at least 50 records have been added. Finally, to calculate fairness, OpenScale performs some calculations on the training data, which was referenced when the subscription was created.
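To make the 80% threshold concrete, here is a minimal illustrative sketch. It is not an OpenScale API call; it assumes the fairness score is computed as the ratio of favorable-outcome rates between the monitored minority and majority groups, and the counts are hypothetical:
```
# Illustrative only -- hypothetical counts, not values produced by OpenScale.
majority_total, majority_favorable = 100, 45
minority_total, minority_favorable = 100, 30
majority_rate = majority_favorable / majority_total
minority_rate = minority_favorable / minority_total
fairness_score = minority_rate / majority_rate
print(f"Fairness score: {fairness_score:.2f}")  # 0.67 is below the 0.8 threshold, so an alert would be raised
```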
```
wos_client.monitor_instances.show()
#wos_client.monitor_instances.delete(drift_monitor_instance_id,background_mode=False)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"features": [
{
"feature": "MSSubClass",
"majority": [[50,70]],
"threshold": 0.8,
"minority": [[80,100]]
}
],
"favourable_class": [[200000,500000]],
"unfavourable_class": [[35000,100000]],
"min_records": 50
}
fairness_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,
target=target,
parameters=parameters).result
fairness_monitor_instance_id =fairness_monitor_details.metadata.id
fairness_monitor_instance_id
```
### Drift configuration
#### Note: you can choose to enable/disable (True or False) model or data drift within config
```
monitor_instances = wos_client.monitor_instances.list().result.monitor_instances
for monitor_instance in monitor_instances:
monitor_def_id=monitor_instance.entity.monitor_definition_id
if monitor_def_id == "drift" and monitor_instance.entity.target.target_id == subscription_id:
wos_client.monitor_instances.delete(monitor_instance.metadata.id)
print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"min_samples": 50,
"drift_threshold": 0.1,
"train_drift_model": True,
"enable_model_drift": True,
"enable_data_drift": True
}
drift_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,
target=target,
parameters=parameters
).result
drift_monitor_instance_id = drift_monitor_details.metadata.id
drift_monitor_instance_id
```
## Score the model again now that monitoring is configured
This next section sends the records from a prepared scoring payload file to the model for predictions. This is enough to exceed the minimum record threshold set in the previous section, which allows OpenScale to begin calculating fairness.
```
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/house_price/custom_scoring_payloads_50_regression.json
with open('custom_scoring_payloads_50_regression.json', 'r') as scoring_file:
scoring_data = json.load(scoring_file)
fields = scoring_data[0]['request']['fields']
values = scoring_data[0]['request']['values']
payload_scoring = {"input_data": [{"fields": fields, "values": values}]}
scoring_response = wml_client.deployments.score(deployment_uid, payload_scoring)
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
if pl_records_count == 2:
print("Payload logging did not happen, performing explicit payload logging.")
wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
scoring_id=str(uuid.uuid4()),
request=payload_scoring,
response={"fields": scoring_response['predictions'][0]['fields'], "values":scoring_response['predictions'][0]['values']},
response_time=460
)])
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
print('Number of records in payload table: ', wos_client.data_sets.get_records_count(data_set_id=payload_data_set_id))
```
## Run fairness monitor
Kick off a fairness monitor run on current data. The monitor runs hourly, but can be manually initiated using the Python client, the REST API, or the graphical user interface.
```
run_details = wos_client.monitor_instances.run(monitor_instance_id=fairness_monitor_instance_id, background_mode=False)
time.sleep(10)
wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id)
```
## Run drift monitor
Kick off a drift monitor run on current data. The monitor runs every hour, but can be manually initiated using the Python client or the REST API.
```
drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False)
time.sleep(5)
wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id)
```
## Configure Explainability
Finally, we provide OpenScale with the training data to enable and configure the explainability features.
```
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"enabled": True
}
explainability_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,
target=target,
parameters=parameters
).result
explainability_monitor_id = explainability_details.metadata.id
```
## Run explanation for sample record
```
pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result
scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]]
print("Running explanations on scoring IDs: {}".format(scoring_ids))
explanation_types = ["lime", "contrastive"]
result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result
print(result)
explanation_task_id=result.to_dict()['metadata']['explanation_task_ids'][0]
explanation_task_id
wos_client.monitor_instances.get_explanation_tasks(explanation_task_id=explanation_task_id).result.to_dict()
```
## Additional data to help debugging
```
print('Datamart:', data_mart_id)
print('Model:', model_uid)
print('Deployment:', deployment_uid)
```
## Identify transactions for Explainability
Transaction IDs identified by the cells below can be copied and pasted into the Explainability tab of the OpenScale dashboard.
```
wos_client.data_sets.show_records(payload_data_set_id, limit=5)
```
## Congratulations!
You have finished the hands-on lab for IBM Watson OpenScale. You can now view the [OpenScale Dashboard](https://aiopenscale.cloud.ibm.com/). Click on the tile for the House Price Regression model to see fairness, accuracy, and performance monitors. Click on the timeseries graph to get detailed information on transactions during a specific time window.
| github_jupyter |
# This notebook uses a session event dataset from an e-commerce website (https://www.kaggle.com/mkechinov/ecommerce-behavior-data-from-multi-category-store and https://rees46.com/) to build an outlier detection model based on an autoencoder.
```
import mlflow
import numpy as np
import os
import shutil
import pandas as pd
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow_hub as hub
from itertools import product
# enable gpu growth if gpu is available
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
for device in gpu_devices:
tf.config.experimental.set_memory_growth(device, True)
# tf.keras.mixed_precision.set_global_policy('mixed_float16')
tf.config.optimizer.set_jit(True)
%load_ext watermark
%watermark -v -iv
```
## Setting Registry and Tracking URI for MLflow
```
# Use this registry uri when mlflow is created by docker container with a mysql db backend
#registry_uri = os.path.expandvars('mysql+pymysql://${MYSQL_USER}:${MYSQL_PASSWORD}@localhost:3306/${MYSQL_DATABASE}')
# Use this registry uri when mlflow is running locally by the command:
# "mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root ./mlruns --host 0.0.0.0"
registry_uri = 'sqlite:///mlflow.db'
tracking_uri = 'http://localhost:5000'
mlflow.tracking.set_registry_uri(registry_uri)
mlflow.tracking.set_tracking_uri(tracking_uri)
```
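As a quick sanity check (not part of the original notebook), you can start a short dummy run to confirm that the tracking server configured above is reachable:
```
# Connectivity check against the tracking server set above; the run name is arbitrary.
print("Tracking URI:", mlflow.get_tracking_uri())
with mlflow.start_run(run_name="connectivity_check"):
    mlflow.log_param("check", "ok")
```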
# The Data is taken from https://www.kaggle.com/mkechinov/ecommerce-behavior-data-from-multi-category-store and https://rees46.com/
## Each record/line in the file has the following fields:
1. event_time: When the event happened (UTC)
2. event_type: Event type: one of [view, cart, remove_from_cart, purchase]
3. product_id
4. category_id
5. category_code: Category meaningful name (if present)
6. brand: Brand name in lower case (if present)
7. price
8. user_id: Permanent user ID
9. user_session: User session ID
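As an optional check that is not part of the original flow, you could peek at the first few raw rows to confirm these columns before the chunked processing below:
```
# Hypothetical quick look at the raw export; only reads the first five rows.
preview = pd.read_csv("2019-Dec.csv", nrows=5)
print(preview.columns.tolist())
preview.head()
```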
```
# Read the first 500,000 rows
for chunk in pd.read_table("2019-Dec.csv",
sep=",", header=0,
infer_datetime_format=True, low_memory=False, chunksize=500000):
# Filter out other event types than 'view'
chunk = chunk[chunk['event_type'] == 'view']
# Filter out missing 'category_code' rows
chunk = chunk[chunk['category_code'].isna() == False]
chunk.reset_index(drop=True, inplace=True)
# Filter out all Sessions of length 1
count_sessions = chunk.groupby('user_session').count()
window_length = count_sessions.max()[0]
unique_sessions = [count_sessions.index[i] for i in range(
count_sessions.shape[0]) if count_sessions.iloc[i, 0] == 1]
chunk = chunk[~chunk['user_session'].isin(unique_sessions)]
chunk.reset_index(drop=True, inplace=True)
# Text embedding based on https://tfhub.dev/google/nnlm-en-dim50/2
last_category = []
for i, el in enumerate(chunk['category_code']):
last_category.append(el.split('.')[-1])
chunk['Product'] = last_category
embed = hub.load("https://tfhub.dev/google/nnlm-en-dim50/2")
embeddings = embed(chunk['Product'].tolist())
for dim in range(embeddings.shape[1]):
chunk['embedding_'+str(dim)] = embeddings[:, dim]
# Standardization
mean = chunk['price'].mean(axis=0)
print('Mean:', mean)
std = chunk['price'].std(axis=0)
print('Std:', std)
chunk['price_standardized'] = (chunk['price'] - mean) / std
chunk.sort_values(by=['user_session', 'event_time'], inplace=True)
chunk['price_standardized'] = chunk['price_standardized'].astype('float32')
chunk['product_id'] = chunk['product_id'].astype('int32')
chunk.reset_index(drop=True, inplace=True)
print('Sessions:', pd.unique(chunk['user_session']).shape)
print('Unique Products:', pd.unique(chunk['product_id']).shape)
print('Unique category_code:', pd.unique(chunk['category_code']).shape)
columns = ['embedding_'+str(i) for i in range(embeddings.shape[1])]
columns.append('price_standardized')
columns.append('user_session')
columns.append('Product')
columns.append('product_id')
columns.append('category_code')
df = chunk[columns]
break
df
```
## Identify Products with 6 or Fewer Occurrences
```
count_product_id_mapped = df.groupby('product_id').count()
products_to_delete = count_product_id_mapped.loc[count_product_id_mapped['embedding_0'] <= 6].index
products_to_delete
```
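The rare products identified above are not dropped from `df` immediately; they are used further below to filter out the affected sessions. As a rough sketch (an assumption for illustration, not the notebook's actual flow), dropping those rows directly would look like this:
```
# Hypothetical direct row filter -- the notebook instead removes whole sessions later.
df_without_rare = df[~df['product_id'].isin(products_to_delete)]
print("Rows before:", df.shape[0], "after:", df_without_rare.shape[0])
```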
## Slice Sessions from the Dataframe
```
list_sessions = []
list_last_clicked = []
list_last_clicked_temp = []
current_id = df.loc[0, 'user_session']
current_index = 0
columns = ['embedding_'+str(i) for i in range(embeddings.shape[1])]
columns.append('price_standardized')
columns.insert(0, 'product_id')
for i in range(df.shape[0]):
if df.loc[i, 'user_session'] != current_id:
list_sessions.append(df.loc[current_index:i-2, columns])
list_last_clicked.append(df.loc[i-1, 'product_id'])
list_last_clicked_temp.append(df.loc[i-1, columns])
current_id = df.loc[i, 'user_session']
current_index = i
```
## Delete Sessions with Length larger than 30
```
print(len(list_sessions))
list_sessions_filtered = []
list_last_clicked_filtered = []
list_last_clicked_temp_filtered = []
for index, session in enumerate(list_sessions):
if not (session.shape[0] > 30):
if not (session['product_id'].isin(products_to_delete).any()):
list_sessions_filtered.append(session)
list_last_clicked_filtered.append(list_last_clicked[index])
list_last_clicked_temp_filtered.append(list_last_clicked_temp[index])
len(list_sessions_filtered)
```
## Trim Sessions Whose Last Product Equals the Label
Example:
- From: session: [ 1506 1506 11410 11410 2826 2826], ground truth: 2826
- To: session: [ 1506 1506 11410 11410], ground truth: 2826
```
print("Length before", len(list_sessions_filtered))
list_sessions_processed = []
list_last_clicked_processed = []
list_session_processed_autoencoder = []
for i, session in enumerate(list_sessions_filtered):
if session['product_id'].values[-1] == list_last_clicked_filtered[i]:
mask = session['product_id'].values == list_last_clicked_filtered[i]
if session[~mask].shape[0] > 0:
list_sessions_processed.append(session[~mask])
list_last_clicked_processed.append(list_last_clicked_filtered[i])
list_session_processed_autoencoder.append(pd.concat([session[~mask], pd.DataFrame(list_last_clicked_temp_filtered[i]).T],
ignore_index=True))
else:
list_sessions_processed.append(session)
list_last_clicked_processed.append(list_last_clicked_filtered[i])
list_session_processed_autoencoder.append(pd.concat([session, pd.DataFrame(list_last_clicked_temp_filtered[i]).T],
ignore_index=True))
print("Length after", len(list_sessions_processed))
```
## Create Item IDs starting from value 1 for Embeddings and One Hot Layer
```
mapping = pd.read_csv('../ID_Mapping.csv')[['Item_ID', 'Mapped_ID']]
dict_items = mapping.set_index('Item_ID').to_dict()['Mapped_ID']
for index, session in enumerate(list_session_processed_autoencoder):
session['product_id'] = session['product_id'].map(dict_items)
# Pad all Sessions with 0. Embedding Layer and LSTM will use Masking to ignore zeros.
list_sessions_padded = []
window_length = 31
for df in list_session_processed_autoencoder:
np_array = df.values
result = np.zeros((window_length, 1), dtype=np.float32)
result[:np_array.shape[0],:1] = np_array[:,:1]
list_sessions_padded.append(result)
# Save the results, because the slicing can take some time
np.save('list_sessions_padded_autoencoder.npy', list_sessions_padded)
sessions_padded = np.array(list_sessions_padded)
n_output_features = int(sessions_padded.max())
n_unique_input_ids = int(sessions_padded.max())
window_length = sessions_padded.shape[1]
n_input_features = sessions_padded.shape[2]
print("n_output_features", n_output_features)
print("n_unique_input_ids", n_unique_input_ids)
print("window_length", window_length)
print("n_input_features", n_input_features)
```
# Training: Start here if the preprocessing was already executed
```
sessions_padded = np.load('list_sessions_padded_autoencoder.npy')
print(sessions_padded.shape)
n_output_features = int(sessions_padded.max())
n_unique_input_ids = int(sessions_padded.max())
window_length = sessions_padded.shape[1]
n_input_features = sessions_padded.shape[2]
```
## Grid Search Hyperparameter
Dictionary with different hyperparameters to train on.
MLflow will track those in a database.
```
grid_search_dic = {'hidden_layer_size': [300],
'batch_size': [32],
'embedding_dim': [200],
'window_length': [window_length],
'dropout_fc': [0.0], #0.2
'n_output_features': [n_output_features],
'n_input_features': [n_input_features]}
# Cartesian product
grid_search_param = [dict(zip(grid_search_dic, v)) for v in product(*grid_search_dic.values())]
grid_search_param
```
### LSTM Autoencoder in functional API
- Input: x rows (time steps) of Item IDs in a Session
- Output: reconstructed Session
```
def build_autoencoder(window_length=50,
units_lstm_layer=100,
n_unique_input_ids=0,
embedding_dim=200,
n_input_features=1,
n_output_features=3,
dropout_rate=0.1):
inputs = keras.layers.Input(
shape=[window_length, n_input_features], dtype=np.float32)
# Encoder
# Embedding Layer
embedding_layer = tf.keras.layers.Embedding(
n_unique_input_ids+1, embedding_dim, input_length=window_length) # , mask_zero=True)
embeddings = embedding_layer(inputs[:, :, 0])
mask = inputs[:, :, 0] != 0
# LSTM Layer 1
lstm1_output, lstm1_state_h, lstm1_state_c = keras.layers.LSTM(units=units_lstm_layer, return_state=True,
return_sequences=True)(embeddings, mask=mask)
lstm1_state = [lstm1_state_h, lstm1_state_c]
# Decoder
# input: lstm1_state_c, lstm1_state_h
decoder_state_c = lstm1_state_c
decoder_state_h = lstm1_state_h
decoder_outputs = tf.expand_dims(lstm1_state_h, 1)
list_states = []
decoder_layer = keras.layers.LSTM(
units=units_lstm_layer, return_state=True, return_sequences=True, unroll=False)
for i in range(window_length):
decoder_outputs, decoder_state_h, decoder_state_c = decoder_layer(decoder_outputs,
initial_state=[decoder_state_h,
decoder_state_c])
list_states.append(decoder_state_h)
stacked = tf.stack(list_states, axis=1)
fc_layer = tf.keras.layers.Dense(
n_output_features+1, kernel_initializer='he_normal')
fc_layer_output = tf.keras.layers.TimeDistributed(fc_layer)(
stacked, mask=mask)
mask_softmax = tf.tile(tf.expand_dims(mask, axis=2),
[1, 1, n_output_features+1])
softmax = tf.keras.layers.Softmax(axis=2, dtype=tf.float32)(
fc_layer_output, mask=mask_softmax)
model = keras.models.Model(inputs=[inputs],
outputs=[softmax])
return model
```
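As a quick shape check (not part of the original flow, which builds the model inside the MLflow loop further below), the architecture can be instantiated with small layer sizes and summarized:
```
# Hypothetical smoke test using the dimensions derived from the padded sessions.
demo_model = build_autoencoder(window_length=window_length,
                               units_lstm_layer=32,
                               n_unique_input_ids=n_unique_input_ids,
                               embedding_dim=16,
                               n_input_features=n_input_features,
                               n_output_features=n_output_features)
demo_model.summary()
```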
### Convert Numpy Array to tf.data.Dataset for better training performance
The function returns a zipped, batched `tf.data.Dataset`. For the padded sessions used here, its elements have the following shapes:
- x: (batch_size, window_length, n_input_features)
- y: (batch_size, window_length, n_input_features)
```
def array_to_tf_data_api(train_data_x, train_data_y, batch_size=64, window_length=50,
validate=False):
"""Applies sliding window on the fly by using the TF Data API.
Args:
train_data_x: Input Data as Numpy Array, Shape (rows, n_features)
batch_size: Batch Size.
window_length: Window Length or Window Size.
future_length: Number of time steps that will be predicted in the future.
n_output_features: Number of features that will be predicted.
validate: True if input data is a validation set and does not need to be shuffled
shift: Shifts the Sliding Window by this Parameter.
Returns:
tf.data.Dataset
"""
X = tf.data.Dataset.from_tensor_slices(train_data_x)
y = tf.data.Dataset.from_tensor_slices(train_data_y)
if not validate:
train_tf_data = tf.data.Dataset.zip((X, y)).cache() \
.shuffle(buffer_size=200000, reshuffle_each_iteration=True)\
.batch(batch_size).prefetch(1)
return train_tf_data
else:
return tf.data.Dataset.zip((X, y)).batch(batch_size)\
.prefetch(1)
```
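For illustration (this mirrors the call made later in the training loop), the dataset can be built from a small slice of the padded sessions and its element spec inspected to confirm the shapes:
```
# Usage check on a small slice; both inputs and targets are the padded sessions.
sample_ds = array_to_tf_data_api(sessions_padded[:128], sessions_padded[:128],
                                 batch_size=32, window_length=window_length)
print(sample_ds.element_spec)
for x_batch, y_batch in sample_ds.take(1):
    print(x_batch.shape, y_batch.shape)
```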
## Custom TF Callback to log Metrics by MLflow
```
class MlflowLogging(tf.keras.callbacks.Callback):
def __init__(self, **kwargs):
super().__init__() # handles base args (e.g., dtype)
def on_epoch_end(self, epoch, logs=None):
keys = list(logs.keys())
for key in keys:
mlflow.log_metric(str(key), logs.get(key), step=epoch)
class CustomCategoricalCrossentropy(keras.losses.Loss):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.bce = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=False, reduction='sum')
@tf.function
def call(self, y_true, y_pred):
total = 0.0
for i in tf.range(y_pred.shape[1]):
loss = self.bce(y_true[:, i, 0], y_pred[:, i, :])
total = total + loss
return total
def get_config(self):
base_config = super().get_config()
return {**base_config}
    @classmethod
    def from_config(cls, config):
        return cls(**config)
class CategoricalAccuracy(keras.metrics.Metric):
def __init__(self, name="categorical_accuracy", **kwargs):
super(CategoricalAccuracy, self).__init__(name=name, **kwargs)
self.true = self.add_weight(name="true", initializer="zeros")
self.count = self.add_weight(name="count", initializer="zeros")
self.accuracy = self.add_weight(name="count", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_true = tf.cast(y_true, "float32")
y_pred = tf.cast(y_pred, "float32")
mask = y_true[:, :, 0] != 0
argmax = tf.cast(tf.argmax(y_pred, axis=2), "float32")
temp = argmax == y_true[:, :, 0]
true = tf.reduce_sum(tf.cast(temp[mask], dtype=tf.float32))
self.true.assign_add(true)
self.count.assign_add(
tf.cast(tf.shape(temp[mask])[0], dtype="float32"))
self.accuracy.assign(tf.math.divide(self.true, self.count))
def result(self):
return self.accuracy
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.accuracy.assign(0.0)
class CategoricalSessionAccuracy(keras.metrics.Metric):
def __init__(self, name="categorical_session_accuracy", **kwargs):
super(CategoricalSessionAccuracy, self).__init__(name=name, **kwargs)
self.true = self.add_weight(name="true", initializer="zeros")
self.count = self.add_weight(name="count", initializer="zeros")
self.accuracy = self.add_weight(name="count", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_true = tf.cast(y_true, "float32")
y_pred = tf.cast(y_pred, "float32")
mask = y_true[:, :, 0] != 0
argmax = tf.cast(tf.argmax(y_pred, axis=2), "float32")
temp = argmax == y_true[:, :, 0]
temp = tf.reduce_all(temp, axis=1)
true = tf.reduce_sum(tf.cast(temp, dtype=tf.float32))
self.true.assign_add(true)
self.count.assign_add(tf.cast(tf.shape(temp)[0], dtype="float32"))
self.accuracy.assign(tf.math.divide(self.true, self.count))
def result(self):
return self.accuracy
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.accuracy.assign(0.0)
```
# Training
```
with mlflow.start_run() as parent_run:
for params in grid_search_param:
batch_size = params['batch_size']
window_length = params['window_length']
embedding_dim = params['embedding_dim']
dropout_fc = params['dropout_fc']
hidden_layer_size = params['hidden_layer_size']
n_output_features = params['n_output_features']
n_input_features = params['n_input_features']
with mlflow.start_run(nested=True) as child_run:
# log parameter
mlflow.log_param('batch_size', batch_size)
mlflow.log_param('window_length', window_length)
mlflow.log_param('hidden_layer_size', hidden_layer_size)
mlflow.log_param('dropout_fc_layer', dropout_fc)
mlflow.log_param('embedding_dim', embedding_dim)
mlflow.log_param('n_output_features', n_output_features)
mlflow.log_param('n_unique_input_ids', n_unique_input_ids)
mlflow.log_param('n_input_features', n_input_features)
model = build_autoencoder(window_length=window_length,
n_output_features=n_output_features,
n_unique_input_ids=n_unique_input_ids,
n_input_features=n_input_features,
embedding_dim=embedding_dim,
units_lstm_layer=hidden_layer_size,
dropout_rate=dropout_fc)
data = array_to_tf_data_api(sessions_padded,
sessions_padded,
window_length=window_length,
batch_size=batch_size)
model.compile(loss=CustomCategoricalCrossentropy(),#tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False, reduction='sum'),
optimizer=keras.optimizers.Nadam(learning_rate=1e-3),
metrics=[CategoricalAccuracy(), CategoricalSessionAccuracy()])
model.fit(data, shuffle=True, initial_epoch=0, epochs=20,
callbacks=[MlflowLogging()])
model.compile()
model.save("./tmp")
model.save_weights('weights')
mlflow.tensorflow.log_model(tf_saved_model_dir='./tmp',
tf_meta_graph_tags='serve',
tf_signature_def_key='serving_default',
artifact_path='saved_model',
registered_model_name='Session Based LSTM Recommender')
shutil.rmtree("./tmp")
```
| github_jupyter |
<a href="https://colab.research.google.com/github/rwarnung/datacrunch-notebooks/blob/master/dcrunch_R_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Data crunch example R script**
---
author: sweet-richard
date: Jan 30, 2022
required packages:
* `tidyverse` for data handling
* `feather` for efficient loading of data
* `xgboost` for predictive modelling
* `httr` for the automatic upload.
```
library(tidyverse)
library(feather)
```
First, we set some **parameters**.
* `is_download` controls whether you want to download data or just read previously downloaded data
* `is_upload` set this to TRUE for automatic upload.
* `nrounds` is a parameter for `xgboost` that we set to 300 for illustration. You might want to adjust this and the other xgboost parameters.
```
#' ## Parameters
file_name_train = "train_data.feather"
file_name_test ="test_data.feather"
is_download = TRUE # set this to true to download new data or to FALSE to load data in feather format
is_upload = FALSE # set this to true to upload a submission
nrounds = 300 # you might want to adjust this one and other parameters of xgboost
```
In the **functions** section we define the correlation measure that we use to evaluate performance.
```
#' ## Functions
#+
getCorrMeasure = function(actual, predicted) {
cor_measure = cor(actual, predicted, method="spearman")
return(cor_measure)
}
```
Now, we either **download** the current data from the servers or load them in feather format. Furthermore, we define the features that we actually want to use. In this illustration we use all of them but `id` and `Moons`.
```
#' ## Download data
#' after the download, data is stored in feather format to be read on demand quickly. Data is stored in integer format to save memory.
#+
if( is_download ) {
cat("\n start download")
train_datalink_X = 'https://tournament.datacrunch.com/data/X_train.csv'
train_datalink_y = 'https://tournament.datacrunch.com/data/y_train.csv'
hackathon_data_link = 'https://tournament.datacrunch.com/data/X_test.csv'
train_dataX = read_csv(url(train_datalink_X))
train_dataY = read_csv(url(train_datalink_y))
test_data = read_csv(url(hackathon_data_link))
train_data =
bind_cols( train_dataX, train_dataY)
train_data = train_data %>% mutate_at(vars(starts_with("feature_")), list(~as.integer(.*100)))
feather::write_feather(train_data, path = paste0("./", file_name_train))
test_data = test_data %>% mutate_at(vars(starts_with("feature_")), list(~as.integer(.*100)))
feather::write_feather(test_data, path = paste0("./", file_name_test))
names(train_data)
nrow(train_data)
nrow(test_data)
cat("\n data is downloaded")
} else {
train_data = feather::read_feather(path = paste0("./", file_name_train))
test_data = feather::read_feather(path = paste0("./", file_name_test))
}
## set vars used for modelling
model_vars = setdiff(names(test_data), c("id","Moons"))
```
Next we fit our go-to algorithm **xgboost** with mainly default parameters, only `eta` and `max_depth` are set.
```
#' ## Fit xgboost
#+ cache = TRUE
library(xgboost, warn.conflicts = FALSE)
# custom loss function for eval
corrmeasure <- function(preds, dtrain) {
labels <- getinfo(dtrain, "label")
corrm <- as.numeric(cor(labels, preds, method="spearman"))
return(list(metric = "corr", value = corrm))
}
eval_metric_string = "rmse"
my_objective = "reg:squarederror"
tree.params = list(
booster = "gbtree", eta = 0.01, max_depth = 5,
tree_method = "hist", # tree_method = "auto",
objective = my_objective)
cat("\n starting xgboost \n")
```
**First target** `target_r`
```
# first target target_r then g and b
################
current_target = "target_r"
dtrain = xgb.DMatrix(train_data %>% select(one_of(model_vars)) %>% as.matrix(), label = train_data %>% select(one_of(current_target)) %>% as.matrix())
xgb.model.tree = xgb.train(data = dtrain,
params = tree.params, nrounds = nrounds, verbose = 1,
print_every_n = 50L, eval_metric = corrmeasure)
xgboost_tree_train_pred1 = predict(xgb.model.tree, train_data %>% select(one_of(model_vars)) %>% as.matrix())
xgboost_tree_live_pred1 = predict(xgb.model.tree, test_data %>% select(one_of(model_vars)) %>% as.matrix())
cor_train = getCorrMeasure(train_data %>% select(one_of(current_target)), xgboost_tree_train_pred1)
cat("\n : metric: ", eval_metric_string, "\n")
print(paste0("Corrm on train: ", round(cor_train,4)))
print(paste("xgboost", current_target, "ready"))
```
**Second target** `target_g`
```
# second target target_g
################
current_target = "target_g"
dtrain = xgb.DMatrix(train_data %>% select(one_of(model_vars)) %>% as.matrix(), label = train_data %>% select(one_of(current_target)) %>% as.matrix())
xgb.model.tree = xgb.train(data = dtrain,
params = tree.params, nrounds = nrounds, verbose = 1,
print_every_n = 50L, eval_metric = corrmeasure)
xgboost_tree_train_pred2 = predict(xgb.model.tree, train_data %>% select(one_of(model_vars)) %>% as.matrix())
xgboost_tree_live_pred2 = predict(xgb.model.tree, test_data %>% select(one_of(model_vars)) %>% as.matrix())
cor_train = getCorrMeasure(train_data %>% select(one_of(current_target)), xgboost_tree_train_pred2)
cat("\n : metric: ", eval_metric_string, "\n")
print(paste0("Corrm on train: ", round(cor_train,4)))
print(paste("xgboost", current_target, "ready"))
```
**Third target** `target_b`
```
# third target target_b
################
current_target = "target_b"
dtrain = xgb.DMatrix(train_data %>% select(one_of(model_vars)) %>% as.matrix(), label = train_data %>% select(one_of(current_target)) %>% as.matrix())
xgb.model.tree = xgb.train(data = dtrain,
params = tree.params, nrounds = nrounds, verbose = 1,
print_every_n = 50L, eval_metric = corrmeasure)
xgboost_tree_train_pred3 = predict(xgb.model.tree, train_data %>% select(one_of(model_vars)) %>% as.matrix())
xgboost_tree_live_pred3 = predict(xgb.model.tree, test_data %>% select(one_of(model_vars)) %>% as.matrix())
cor_train = getCorrMeasure(train_data %>% select(one_of(current_target)), xgboost_tree_train_pred3)
cat("\n : metric: ", eval_metric_string, "\n")
print(paste0("Corrm on train: ", round(cor_train,4)))
print(paste("xgboost", current_target, "ready"))
```
Then we produce simple histogram plots to check whether the predictions are plausible, and prepare a **submission file**:
```
#' ## Submission
#' simple histograms to check the submissions
#+
hist(xgboost_tree_live_pred1)
hist(xgboost_tree_live_pred2)
hist(xgboost_tree_live_pred3)
#' create submission file
#+
sub_df = tibble(target_r = xgboost_tree_live_pred1,
target_g = xgboost_tree_live_pred2,
target_b = xgboost_tree_live_pred3)
file_name_submission = paste0("gbTree_", gsub("-","",Sys.Date()), ".csv")
sub_df %>% readr::write_csv(file = paste0("./", file_name_submission))
nrow(sub_df)
cat("\n submission file written")
```
Finally, we can **automatically upload** the file to the server:
```
#' ## Upload submission
#+
if( is_upload ) {
library(httr)
API_KEY = "YourKeyHere"
response <- POST(
url = "https://tournament.crunchdao.com/api/v2/submissions",
query = list(apiKey = API_KEY),
body = list(
file = upload_file(path = paste0("./", file_name_submission))
),
encode = c("multipart")
);
status <- status_code(response)
if (status == 200) {
print("Submission submitted :)")
} else if (status == 400) {
print("ERR: The file must not be empty")
print("You have send a empty file.")
} else if (status == 401) {
print("ERR: Your email hasn't been verified")
print("Please verify your email or contact a cruncher.")
} else if (status == 403) {
print("ERR: Not authentified")
print("Is the API Key valid?")
} else if (status == 404) {
print("ERR: Unknown API Key")
print("You should check that the provided API key is valid and is the same as the one you've received by email.")
} else if (status == 409) {
print("ERR: Duplicate submission")
print("Your work has already been submitted with the same exact results, if you think that this a false positive, contact a cruncher.")
print("MD5 collision probability: 1/2^128 (source: https://stackoverflow.com/a/288519/7292958)")
} else if (status == 422) {
print("ERR: API Key is missing or empty")
print("Did you forget to fill the API_KEY variable?")
} else if (status == 423) {
print("ERR: Submissions are close")
print("You can only submit during rounds eg: Friday 7pm GMT+1 to Sunday midnight GMT+1.")
print("Or the server is currently crunching the submitted files, please wait some time before retrying.")
  } else if (status == 429) { # assumed: 429 for rate limiting; the original sample repeated 423 here
    print("ERR: Too many submissions")
} else {
content <- httr::content(response)
print("ERR: Server returned: " + toString(status))
print("Ouch! It seems that we were not expecting this kind of result from the server, if the probleme persist, contact a cruncher.")
print(paste("Message:", content$message, sep=" "))
}
# DEVELOPER WARNING:
  # THE API ERROR CODES WILL BE HANDLED DIFFERENTLY IN THE NEAR FUTURE!
# PLEASE STAY UPDATED BY JOINING THE DISCORD (https://discord.gg/veAtzsYn3M) AND READING THE NEWSLETTER EMAIL
}
```
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Terrain/srtm_mtpi.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Terrain/srtm_mtpi.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Datasets/Terrain/srtm_mtpi.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Terrain/srtm_mtpi.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
dataset = ee.Image('CSP/ERGo/1_0/Global/SRTM_mTPI')
srtmMtpi = dataset.select('elevation')
srtmMtpiVis = {
'min': -200.0,
'max': 200.0,
'palette': ['0b1eff', '4be450', 'fffca4', 'ffa011', 'ff0000'],
}
Map.setCenter(-105.8636, 40.3439, 11)
Map.addLayer(srtmMtpi, srtmMtpiVis, 'SRTM mTPI')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
| github_jupyter |
# Introduction to Data Science
See [Lesson 1](https://www.udacity.com/course/intro-to-data-analysis--ud170)
You should run this notebook in a local Jupyter environment, because it refers to a local dataset.
```
import unicodecsv
from datetime import datetime as dt
enrollments_filename = 'dataset/enrollments.csv'
engagement_filename = 'dataset/daily_engagement.csv'
submissions_filename = 'dataset/project_submissions.csv'
## Longer version of code (replaced with shorter, equivalent version below)
def read_csv(filename):
with open(filename, 'rb') as f:
reader = unicodecsv.DictReader(f)
return list(reader)
enrollments = read_csv(enrollments_filename)
daily_engagement = read_csv(engagement_filename)
project_submissions = read_csv(submissions_filename)
def renameKey(data, fromKey, toKey):
for rec in data:
if fromKey in rec:
rec[toKey] = rec[fromKey]
del rec[fromKey]
renameKey(daily_engagement, 'acct', 'account_key')
def cleanDataTypes():
def fixIntFloat(data, field):
if field not in data:
print(f'WARNING : Field {field} is not in {data}')
value = data[field]
if value == '':
data[field] = None
else:
data[field] = int(float(value))
def fixFloat(data, field):
if field not in data:
print(f'WARNING : Field {field} is not in {data}')
value = data[field]
if value == '':
data[field] = None
else:
data[field] = float(value)
def fixDate(data, field):
if field not in data:
print(f'WARNING : Field {field} is not in {data}')
value = data[field]
if value == '':
data[field] = None
else:
data[field] = dt.strptime(value, '%Y-%m-%d')
def fixBool(data, field):
if field not in data:
print(f'WARNING : Field {field} is not in {data}')
value = data[field]
if value == 'True':
data[field] = True
elif value == 'False':
data[field] = False
else:
print(f"WARNING: invalid boolean '{value}' value converted to False in {data}")
data[field] = False
def fixInt(data, field):
if field not in data:
print(f'WARNING : Field {field} is not in {data}')
value = data[field]
if value == '':
data[field] = None
else:
data[field] = int(value)
#clean data types
for rec in enrollments:
fixInt(rec, 'days_to_cancel')
fixDate(rec, 'join_date')
fixDate(rec, 'cancel_date')
fixBool(rec, 'is_udacity')
fixBool(rec, 'is_canceled')
for rec in daily_engagement:
fixDate(rec, 'utc_date')
fixIntFloat(rec, 'num_courses_visited')
fixFloat(rec, 'total_minutes_visited')
fixIntFloat(rec, 'lessons_completed')
fixIntFloat(rec, 'projects_completed')
for rec in project_submissions:
fixDate(rec, 'creation_date')
fixDate(rec, 'completion_date')
cleanDataTypes()
print(f"enrollments[0] = {enrollments[0]}\n")
print(f"daily_engagement[0] = {daily_engagement[0]}\n")
print(f"project_submissions[0] = {project_submissions[0]}\n")
from collections import defaultdict
def getUniqueAccounts(data):
accts = defaultdict(list)
i = 0
for record in data:
accountKey = record['account_key']
accts[accountKey].append(i)
i+=1
return accts
enrollment_num_rows = len(enrollments)
enrollment_unique_students = getUniqueAccounts(enrollments)
enrollment_num_unique_students = len(enrollment_unique_students)
engagement_num_rows = len(daily_engagement)
engagement_unique_students = getUniqueAccounts(daily_engagement)
engagement_num_unique_students = len(engagement_unique_students)
submission_num_rows = len(project_submissions)
submission_unique_students = getUniqueAccounts(project_submissions)
submission_num_unique_students = len(submission_unique_students)
print(f"enrollments total={enrollment_num_rows}, unique={enrollment_num_unique_students}")
print(f"engagements total={engagement_num_rows}, unique={engagement_num_unique_students}")
print(f"submissions total={submission_num_rows} unique={submission_num_unique_students}")
for enrollment_acct in enrollment_unique_students:
if enrollment_acct not in engagement_unique_students:
#print(enrollment_unique_students[enrollment])
enrollment_id = enrollment_unique_students[enrollment_acct][0]
enrollment = enrollments[enrollment_id]
print(f"Strange student : enrollment={enrollment}")
break
strange_enrollments_num_by_different_date = 0
for enrollment_acct in enrollment_unique_students:
if enrollment_acct not in engagement_unique_students:
for enrollment_id in enrollment_unique_students[enrollment_acct]:
enrollment = enrollments[enrollment_id]
if enrollment['join_date'] != enrollment['cancel_date']:
strange_enrollments_num_by_different_date += 1
#print(f"Strange student with different dates : enrollments[{enrollment_id}]={enrollment}\n")
print(f"Number of enrolled and cancelled at different dates but not engaged (problemactic accounts) : {strange_enrollments_num_by_different_date}\n")
num_problems = 0
for enrollment in enrollments:
student = enrollment['account_key']
if student not in engagement_unique_students and enrollment['join_date'] != enrollment['cancel_date']:
num_problems += 1
#print(enrollment)
print(f'Number of problematic account records : {num_problems}')
def getRealAccounts(enrollmentData):
result = []
for rec in enrollmentData:
if not rec['is_udacity']:
result.append(rec)
return result
real_enrollments = getRealAccounts(enrollments)
print(f'Real account : {len(real_enrollments)}')
def getPaidStudents(enrollmentData):
freePeriodDays = 7
result = {}
#result1 = {}
for rec in enrollmentData:
if rec['cancel_date'] == None or rec['days_to_cancel'] > freePeriodDays:
accountKey = rec['account_key']
joinDate = rec['join_date']
if accountKey not in result or joinDate > result[accountKey]:
result[accountKey] = joinDate
#result1[accountKey] = joinDate
'''
for accountKey, joinDate in result.items():
joinDate1 = result1[accountKey]
if joinDate != joinDate1:
print(f"{accountKey} : {joinDate} != {joinDate1}")
'''
return result
paid_students = getPaidStudents(real_enrollments)
print(f'Paid students : {len(paid_students)}')
def isEngagementWithingOneWeek(joinDate, engagementDate):
#if joinDate > engagementDate:
# print(f'WARNING: join date is after engagement date')
timeDelta = engagementDate - joinDate
return 0 <= timeDelta.days and timeDelta.days < 7
def collectPaidEnagagementsInTheFirstWeek():
result = []
i = 0
for engagement in daily_engagement:
accountKey = engagement['account_key']
if accountKey in paid_students:
joinDate = paid_students[accountKey]
engagementDate = engagement['utc_date']
if isEngagementWithingOneWeek(joinDate, engagementDate):
result.append(i)
i+=1
return result
paid_engagement_in_first_week = collectPaidEnagagementsInTheFirstWeek()
print(f'Number of paid engagements in the first week : {len(paid_engagement_in_first_week)}')
from collections import defaultdict
import numpy as np
def groupEngagementsByAccounts(engagements):
result = defaultdict(list)
for engagementId in engagements:
engagement = daily_engagement[engagementId]
accountKey = engagement['account_key']
result[accountKey].append(engagementId)
return result
first_week_paid_engagements_by_account = groupEngagementsByAccounts(paid_engagement_in_first_week)
def sumEngagementsStatByAccount(engagements, getStatValue):
result = {}
for accountKey, engagementIds in engagements.items():
stat_sum = 0
for engagementId in engagementIds:
engagement = daily_engagement[engagementId]
stat_sum += getStatValue(engagement)
result[accountKey] = stat_sum
return result
def printStats(getStatValue, statLabel):
first_week_paid_engagements_sum_stat_by_account = sumEngagementsStatByAccount(first_week_paid_engagements_by_account, getStatValue)
first_week_paid_engagements_sum_stat = list(first_week_paid_engagements_sum_stat_by_account.values())
print(f'Average {statLabel} spent by paid accounts during the first week : {np.mean(first_week_paid_engagements_sum_stat)}')
print(f'StdDev {statLabel} spent by paid accounts during the first week : {np.std(first_week_paid_engagements_sum_stat)}')
print(f'Min {statLabel} spent by paid accounts during the first week : {np.min(first_week_paid_engagements_sum_stat)}')
print(f'Max {statLabel} spent by paid accounts during the first week : {np.max(first_week_paid_engagements_sum_stat)}')
print('\n')
printStats((lambda data : data['total_minutes_visited']), 'minutes')
printStats((lambda data : data['lessons_completed']), 'lessons')
printStats((lambda data : 1 if data['num_courses_visited'] > 0 else 0), 'days')
######################################
# 11 #
######################################
## Create two lists of engagement data for paid students in the first week.
## The first list should contain data for students who eventually pass the
## subway project, and the second list should contain data for students
## who do not.
subway_project_lesson_keys = {'746169184', '3176718735'}
passing_grades = {'DISTINCTION', 'PASSED'} #{'', 'INCOMPLETE', 'DISTINCTION', 'PASSED', 'UNGRADED'}
#passing_grades = {'PASSED'} #{'', 'INCOMPLETE', 'DISTINCTION', 'PASSED', 'UNGRADED'}
passing_engagement = []
non_passing_engagement = []
for accountKey, engagementIds in first_week_paid_engagements_by_account.items():
if accountKey in submission_unique_students:
submissionIds = submission_unique_students[accountKey]
isPassed = False
for submissionId in submissionIds:
submission = project_submissions[submissionId]
if submission['assigned_rating'] in passing_grades and submission['lesson_key'] in subway_project_lesson_keys:
isPassed = True
break
if isPassed:
passing_engagement += engagementIds
else:
non_passing_engagement += engagementIds
else:
non_passing_engagement += engagementIds
print(f'First week engagements with passing grade : {len(passing_engagement)}')
print(f'First week engagements with non-passing grade : {len(non_passing_engagement)}')
######################################
# 12 #
######################################
## Compute some metrics you're interested in and see how they differ for
## students who pass the subway project vs. students who don't. A good
## starting point would be the metrics we looked at earlier (minutes spent
## in the classroom, lessons completed, and days visited).
passing_engagement_by_account = groupEngagementsByAccounts(passing_engagement)
non_passing_engagement_by_account = groupEngagementsByAccounts(non_passing_engagement)
def getArgStatEngagements(engagementIds, getStatValue):
stat_sum = 0
stat_num = 0
for engagementId in engagementIds:
engagement = daily_engagement[engagementId]
stat_sum += getStatValue(engagement)
stat_num += 1
if stat_num > 0:
return stat_sum / stat_num
else:
return 0
#sumEngagementsStatByAccount(first_week_paid_engagements_by_account, getStatValue)
passed_minutes = list(sumEngagementsStatByAccount(passing_engagement_by_account, (lambda data : data['total_minutes_visited'])).values())
non_passed_minutes = list(sumEngagementsStatByAccount(non_passing_engagement_by_account, (lambda data : data['total_minutes_visited'])).values())
passed_lessons = list(sumEngagementsStatByAccount(passing_engagement_by_account, (lambda data : data['lessons_completed'])).values())
non_passed_lessons = list(sumEngagementsStatByAccount(non_passing_engagement_by_account, (lambda data : data['lessons_completed'])).values())
passed_days = list(sumEngagementsStatByAccount(passing_engagement_by_account, (lambda data : 1 if data['num_courses_visited'] > 0 else 0)).values())
non_passed_days = list(sumEngagementsStatByAccount(non_passing_engagement_by_account, (lambda data : 1 if data['num_courses_visited'] > 0 else 0)).values())
print(f'Passed Avg Minutes = {np.mean(passed_minutes)}')
print(f'Non passed Avg Minutes = {np.mean(non_passed_minutes)}')
print(f'Passed Avg Lessons = {np.mean(passed_lessons)}')
print(f'Non passed Avg Lessons = {np.mean(non_passed_lessons)}')
print(f'Passed Avg Days = {np.mean(passed_days)}')
print(f'Non passed Avg Days = {np.mean(non_passed_days)}')
######################################
# 13 #
######################################
## Make histograms of the three metrics we looked at earlier for both
## students who passed the subway project and students who didn't. You
## might also want to make histograms of any other metrics you examined.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plt.hist(passed_minutes, color ='green')
plt.hist(non_passed_minutes, color ='lightblue')
plt.xlabel('Number of minutes')
plt.title('Passed (green) VS Non-passed (light-blue) students')
#sns.displot(passed_minutes, color ='green')
#sns.displot(non_passed_minutes, color ='lightblue')
plt.hist(passed_lessons, color ='green')
plt.hist(non_passed_lessons, color ='lightblue')
plt.xlabel('Number of lessons')
plt.title('Passed (green) VS Non-passed (light-blue) students')
plt.hist(passed_days, color ='green', bins = 8)
plt.xlabel('Number of days')
plt.title('Passed students')
plt.hist(non_passed_days, color ='lightblue', bins = 8)
plt.xlabel('Number of days')
plt.title('Non-passed students')
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import keras
from keras.models import Sequential,Model
from keras.layers import Dense, Dropout,BatchNormalization,Input
from keras.optimizers import RMSprop
from keras.regularizers import l2,l1
from keras.optimizers import Adam
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
from keras.callbacks import EarlyStopping
df = pd.read_csv("../../out_data/MLDB.csv")
first_gene_index = df.columns.get_loc("rrrD")
X, Y = np.split(df, [first_gene_index], axis=1)
X = X.values
X = X-0.5
Y1 = Y.values[:,1]
Y2 = Y.values[:,1]
X.shape
import collections
Model_setting = collections.namedtuple('Model_setting','num_layers num_node alpha drop_rate act_method lr regularization \
patience')
setting_ = [1,100, 0.5, 0.2, 'tanh', 0.01, 'l2', 3]
setting = Model_setting(*setting_)
setting = setting._asdict()
setting
def getModel(setting,num_input=84):
regularizer = l1(setting['alpha']) if setting['regularization']=='l1' else l2(setting['alpha'])
model = Sequential()
for i in range(setting['num_layers']):
if i==0:
model.add(Dense(setting['num_node'], input_shape=(num_input,), activation=setting['act_method'],\
kernel_regularizer = regularizer))
model.add(Dropout(setting['drop_rate']))
else:
model.add(Dense(setting['num_node']//(2**i), activation=setting['act_method']))
model.add(Dropout(setting['drop_rate']))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=setting['lr']), metrics=['accuracy'])
return model
num_output_ = 3
def create_model(num_input = 84,num_output = num_output_):
X_input = Input(shape=(num_input,))
X = Dense(64)(X_input)
X = Dropout(0.2)(X)
X = Dense(32)(X)
Ys= []
for i in range(num_output):
Ys.append(Dense(1, activation = 'sigmoid')(X))
model = Model(inputs=[X_input],outputs = Ys)
model.compile(loss=['binary_crossentropy']*num_output,loss_weights=[1.]*num_output,optimizer=Adam(lr=setting['lr']), metrics=['accuracy'])
return model
model = create_model()
callbacks = [EarlyStopping(monitor='loss',min_delta=0,patience=setting['patience'])]
ys = [*((Y.values).T[:num_output_])]
model.fit(X,ys,epochs = 50, verbose = 1,callbacks =callbacks)
# NOTE: this cell assumes final_model, X_train/X_test, Y_train/Y_train2/Y_test/Y_test2,
# reduce_lr and checkpointer are defined elsewhere; it is kept as a template and uses
# the current Keras argument name `epochs` instead of the deprecated `nb_epoch`.
history = final_model.fit(X_train, [Y_train, Y_train2],
                          epochs=100,
                          batch_size=256,
                          verbose=1,
                          validation_data=(X_test, [Y_test, Y_test2]),
                          callbacks=[reduce_lr, checkpointer],
                          shuffle=True)
callbacks = [EarlyStopping(monitor='loss',min_delta=0,patience=setting['patience'])]
def cross_validation(X,Y,setting,num_input):
model = getModel(setting,num_input)
preds = []
for train, test in LeaveOneOut().split(X, Y):
model.fit(X[train,:],Y[train],epochs=20,verbose=0, callbacks =callbacks)
probas_ = model.predict(X[test,:])
preds.append(probas_[0][0])
# Compute ROC curve and area the curve
fpr, tpr, thresholds = roc_curve(Y, preds)
roc_auc = auc(fpr, tpr)
if roc_auc < 0.5:
roc_auc = 1 - roc_auc
return roc_auc
def backward_selection(X,Y,setting):
survive_index=[i for i in range(X.shape[1])]
best_perf=0
for i in range(len(survive_index)-1):
perfs = []
print(survive_index)
for index in survive_index:
print(index)
survive_index_copy = [i for i in survive_index if i!=index]
perfs.append(cross_validation(X[:,survive_index_copy],Y,setting,num_input = len(survive_index)-1))
print("best_perf",best_perf)
max_index = np.argmax(perfs)
current_best = np.max(perfs)
print("current_best",current_best)
if current_best > best_perf:
best_perf = current_best
survive_index.remove(survive_index[max_index])
else:
break
return (survive_index,best_perf)
backward_selection(X[:,0:10],Y,setting)
fpr, tpr, thresholds = roc_curve(Y, preds)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, alpha=0.3)
plt.title('(AUC = %0.2f)' % (roc_auc))
plt.show()
def cross_validation(X=X,Y=Y,epochs_=20,num_input_ = 84):
model = getModel(num_input=num_input_)
preds = []
for train, test in LeaveOneOut().split(X, Y):
model.fit(X,Y,epochs=epochs_,verbose=0)
# print(test)
probas_ = model.predict(X[test,:])
preds.append(probas_[0][0])
# Compute ROC curve and area the curve
fpr, tpr, thresholds = roc_curve(Y, preds)
roc_auc = auc(fpr, tpr)
return roc_auc
survive_index=[i for i in range(4)]
def backward_selection(survive_index):
for i in range(len(survive_index)-1):
perfs = []
best_perf=0
for index in survive_index:
print(index,"\n")
survive_index_copy = [i for i in survive_index if i!=index]
perfs.append(cross_validation(X=X[:,survive_index_copy],Y=Y,epochs_=20,num_input_ = len(survive_index)-1))
max_index = np.argmax(perfs)
current_best = np.max(perfs)
print(current_best)
if current_best > best_perf:
best_perf = current_best
survive_index.remove(survive_index[max_index])
else:
break
return survive_index
backward_selection(survive_index)
max_index = np.argmax(perfs)
survive_index[max_index]
fpr, tpr, thresholds = roc_curve(Y, preds)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, alpha=0.3)
plt.title('(AUC = %0.2f)' % (roc_auc))
plt.show()
```
| github_jupyter |
This notebook contains code for model comparison. It assumes that optimal hyperparameters for the models have already been found.
# Imports
```
#imports
!pip install scipydirect
import math
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn import preprocessing
from sklearn.preprocessing import normalize
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import balanced_accuracy_score
import collections
from collections import Counter
from imblearn.over_sampling import SMOTE
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from IPython import display
from scipydirect import minimize # for DIvided RECTangles (DIRECT) method
import time
```
# 1) Describe classes of the following models: AdaFair, SMOTEBoost, ASR
#### 1.1) AdaFair
```
#AdaFair
class AdaFairClassifier(AdaBoostClassifier):
def __init__(self,
base_estimator=None, *,
n_estimators=50,
learning_rate=1,
algorithm='SAMME',
random_state=42,
protected=None,
epsilon = 0):
super().__init__(
base_estimator=base_estimator,
n_estimators=n_estimators,
algorithm = algorithm,
learning_rate=learning_rate,
random_state=random_state)
self.protected = np.array(protected)
self.algorithm = algorithm
self.epsilon = epsilon
def _boost_discrete(self, iboost, X, y, sample_weight, random_state):
"""Implement a single boost using the SAMME discrete algorithm."""
estimator = self._make_estimator(random_state=random_state)
estimator.fit(X, y, sample_weight=sample_weight)
y_predict = estimator.predict(X)
if iboost == 0:
self.classes_ = getattr(estimator, 'classes_', None)
self.n_classes_ = len(self.classes_)
# Instances incorrectly classified
incorrect = y_predict != y
# Error fraction
estimator_error = np.mean(
np.average(incorrect, weights=sample_weight, axis=0))
# Stop if classification is perfect
if estimator_error <= 0:
return sample_weight, 1., 0.
n_classes = self.n_classes_
# Stop if the error is at least as bad as random guessing
if estimator_error >= 1. - (1. / n_classes):
self.estimators_.pop(-1)
if len(self.estimators_) == 0:
print ("BaseClassifier in AdaBoostClassifier ensemble is worse than random, ensemble can not be fit.")
raise ValueError('BaseClassifier in AdaBoostClassifier '
'ensemble is worse than random, ensemble '
'can not be fit.')
return None, None, None
if len(self.protected) != len(y):
print ("Error: not given or given incorrect list of protected objects")
return None, None, None
#Compute fairness-related costs
#CUMULATIVE prediction
y_cumulative_pred = list(self.staged_predict(X))[-1]  # prediction of the ensemble built so far
u = np.array(self.get_fairness_related_costs(y, y_cumulative_pred, self.protected))
# Boost weight using multi-class AdaBoost SAMME alg
estimator_weight = self.learning_rate * (
np.log((1. - estimator_error) / estimator_error) +
np.log(n_classes - 1.))
# Only boost the weights if I will fit again
if not iboost == self.n_estimators - 1:
# Only boost positive weights
sample_weight = sample_weight * np.exp(estimator_weight * incorrect * (sample_weight > 0)) * (np.ones(len(u)) + u)
return sample_weight, estimator_weight, estimator_error
def get_fairness_related_costs(self, y, y_pred_t, protected):
y_true_protected, y_pred_protected, y_true_non_protected, y_pred_non_protected = separate_protected_from_non_protected(y, y_pred_t, protected)
#Rates for non protected group
tp, tn, fp, fn = tp_tn_fp_fn(y_true_non_protected, y_pred_non_protected)
FPR_non_protected = fp / (fp + tn)
FNR_non_protected = fn / (fn + tp)
#Rates for protected group
tp, tn, fp, fn = tp_tn_fp_fn(y_true_protected, y_pred_protected)
FPR_protected = fp / (fp + tn)
FNR_protected = fn / (fn + tp)
delta_FPR = - FPR_non_protected + FPR_protected
delta_FNR = - FNR_non_protected + FNR_protected
self.delta_FPR = delta_FPR
self.delta_FNR = delta_FNR
#Compute fairness related costs
u = []
for y_i, y_pred_t__i, protection in zip(y, y_pred_t, protected):
if y_i == 1 and y_pred_t__i == 0 and abs(delta_FNR) > self.epsilon and protection == 1 and delta_FNR > 0:
u.append(abs(delta_FNR))
elif y_i == 1 and y_pred_t__i == 0 and abs(delta_FNR) > self.epsilon and protection == 0 and delta_FNR < 0:
u.append(abs(delta_FNR))
elif y_i == 0 and y_pred_t__i == 1 and abs(delta_FPR) > self.epsilon and protection == 1 and delta_FPR > 0:
u.append(abs(delta_FPR))
elif y_i == 0 and y_pred_t__i == 1 and abs(delta_FPR) > self.epsilon and protection == 0 and delta_FPR < 0:
u.append(abs(delta_FPR))
else: u.append(0)
return u
```
#### 1.2 SMOTEBoost
```
#SMOTEBoost
random_state = 42
#4, y - true label
def ada_boost_eps(y, y_pred_t, distribution):
eps = np.sum((1 - (y == y_pred_t) + (np.logical_not(y) == y_pred_t)) * distribution)
return eps
#5
def ada_boost_betta(eps):
betta = eps/(1 - eps)
return betta
def ada_boost_w(y, y_pred_t):
w = 0.5 * (1 + (y == y_pred_t) - (np.logical_not(y) == y_pred_t))
return w
#6
def ada_boost_distribution(distribution, betta, w):
distribution = distribution * betta ** w / np.sum(distribution)
return distribution
def min_target(y):
minority_target = min(Counter(y), key=Counter(y).get)
return minority_target
class SMOTEBoost():
def __init__(self,
n_samples = 100,
k_neighbors = 5,
n_estimators = 50, #n_estimators = T
base_classifier = None,
random_state = 42,
get_eps = ada_boost_eps,
get_betta = ada_boost_betta,
get_w = ada_boost_w,
update_distribution=ada_boost_distribution):
self.n_samples = n_samples
self.k_neighbors = k_neighbors
self.n_estimators = n_estimators
self.base_classifier = base_classifier
self.random_state = random_state
self.get_eps = get_eps
self.get_betta = get_betta
self.get_w = get_w
self.update_distribution = update_distribution
def fit(self, X, y):
X = np.array(X)
distribution = np.ones(X.shape[0], dtype=float) / X.shape[0]
self.classifiers = []
self.betta = []
y = np.array(y)
for i in range(self.n_estimators):
minority_class = min_target(y)
X_min = X[np.where(y == minority_class)]
# create a new classifier
self.classifiers.append(self.base_classifier())
# SMOTE
self.smote = SMOTE(n_samples=self.n_samples,
k_neighbors=self.k_neighbors,
random_state=self.random_state)
self.smote.fit(X_min)
X_syn = self.smote.sample()
y_syn = np.full(X_syn.shape[0], fill_value=minority_class, dtype=np.int64)
# Modify distribution
distribution_syn = np.empty(X_syn.shape[0], dtype=np.float64)
distribution_syn[:] = 1. / X.shape[0]
mod_distribution = np.append(distribution, distribution_syn).reshape(1, -1)
mod_distribution = np.squeeze(normalize(mod_distribution, axis=1, norm='l1'))
# Concatenate original and synthetic datasets for training a weak learner
mod_X = np.vstack((X, X_syn))
mod_y = np.append(y, y_syn)
# Train a weak lerner
self.classifiers[-1].fit(mod_X, mod_y, sample_weight=mod_distribution)
# Make a prediction for the original dataset
y_pred_t = self.classifiers[-1].predict(X)
# Compute the pseudo-loss of hypothesis
eps_t = ada_boost_eps(y, y_pred_t, distribution)
betta_t = ada_boost_betta(eps_t)
w_t = ada_boost_w(y, y_pred_t)
self.betta.append(betta_t)
# Update distribution and normalize
distribution = ada_boost_distribution(distribution, betta_t, w_t)
def predict(self, X):
final_predictions_0 = np.zeros(X.shape[0])
final_predictions_1 = np.zeros(X.shape[0])
y_pred = np.empty(X.shape[0])
# get the weighted votes of the classifiers
for i in range(len(self.betta)):
h_i = self.classifiers[i].predict(X)
final_predictions_0 = final_predictions_0 + math.log(1/self.betta[i])*(h_i == 0)
final_predictions_1 = final_predictions_1 + math.log(1/self.betta[i])*(h_i == 1)
for i in range(len(final_predictions_0)):
if final_predictions_0[i] > final_predictions_1[i]:
y_pred[i] = 0
else:
y_pred[i] = 1
return y_pred
class SMOTE():
def __init__(self, n_samples, k_neighbors=5, random_state=None):
self.n_samples = n_samples
self.k = k_neighbors
self.random_state = random_state
def fit(self, X):
self.X = X
self.n_features = self.X.shape[1]
self.neigh = NearestNeighbors(n_neighbors=self.k + 1)  # +1 because the query point itself is returned and dropped in sample()
self.neigh.fit(self.X)
return self
def sample(self):
np.random.seed(seed=self.random_state)
S = np.zeros(shape=(self.n_samples, self.n_features))
for i in range(self.n_samples):
j = np.random.randint(0, self.X.shape[0])
nn = self.neigh.kneighbors(self.X[j].reshape(1, -1),
return_distance=False)[:, 1:]
nn_index = np.random.choice(nn[0])
print (self.X[nn_index], self.X[j])
dif = self.X[nn_index] - self.X[j]
gap = np.random.random()
S[i, :] = self.X[j, :] + gap * dif[:]
return S
```
#### 1.3 Adaptive sensitive reweighting
```
#Adaptive sensitive reweighting
class ReweightedClassifier:
def __init__(self, baze_clf, alpha, beta, params = {}):
"""
Input:
baze_clf - object from sklearn with methods .fit(sample_weight=), .predict(), .predict_proba()
alpha - list of alphas for sensitive and non-sensitive objects [alpha, alpha']
beta - list of betas for sensitive and non-sensitive objects [beta, beta']
params - **kwargs compatible with baze_clf
"""
self.baze_clf = baze_clf
self.model = None
self.alpha = np.array(alpha)
self.alphas = None
self.beta = np.array(beta)
self.betas = None
self.weights = None
self.prev_weights = None
self.params = params
def reweight_dataset(self, length, error, minority_idx):
"""
This function recalculates values of weights and saves their previous values
"""
if self.alphas is None or self.betas is None:
# If alpha_0, alpha_1 or beta_0, beta_1 are not defined,
# then define alpha_0 and beta_0 to every object from non-sensitive class,
# and alpha_1 and beta_1 to every object from sensitive class (minority).
self.alphas = np.ones(length) * self.alpha[0]
self.betas = np.ones(length) * self.beta[0]
self.alphas[minority_idx] = self.alpha[1]
self.betas[minority_idx] = self.beta[1]
# w_i_prev <- w_i for all i in dataset
self.prev_weights = self.weights.copy()
# w_i = alpha_i * L_{beta_i} (P'(y_pred_i != y_true_i))
#     + (1 - alpha_i) * L_{beta_i} (-P'(y_pred_i != y_true_i)),
# where
# L_{beta_i} (x) = exp(beta_i * x)
self.weights = self.alphas * np.exp(self.betas * error) \
+ (1 - self.alphas) * np.exp(- self.betas * error)
def pRule(self, prediction, minority_idx):
"""
This function calculates
| P(y_pred_i = 1 | i in S) P(y_pred_i = 1 | i not in S) |
pRule = min { ---------------------------- , ---------------------------- }
| P(y_pred_i = 1 | i not in S) P(y_pred_i = 1 | i in S) |
S - the group of sensitive objects
---------
Input:
prediction - labels ({0,1}) of a sample for which pRule is calculated
minority_idx - indexes of objects from a sensitive group
"""
# majority indexes = set of all indexes / set of minority indexes,
# where set of all indexes = all numbers from 0 to size of sample (=len(prediction))
majority_idx = set(np.linspace(0, len(prediction) - 1, len(prediction), dtype = int)).difference(minority_idx)
# minority = P(y_pred_i = 1 | i in minority)
# majority = P(y_pred_i = 1 | i in majority)
minority = prediction[minority_idx].mean()
majority = prediction[list(majority_idx)].mean()
minority = np.clip(minority, 1e-10, 1 - 1e-10)
majority = np.clip(majority, 1e-10, 1 - 1e-10)
return min(minority/majority, majority/minority)
def fit(self, X_train, y_train, X_test, y_test, minority_idx, verbose=True, max_iter=30):
# Initialize equal weights w_i = 1
self.weights = np.ones(len(y_train))
self.prev_weights = np.zeros(len(y_train))
# Lists for saving metrics
accuracys = []
pRules = []
differences = []
accuracy_plus_prule = []
# Adaptive Sensitive Reweighting
iteration = 0
while ((self.prev_weights - self.weights) ** 2).mean() > 10**(-6) and iteration < max_iter:
iteration += 1
# Train classifier on X_train with weights w_i
self.model = self.baze_clf(**self.params)
self.model.fit(X_train, y_train, sample_weight = self.weights)
# Use classifier to obtain P'(y_pred_i != y_true_i) (here it is called 'error')
prediction_proba = self.model.predict_proba(X_train)[:, 1]
error = (y_train == 1) * (1 - prediction_proba) + (y_train == 0) * prediction_proba
# Update weights
self.reweight_dataset(len(y_train), error, minority_idx)
# Get metrics on X_train
prediction = self.model.predict(X_train)
accuracys.append(accuracy_score(prediction, y_train))
pRules.append(self.pRule(prediction, minority_idx))
accuracy_plus_prule.append(accuracys[-1] + pRules[-1])
differences.append(((self.prev_weights - self.weights)**2).mean()**0.5)
# Visualize metrics if it's needed
if verbose:
display.clear_output(True)
fig, axes = plt.subplots(ncols=2, nrows=2, figsize=(16, 7))
metrics = [accuracys, pRules, accuracy_plus_prule, differences]
metrics_names = ["Accuracy score", "pRule", "Accuracy + pRule", "Mean of weight edits"]
for name, metric, ax in zip(metrics_names, metrics, axes.flat):
ax.plot(metric, label='train')
ax.set_title(f'{name}, iteration {iteration}')
ax.legend()
if name == "Mean of weight edits":
ax.set_yscale('log')
plt.show()
return accuracys, pRules, accuracy_plus_prule
def predict(self, X):
return self.model.predict(X)
def predict_proba(self, X):
return self.model.predict_proba(X)
def get_metrics_test(self, X_test, y_test, minority_idx_test):
"""
Obtain pRule and accuracy for trained model
"""
# Obtain predictions on X_test to calculate metrics
prediction_test = self.model.predict(X_test)
# Get metrics on test
accuracy_test = accuracy_score(prediction_test, y_test)
pRule_test = self.pRule(prediction_test, minority_idx_test)
return accuracy_test, pRule_test
def prep_train_model(X_train, y_train, X_test, y_test, minority_idx):
def train_model(a):
"""
Function of 4 variables (a[0], a[1], a[2], a[3]) that will be minimized by DIRECT method.
a[0], a[1] = alpha, alpha'
a[2], a[3] = beta, beta'
"""
model = ReweightedClassifier(LogisticRegression, [a[0], a[1]], [a[2], a[3]], params = {"max_iter": 4000})
_, _, accuracy_plus_prule = model.fit(X_train, y_train, X_test, y_test, minority_idx)
# We'll maximize [acc + pRule] which we get at the last iteration of Adaptive Sensitive Reweighting
return - accuracy_plus_prule[-1]
return train_model # return function for optimization
```
#### 1.4 Some functions used for fitting models, calculating metrics, and data separation
```
#This function returns a binary list indicating whether each instance belongs to the protected group (1) or not (0)
def get_protected_instances(X, feature, label):
protected = []
for i in range(len(X)):
if X.iloc[i][feature] == label:
protected.append(1)
else: protected.append(0)
return protected
#To calculate TRP and TNR for protected and non-protected groups, first separate them
def separate_protected_from_non_protected(y_true, y_pred, protected):
y_true_protected = []
y_pred_protected = []
y_true_non_protected = []
y_pred_non_protected = []
for true_label, pred_label, is_protected in zip(y_true, y_pred, protected):
if is_protected == 1:
y_true_protected.append(true_label)
y_pred_protected.append(pred_label)
elif is_protected == 0:
y_true_non_protected.append(true_label)
y_pred_non_protected.append(pred_label)
else:
print("Error: invalid value in protected array ", is_protected)
return 0,0,0,0
return (np.array(y_true_protected), np.array(y_pred_protected), np.array(y_true_non_protected), np.array(y_pred_non_protected))
def tp_tn_fp_fn(y_true, y_pred):
matrix = confusion_matrix(y_true, y_pred)
tp = matrix[1][1]
tn = matrix[0][0]
fp = matrix[0][1]
fn = matrix[1][0]
return (tp, tn, fp, fn)
#same pRule as in ASR, but used for calculating this metric on other classifiers' predictions
def pRule(prediction, minority_idx):
"""
This function calculates
| P(y_pred_i = 1 | i in S) P(y_pred_i = 1 | i not in S) |
pRule = min { ---------------------------- , ---------------------------- }
| P(y_pred_i = 1 | i not in S) P(y_pred_i = 1 | i in S) |
S - the group of sensitive objects
---------
Input:
prediction - labels ({0,1}) of a sample for which pRule is calculated
minority_idx - indexes of objects from a sensitive group
"""
# majority indexes = set of all indexes / set of minority indexes,
# where set of all indexes = all numbers from 0 to size of sample (=len(prediction))
majority_idx = set(np.linspace(0, len(prediction) - 1, len(prediction), dtype = int)).difference(minority_idx)
# minority = P(y_pred_i = 1 | i in minority)
# majority = P(y_pred_i = 1 | i in majority)
minority = prediction[minority_idx].mean()
majority = prediction[list(majority_idx)].mean()
minority = np.clip(minority, 1e-10, 1 - 1e-10)
majority = np.clip(majority, 1e-10, 1 - 1e-10)
return min(minority/majority, majority/minority)
```
# 2) Download datasets (run one cell for one dataset)
Adult census
```
#Adult census
#adult_census_names = ['old_id' ,'age','workclass','fnlwgt','education','education_num','marital_status','occupation','relationship','race','sex','capital_gain','capital_loss','hours_per_week','native_country']
X_train = pd.read_csv("splits/X_train_preprocessed_adult.csv").drop("Unnamed: 0", axis = 1)#, names = adult_census_names).iloc[1:]
X_test = pd.read_csv("splits/X_test_preprocessed_adult.csv").drop("Unnamed: 0", axis = 1)#, names = adult_census_names).iloc[1:]
y_train = pd.read_csv("splits/y_train_preprocessed_adult.csv")['income']
y_test = pd.read_csv("splits/y_test_preprocessed_adult.csv")['income']
reweight_prediction = "/content/predictions/y_pred_test_adult.csv"
#X_test, X_train = preprocess_adult_census(X_test.drop('old_id', axis = 1)), preprocess_adult_census(X_train.drop('old_id', axis = 1))
#y_test, y_train = adult_label_transform(y_test)['income'], adult_label_transform(y_train)['income']
#Obtain protected group (used in AdaFair)
protected_test = get_protected_instances(X_test, 'gender', 1)
protected_train = get_protected_instances(X_train, 'gender', 1)
# Obtain indexes of sensitive class (for regression algorithm)
minority_idx = X_train.reset_index(drop=True).index.values[X_train["gender"] == 1]
minority_idx_test = X_test.reset_index(drop=True).index.values[X_test["gender"] == 1]
#best hyperparameters for AdaFair
adafair_max_depth = 2
adafair_n_estimators = 20
#result of ASR optimizing
a_1 = [0.01851852, 0.99382716, 1.16666667, 2.94444444]
#print(len(X_train),len(y_train))
#X_train.head()
```
Bank
```
X_train = pd.read_csv("splits/X_train_preprocessed_bank.csv").drop("Unnamed: 0", axis = 1)#, names = adult_census_names).iloc[1:]
X_test = pd.read_csv("splits/X_test_preprocessed_bank.csv").drop("Unnamed: 0", axis = 1)#, names = adult_census_names).iloc[1:]
y_train = pd.read_csv("splits/y_train_preprocessed_bank.csv")['y']
y_test = pd.read_csv("splits/y_test_preprocessed_bank.csv")['y']
reweight_prediction = "/content/predictions/y_pred_test_bank.csv"
#X_test, X_train = preprocess_adult_census(X_test.drop('old_id', axis = 1)), preprocess_adult_census(X_train.drop('old_id', axis = 1))
#y_test, y_train = adult_label_transform(y_test)['income'], adult_label_transform(y_train)['income']
#Obtain protected group (used in AdaFair)
protected_test = get_protected_instances(X_test, 'age', 1)
protected_train = get_protected_instances(X_train, 'age', 1)
# Obtain indexes of sensitive class (for regression algorithm)
minority_idx = X_train.reset_index(drop=True).index.values[X_train["age"] == 1]
minority_idx_test = X_test.reset_index(drop=True).index.values[X_test["age"] == 1]
#best hyperparameters for AdaFair
adafair_max_depth = 2
adafair_n_estimators = 9
#result of ASR optimizing
a_1 = [0.87037037, 0.01851852, 2.72222222, 1.57407407]
#print(len(X_train),len(y_train))
#X_train.head()
```
COMPAS
```
X_train = pd.read_csv("splits/X_train_preprocessed_compas.csv").drop("Unnamed: 0", axis = 1)#, names = adult_census_names).iloc[1:]
X_test = pd.read_csv("splits/X_test_preprocessed_compas.csv").drop("Unnamed: 0", axis = 1)#, names = adult_census_names).iloc[1:]
y_train = pd.read_csv("splits/y_train_preprocessed_compas.csv")['two_year_recid']
y_test = pd.read_csv("splits/y_test_preprocessed_compas.csv")['two_year_recid']
reweight_prediction = "/content/predictions/y_pred_test_compas.csv"
#X_test, X_train = preprocess_adult_census(X_test.drop('old_id', axis = 1)), preprocess_adult_census(X_train.drop('old_id', axis = 1))
#y_test, y_train = adult_label_transform(y_test)['income'], adult_label_transform(y_train)['income']
#Obtain protected group (used in AdaFair)
protected_test = get_protected_instances(X_test, 'race', 0)
protected_train = get_protected_instances(X_train, 'race', 0)
# Obtain indexes of sensitive class (for regression algorithm)
minority_idx = X_train.reset_index(drop=True).index.values[X_train["race"] == 0]
minority_idx_test = X_test.reset_index(drop=True).index.values[X_test["race"] == 0]
#best hyperparameters for AdaFair
adafair_max_depth = 4
adafair_n_estimators = 5
#result of ASR optimizing
a_1 = [0.0308642, 0.72222222, 0.5, 0.45061728]
#print(len(X_train),len(y_train))
#y_train.head()
```
KDD Census
```
#adult_census_names = ['old_id' ,'age','workclass','fnlwgt','education','education_num','marital_status','occupation','relationship','race','sex','capital_gain','capital_loss','hours_per_week','native_country']
X_train = pd.read_csv("splits/X_train_preprocessed_kdd.csv").drop("Unnamed: 0", axis = 1)#, names = adult_census_names).iloc[1:]
X_test = pd.read_csv("splits/X_test_preprocessed_kdd.csv").drop("Unnamed: 0", axis = 1)#, names = adult_census_names).iloc[1:]
y_train = pd.read_csv("splits/y_train_preprocessed_kdd.csv")['income_50k']
y_test = pd.read_csv("splits/y_test_preprocessed_kdd.csv")['income_50k']
reweight_prediction = "/content/predictions/y_pred_test_kdd.csv"
#X_test, X_train = preprocess_adult_census(X_test.drop('old_id', axis = 1)), preprocess_adult_census(X_train.drop('old_id', axis = 1))
#y_test, y_train = adult_label_transform(y_test)['income'], adult_label_transform(y_train)['income']
#Obtain protected group (used in AdaFair)
protected_test = get_protected_instances(X_test, 'sex', 1)
protected_train = get_protected_instances(X_train, 'sex', 1)
# Obtain indexes of sensitive class (for regression algorithm)
minority_idx = X_train.reset_index(drop=True).index.values[X_train["sex"] == 1]
minority_idx_test = X_test.reset_index(drop=True).index.values[X_test["sex"] == 1]
#best hyperparameters for AdaFair
adafair_max_depth = 5
adafair_n_estimators = 11
#result of ASR optimizing
a_1 = [0.01851852, 0.99382716, 1.16666667, 2.94444444]
#print(len(X_train),len(y_train))
#X_train.head()
```
# 3) Create models, train classifiers
```
#Regression
# Create model with obtained hyperparameters alpha, alpha', beta, beta'
model_reweighted_classifier = ReweightedClassifier(LogisticRegression, [a_1[0], a_1[1]], [a_1[2], a_1[3]], params = {"max_iter": 4})
# Train model on X_train
model_reweighted_classifier.fit(X_train, y_train, X_test, y_test, minority_idx, verbose=False)
# Calculate metrics (pRule, accuracy) on X_test
accuracy_test, pRule_test = model_reweighted_classifier.get_metrics_test(X_test, y_test, minority_idx_test)
#print('ASR+CULEP for X_test')
#print(f"prule = {pRule_test:.6}, accuracy = {accuracy_test:.6}")
#print(f"prule + accuracy = {(pRule_test + accuracy_test):.6}")
#SMOTEBOOST
max_depth = 2
n_samples = 100
k_neighbors = 5
n_estimators = 5 # T
random_state = 42
get_base_clf = lambda: DecisionTreeClassifier(max_depth = max_depth)
smoteboost1 = SMOTEBoost(n_samples = n_samples,
k_neighbors = k_neighbors,
n_estimators = n_estimators,
base_classifier = get_base_clf,
random_state = random_state)
smoteboost1.fit(X_train, y_train)
#smote_boost = smoteboost1.predict(X_test)
#AdaFair
#Tolerance to unfairness
epsilon = 0
get_base_clf = lambda: DecisionTreeClassifier(max_depth=adafair_max_depth)
ada_fair = AdaFairClassifier(DecisionTreeClassifier(max_depth=adafair_max_depth),
algorithm="SAMME",
n_estimators=adafair_n_estimators,
protected = protected_train,
epsilon = epsilon)
ada_fair.fit(X_train, y_train)
#Adaboost
ada_boost_sklearn = AdaBoostClassifier(DecisionTreeClassifier(max_depth=max_depth),
algorithm="SAMME",
n_estimators=n_estimators)
ada_boost_sklearn.fit(X_train, y_train)
```
# 4) Compute and plot metrics
#### 4.1 Compute
```
names = ['ada_fair','ada_boost_sklearn', 'smoteboost', "reweighted_classifier"]
classifiers = [ada_fair, ada_boost_sklearn, smoteboost1, model_reweighted_classifier]
accuracy = {}
bal_accuracy = {}
TPR = {}
TNR = {}
eq_odds = {}
p_rule = {}
#DELETA
#y_test = y_test[:][1]
for i, clf in enumerate(classifiers):
print(names[i])
prediction = clf.predict(X_test)
if i == 3:
prediction = np.array(pd.read_csv(reweight_prediction, names = ['idx', 'pred'])['pred'][1:])
print((prediction), (y_test))
print(len(prediction), len(y_test))
accuracy[names[i]] = (prediction == y_test).sum() * 1. / len(y_test)
bal_accuracy[names[i]] = balanced_accuracy_score(y_test, prediction)
print('accuracy {}: {}'.format(names[i], (prediction == y_test).sum() * 1. / len(y_test)))
print('balanced accuracy {}: {}'.format(names[i], balanced_accuracy_score(y_test, prediction)))
y_true_protected, y_pred_protected, y_true_non_protected, y_pred_non_protected = separate_protected_from_non_protected(y_test, prediction, protected_test)
#TPR for protected group
tp, tn, fp, fn = tp_tn_fp_fn(y_true_protected, y_pred_protected)
TPR_protected = tp / (tp + fn)
TNR_protected = tn / (tn + fp)
FPR_protected = fp / (fp + tn)
FNR_protected = fn / (fn + tp)
print('TPR protected {}: {}'.format(names[i], TPR_protected))
print('TNR protected {}: {}'.format(names[i], TNR_protected))
TPR[names[i] + ' protected'] = TPR_protected
TNR[names[i] + ' protected'] = TNR_protected
#TPR for non protected group
tp, tn, fp, fn = tp_tn_fp_fn(y_true_non_protected, y_pred_non_protected)
TPR_non_protected = tp / (tp + fn)
TNR_non_protected = tn / (tn + fp)
FPR_non_protected = fp / (fp + tn)
FNR_non_protected = fn / (fn + tp)
print('TPR non protected {}: {}'.format(names[i], TPR_non_protected))
print('TNR non protected {}: {}'.format(names[i], TNR_non_protected))
delta_FPR = -FPR_non_protected + FPR_protected
delta_FNR = -FNR_non_protected + FNR_protected
eq_odds[names[i]] = abs(delta_FPR) + abs(delta_FNR)
TPR[names[i] + ' non protected'] = TPR_non_protected
TNR[names[i] + ' non protected'] = TNR_non_protected
p_rule[names[i]] = pRule(prediction, minority_idx_test)
print('pRule {}: {}'.format(names[i],p_rule[names[i]]))
```
#### 4.2 Plot
```
labels = ['Accuracy', 'Bal. accuracy', 'Eq. odds','TPR prot.', 'TPR non-prot', 'TNR prot.', 'TNR non-prot.', 'pRule']
adaFair_metrics = [accuracy['ada_fair'], bal_accuracy['ada_fair'], eq_odds['ada_fair'], TPR['ada_fair protected'], TPR['ada_fair non protected'], TNR['ada_fair protected'], TNR['ada_fair non protected'], p_rule['ada_fair']]
adaBoost_metrics = [accuracy['ada_boost_sklearn'], bal_accuracy['ada_boost_sklearn'], eq_odds['ada_boost_sklearn'], TPR['ada_boost_sklearn protected'], TPR['ada_boost_sklearn non protected'], TNR['ada_boost_sklearn protected'], TNR['ada_boost_sklearn non protected'], p_rule['ada_boost_sklearn']]
SMOTEBoost_metrics = [accuracy['smoteboost'], bal_accuracy['smoteboost'], eq_odds['smoteboost'], TPR['smoteboost protected'], TPR['smoteboost non protected'], TNR['smoteboost protected'], TNR['smoteboost non protected'], p_rule['smoteboost']]
reweighted_class_metrics = [accuracy['reweighted_classifier'], bal_accuracy['reweighted_classifier'], eq_odds['reweighted_classifier'], TPR['reweighted_classifier protected'], TPR['reweighted_classifier non protected'], TNR['reweighted_classifier protected'], TNR['reweighted_classifier non protected'],p_rule['reweighted_classifier']]
x = np.arange(len(labels)) # the label locations
width = 0.20 # the width of the bars
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 19}
matplotlib.rc('font', **font)
fig, ax = plt.subplots(figsize = [15, 5])
rects1 = ax.bar(x - 1.5*width, adaBoost_metrics, width, label='AdaBoost', color='gray')
rects2 = ax.bar(x - width/2 , SMOTEBoost_metrics, width, label='SMOTEBoost', color='red')
rects3 = ax.bar(x + width/2, adaFair_metrics, width, label='AdaFair', color='green')
rects4 = ax.bar(x + 1.5*width, reweighted_class_metrics, width, label='ASR', color='blue')
ax.set_ylabel('Scores')
#ax.set_title('Scores by algoritms')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend(loc=9, ncol = 4, bbox_to_anchor=(0.5, 1.15))
ax.grid()
fig.tight_layout()
plt.show()
```
| github_jupyter |
# Apple and Tesla Split on 8/31
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import math
import warnings
warnings.filterwarnings("ignore")
# for fetching data
import yfinance as yf
# input
# Coronavirus 2nd Wave
title = "Apple and Tesla"
symbols = ['AAPL', 'TSLA']
start = '2020-01-01'
end = '2020-08-31'
df = pd.DataFrame()
for symbol in symbols:
df[symbol] = yf.download(symbol, start, end)['Adj Close']
from datetime import datetime
from dateutil import relativedelta
d1 = datetime.strptime(start, "%Y-%m-%d")
d2 = datetime.strptime(end, "%Y-%m-%d")
delta = relativedelta.relativedelta(d2, d1)
print('How many years of investing?')
print('%s years' % delta.years)
number_of_years = delta.years
days = (df.index[-1] - df.index[0]).days
days
df.head()
df.tail()
df.min()
df.max()
df.describe()
plt.figure(figsize=(12,8))
plt.plot(df)
plt.title(title + ' Closing Price')
plt.legend(labels=df.columns)
plt.show()
# Normalize the data
normalize = (df - df.min())/ (df.max() - df.min())
plt.figure(figsize=(18,12))
plt.plot(normalize)
plt.title(title + ' Stocks Normalize')
plt.legend(labels=normalize.columns)
plt.show()
stock_rets = df.pct_change().dropna()
plt.figure(figsize=(12,8))
plt.plot(stock_rets)
plt.title(title + ' Stocks Returns')
plt.legend(labels=stock_rets.columns)
plt.show()
plt.figure(figsize=(12,8))
plt.plot(stock_rets.cumsum())
plt.title(title + ' Stocks Returns Cumulative Sum')
plt.legend(labels=stock_rets.columns)
plt.show()
sns.set(style='ticks')
ax = sns.pairplot(stock_rets, diag_kind='hist')
nplot = len(stock_rets.columns)
for i in range(nplot) :
for j in range(nplot) :
ax.axes[i, j].locator_params(axis='x', nbins=6, tight=True)
ax = sns.PairGrid(stock_rets)
ax.map_upper(plt.scatter, color='purple')
ax.map_lower(sns.kdeplot, color='blue')
ax.map_diag(plt.hist, bins=30)
for i in range(nplot) :
for j in range(nplot) :
ax.axes[i, j].locator_params(axis='x', nbins=6, tight=True)
plt.figure(figsize=(10,10))
corr = stock_rets.corr()
# plot the heatmap
sns.heatmap(corr,
xticklabels=corr.columns,
yticklabels=corr.columns,
cmap="Reds")
# Box plot
stock_rets.plot(kind='box',figsize=(24,8))
rets = stock_rets.dropna()
plt.figure(figsize=(16,8))
plt.scatter(rets.std(), rets.mean(),alpha = 0.5)
plt.title('Stocks Risk & Returns')
plt.xlabel('Risk')
plt.ylabel('Expected Returns')
plt.grid(which='major')
for label, x, y in zip(rets.columns, rets.std(), rets.mean()):
plt.annotate(
label,
xy = (x, y), xytext = (50, 50),
textcoords = 'offset points', ha = 'right', va = 'bottom',
arrowprops = dict(arrowstyle = '-', connectionstyle = 'arc3,rad=-0.3'))
rets = stock_rets.dropna()
area = np.pi*20.0
sns.set(style='darkgrid')
plt.figure(figsize=(16,8))
plt.scatter(rets.std(), rets.mean(), s=area)
plt.xlabel("Risk", fontsize=15)
plt.ylabel("Expected Return", fontsize=15)
plt.title("Return vs. Risk for Stocks", fontsize=20)
for label, x, y in zip(rets.columns, rets.std(), rets.mean()) :
plt.annotate(label, xy=(x,y), xytext=(50, 0), textcoords='offset points',
arrowprops=dict(arrowstyle='-', connectionstyle='bar,angle=180,fraction=-0.2'),
bbox=dict(boxstyle="round", fc="w"))
def annual_risk_return(stock_rets):
tradeoff = stock_rets.agg(["mean", "std"]).T
tradeoff.columns = ["Return", "Risk"]
tradeoff.Return = tradeoff.Return*252
tradeoff.Risk = tradeoff.Risk * np.sqrt(252)
return tradeoff
tradeoff = annual_risk_return(stock_rets)
tradeoff
import itertools
colors = itertools.cycle(["r", "b", "g"])
tradeoff.plot(x = "Risk", y = "Return", kind = "scatter", figsize = (13,9), s = 20, fontsize = 15, c='g')
for i in tradeoff.index:
plt.annotate(i, xy=(tradeoff.loc[i, "Risk"]+0.002, tradeoff.loc[i, "Return"]+0.002), size = 15)
plt.xlabel("Annual Risk", fontsize = 15)
plt.ylabel("Annual Return", fontsize = 15)
plt.title("Return vs. Risk for " + title + " Stocks", fontsize = 20)
plt.show()
rest_rets = rets.corr()
pair_value = rest_rets.abs().unstack()
pair_value.sort_values(ascending = False)
# Normalized Returns Data
Normalized_Value = ((rets[:] - rets[:].min()) /(rets[:].max() - rets[:].min()))
Normalized_Value.head()
Normalized_Value.corr()
normalized_rets = Normalized_Value.corr()
normalized_pair_value = normalized_rets.abs().unstack()
normalized_pair_value.sort_values(ascending = False)
print("Stock returns: ")
print(rets.mean())
print('-' * 50)
print("Stock risks:")
print(rets.std())
table = pd.DataFrame()
table['Returns'] = rets.mean()
table['Risk'] = rets.std()
table.sort_values(by='Returns')
table.sort_values(by='Risk')
rf = 0.01
table['Sharpe Ratio'] = (table['Returns'] - rf) / table['Risk']
table
table['Max Returns'] = rets.max()
table['Min Returns'] = rets.min()
table['Median Returns'] = rets.median()
# Total return over the whole period (compounded daily returns)
total_return = (1 + stock_rets).prod() - 1
table['Total Return'] = 100 * total_return
table
table['Average Return Days'] = (1 + total_return)**(1 / days) - 1
table
initial_value = df.iloc[0]
ending_value = df.iloc[-1]
table['CAGR'] = ((ending_value / initial_value) ** (252.0 / days)) -1
table
table.sort_values(by='Average Return Days')
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 6
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
ori=pd.read_csv('website_data_20190225.csv')
ori.drop(['STATE','DISTRICT','WLCODE','SITE_TYPE','TEH_NAME'],axis=1,inplace=True)
ori.replace(to_replace="'0",value=0,inplace=True)
ori.head()
dataset=pd.DataFrame().reindex_like(ori)
dataset1=pd.DataFrame().reindex_like(ori)
dataset.dropna(inplace=True)
dataset1.dropna(inplace=True)
# j=0
# for i in range(0,ori.shape[0]):
# if ori['STATE'][i]=='RJ':
# dataset.loc[j] = ori.iloc[i]
# j+=1
# dataset.drop(['STATE'],axis=1,inplace=True)
# j=0
# for i in range(0,ori.shape[0]):
# if ori['DISTRICT'][i]=='Ajmer':
# dataset.loc[j] = ori.iloc[i]
# j+=1
# dataset.drop(['DISTRICT'],axis=1,inplace=True)
j=0
for i in range(0,ori.shape[0]):
if ori['BLOCK_NAME'][i]=='Arain':
dataset1.loc[j] = ori.iloc[i]
j+=1
dataset1.drop(['BLOCK_NAME'],axis=1,inplace=True)
dataset.drop(['BLOCK_NAME'],axis=1,inplace=True)
j=0
for i in range(0,dataset1.shape[0]):
if dataset1['SITE_NAME'][i]=='Sanpla':
dataset.loc[j] = dataset1.iloc[i]
j+=1
lat=dataset["LAT"][0]
lon=dataset["LON"][0]
dataset.drop(['SITE_NAME','LAT','LON'],axis=1,inplace=True)
dataset
for i in range(0,dataset.shape[0]):
dataset['MONSOON'][i]=float(dataset['MONSOON'][i])
dataset['POMRB'][i]=float(dataset['POMRB'][i])
dataset['POMKH'][i]=float(dataset['POMKH'][i])
dataset['PREMON'][i]=float(dataset['PREMON'][i])
dataset['YEAR_OBS'][i]=int(dataset['YEAR_OBS'][i])
first=list(dataset['MONSOON'])
second=list(dataset['POMRB'])
third=list(dataset['POMKH'])
fourth=list(dataset['PREMON'])
dataset['MONSOON']=pd.core.frame.DataFrame(x+y+z+w for x, y,z,w in zip(first, second, third, fourth))
dataset.drop(['POMRB','POMKH','PREMON'],axis=1,inplace=True)
dataset = dataset.iloc[::-1]
dataset
dataset['YEAR_OBS']=(dataset['YEAR_OBS']).apply(np.int64)
dataset['YEAR_OBS']=pd.to_datetime(dataset['YEAR_OBS'],yearfirst=True,format='%Y',infer_datetime_format=True)
indexedDataset=dataset.set_index(['YEAR_OBS'])
from datetime import datetime
indexedDataset.head(50)
plt.xlabel('Years')
plt.ylabel('Water-Level')
plt.plot(indexedDataset)
```
- A stationary time series is one whose statistical properties such as mean, variance, autocorrelation, etc. are all constant over time. Most statistical forecasting methods are based on the assumption that the time series can be rendered approximately stationary (i.e., "stationarized") through the use of mathematical transformations. A stationarized series is relatively easy to predict: you simply predict that its statistical properties will be the same in the future as they have been in the past!
- We can check stationarity using the following:
- - Plotting Rolling Statistics: We can plot the moving average or moving variance and see if it varies with time. This is more of a visual technique.
- - Dickey-Fuller Test: This is one of the statistical tests for checking stationarity. Here the null hypothesis is that the TimeSeries is non-stationary. The test results comprise of a Test Statistic and some Critical Values for difference confidence levels. If the ‘Test Statistic’ is less than the ‘Critical Value’, we can reject the null hypothesis and say that the series is stationary.
```
from statsmodels.tsa.stattools import adfuller
def test_stationary(timeseries):
#Determing rolling statistics
moving_average=timeseries.rolling(window=12).mean()
standard_deviation=timeseries.rolling(window=12).std()
#Plot rolling statistics:
plt.plot(timeseries,color='blue',label="Original")
plt.plot(moving_average,color='red',label='Mean')
plt.plot(standard_deviation,color='black',label='Standard Deviation')
plt.legend(loc='best') #best for axes
plt.title('Rolling Mean & Deviation')
# plt.show()
plt.show(block=False)
#Perform Dickey-Fuller test:
print('Results Of Dickey-Fuller Test')
tstest=adfuller(timeseries['MONSOON'],autolag='AIC')
tsoutput=pd.Series(tstest[0:4],index=['Test Statistics','P-value','#Lags used',"#Obs. used"])
#Test Statistics should be less than the Critical Value for Stationarity
#lesser the p-value, greater the stationarity
# print(list(dftest))
for key,value in tstest[4].items():
tsoutput['Critical Value (%s)'%key]=value
print((tsoutput))
test_stationary(indexedDataset)
```
- There are 2 major reasons behind non-stationarity of a TS:
- - Trend – varying mean over time. For example, in this case the average water level changes over the years.
- - Seasonality – variations at specific time-frames, e.g. people might tend to buy cars in a particular month because of a pay increment or festivals.
## Indexed Dataset Logscale
```
indexedDataset_logscale=np.log(indexedDataset)
test_stationary(indexedDataset_logscale)
```
## Dataset Log Minus Moving Average (dl_ma)
```
rolmeanlog=indexedDataset_logscale.rolling(window=12).mean()
dl_ma=indexedDataset_logscale-rolmeanlog
dl_ma.head(12)
dl_ma.dropna(inplace=True)
dl_ma.head(12)
test_stationary(dl_ma)
```
## Exponential Decay Weighted Average (edwa)
```
edwa=indexedDataset_logscale.ewm(halflife=12,min_periods=0,adjust=True).mean()
plt.plot(indexedDataset_logscale)
plt.plot(edwa,color='red')
```
## Dataset Logscale Minus Moving Exponential Decay Average (dlmeda)
```
dlmeda=indexedDataset_logscale-edwa
test_stationary(dlmeda)
```
## Eliminating Trend and Seasonality
- Differencing – taking the difference with a particular time lag
- Decomposition – modeling both trend and seasonality and removing them from the model.
# Differencing
## Dataset Log Div Shifting (dlds)
```
#Before Shifting
indexedDataset_logscale.head()
#After Shifting
indexedDataset_logscale.shift().head()
dlds=indexedDataset_logscale-indexedDataset_logscale.shift()
dlds.dropna(inplace=True)
test_stationary(dlds)
```
# Decomposition
```
from statsmodels.tsa.seasonal import seasonal_decompose
decompostion= seasonal_decompose(indexedDataset_logscale,freq=10)
trend=decompostion.trend
seasonal=decompostion.seasonal
residual=decompostion.resid
plt.subplot(411)
plt.plot(indexedDataset_logscale,label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend,label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonal')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual,label='Residual')
plt.legend(loc='best')
plt.tight_layout() #To Show Multiple Grpahs In One Output, Use plt.subplot(abc)
```
- Here trend, seasonality are separated out from data and we can model the residuals. Lets check stationarity of residuals:
```
decomposedlogdata=residual
decomposedlogdata.dropna(inplace=True)
test_stationary(decomposedlogdata)
```
# Forecasting a Time Series
- ARIMA stands for Auto-Regressive Integrated Moving Averages. The ARIMA forecasting for a stationary time series is nothing but a linear (like a linear regression) equation. The predictors depend on the parameters (p,d,q) of the ARIMA model:
- - Number of AR (Auto-Regressive) terms (p): AR terms are just lags of dependent variable. For instance if p is 5, the predictors for x(t) will be x(t-1)….x(t-5).
- - Number of MA (Moving Average) terms (q): MA terms are lagged forecast errors in prediction equation. For instance if q is 5, the predictors for x(t) will be e(t-1)….e(t-5) where e(i) is the difference between the moving average at ith instant and actual value.
- - Number of Differences (d): These are the number of nonseasonal differences, i.e. in this case we took the first order difference. So either we can pass that variable and put d=0 or pass the original variable and put d=1. Both will generate same results.
- An importance concern here is how to determine the value of ‘p’ and ‘q’. We use two plots to determine these numbers.
- - Autocorrelation Function (ACF): It is a measure of the correlation of the TS with a lagged version of itself. For instance at lag 5, ACF would compare the series at time instants ‘t1’…‘t2’ with the series at instants ‘t1-5’…‘t2-5’ (t1-5 and t2 being end points).
- - Partial Autocorrelation Function (PACF): This measures the correlation between the TS and a lagged version of itself but after eliminating the variations already explained by the intervening comparisons. E.g. at lag 5, it will check the correlation but remove the effects already explained by lags 1 to 4.
## ACF & PACF Plots
```
from statsmodels.tsa.stattools import acf,pacf
lag_acf=acf(dlds,nlags=20)
lag_pacf=pacf(dlds,nlags=20,method='ols')
plt.subplot(121)
plt.plot(lag_acf)
plt.axhline(y=0, linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(dlds)),linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(dlds)),linestyle='--',color='gray')
plt.title('AutoCorrelation Function')
plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0, linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(dlds)),linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(dlds)),linestyle='--',color='gray')
plt.title('PartialAutoCorrelation Function')
plt.tight_layout()
```
- In this plot, the two dotted lines on either sides of 0 are the confidence interevals. These can be used to determine the ‘p’ and ‘q’ values as:
- - p – The lag value where the PACF chart crosses the upper confidence interval for the first time. If we notice closely, in this case p=2.
- - q – The lag value where the ACF chart crosses the upper confidence interval for the first time. If we notice closely, in this case q=2.
```
from statsmodels.tsa.arima_model import ARIMA
model=ARIMA(indexedDataset_logscale,order=(5,1,0))
results_AR=model.fit(disp=-1)
plt.plot(dlds)
plt.plot(results_AR.fittedvalues,color='red')
plt.title('RSS: %.4f'%sum((results_AR.fittedvalues-dlds['MONSOON'])**2))
print('Plotting AR Model')
model = ARIMA(indexedDataset_logscale, order=(0, 1, 2)) #0,1,2
results_MA = model.fit(disp=-1)
plt.plot(dlds)
plt.plot(results_MA.fittedvalues, color='red')
plt.title('RSS: %.4f'%sum((results_MA.fittedvalues-dlds['MONSOON'])**2))
print('Plotting MA Model')
import itertools
a={}
p=d=q=range(0,6)
pdq=list(itertools.product(p,d,q))
for i in pdq:
try:
model_arima=(ARIMA(indexedDataset_logscale,order=i)).fit()
a[i]=model_arima.aic
except:
continue
# min(a, key=a.get)
RSS=[]
RSS1=[]
for i in a.keys():
model = ARIMA(indexedDataset_logscale, order=i)
results_ARIMA = model.fit(disp=-1)
RSS.append(sum((results_ARIMA.fittedvalues-dlds['MONSOON'])**2))
for i in range(0,len(RSS)):
if(~np.isnan(RSS[i])):
RSS1.append(RSS[i])
min(RSS1)
model = ARIMA(indexedDataset_logscale, order=(5, 1, 2))
results_ARIMA = model.fit(disp=-1)
plt.plot(dlds)
plt.plot(results_ARIMA.fittedvalues, color='red')
plt.title('RSS: %.4f'%sum((results_ARIMA.fittedvalues-dlds['MONSOON'])**2))
print('Plotting Combined Model')
```
# Taking it back to original scale from residual scale
```
#storing the predicted results as a separate series
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
predictions_ARIMA_diff.head()
```
- Notice that these start from the second observation and not the first. Why? This is because we took a lag of 1 and the first element doesn’t have anything before it to subtract from. The way to convert the differencing to log scale is to add these differences consecutively to the base number. An easy way to do it is to first determine the cumulative sum at index and then add it to the base number.
```
#convert to cummuative sum
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA_diff_cumsum
predictions_ARIMA_log = pd.Series(indexedDataset_logscale['MONSOON'].iloc[0], index=indexedDataset_logscale.index)
predictions_ARIMA_log
predictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum,fill_value=0)
predictions_ARIMA_log
```
- Here the first element is base number itself and from there on the values cumulatively added.
```
#Last step is to take the exponent and compare with the original series.
predictions_ARIMA = np.exp(predictions_ARIMA_log)
plt.plot(indexedDataset)
plt.plot(predictions_ARIMA)
plt.title('RMSE: %.4f'% np.sqrt(sum((predictions_ARIMA-indexedDataset['MONSOON'])**2)/len(indexedDataset)))
```
- Finally we have a forecast at the original scale.
```
results_ARIMA.plot_predict(1,26)
# plot_predict(1, 26): in-sample fitted values plus the forecast out to step 26
x=results_ARIMA.forecast(steps=5)
print(x)
#forecast values are on the differenced log scale
for i in range(0,5):
print(x[0][i],end='')
print('\t',x[1][i],end='')
print('\t',x[2][i])
np.exp(results_ARIMA.forecast(steps=5)[0])
predictions_ARIMA_diff = pd.Series(results_ARIMA.forecast(steps=5)[0], copy=True)
predictions_ARIMA_diff.head()
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA_diff_cumsum.head()
predictions_ARIMA_log=[]
for i in range(0,len(predictions_ARIMA_diff_cumsum)):
predictions_ARIMA_log.append(predictions_ARIMA_diff_cumsum[i]+3.411478)
predictions_ARIMA_log
#Last step is to take the exponent and compare with the original series.
predictions_ARIMA = np.exp(predictions_ARIMA_log)
plt.subplot(121)
plt.plot(indexedDataset)
plt.subplot(122)
plt.plot(predictions_ARIMA)
plt.tight_layout()
# plt.title('RMSE: %.4f'% np.sqrt(sum((predictions_ARIMA-indexedDataset['MONSOON'])**2)/len(indexedDataset)))
np.exp(predictions_ARIMA_log)
```
| github_jupyter |
# 2 Dead reckoning
*Dead reckoning* is a means of navigation that does not rely on external observations. Instead, a robot’s position is estimated by summing its incremental movements relative to a known starting point.
Estimates of the distance traversed are usually obtained from measuring how many times the wheels have turned, and how many times they have turned in relation to each other. For example, the wheels of the robot could be attached to an odometer, similar to the device that records the mileage of a car.
In RoboLab we will calculate the position of a robot from how long it moves in a straight line or rotates about its centre. We will assume that the length of time for which the motors are switched on is directly related to the distance travelled by the wheels.
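As an aside, the cell below is a minimal sketch of the dead-reckoning calculation itself for a differential-drive robot. It is not part of the simulator API, and the wheel diameter and track width used are assumed, illustrative values rather than the simulated robot's actual dimensions.

```python
# Minimal dead-reckoning sketch (illustrative only; wheel diameter and track
# width are assumed values, not those of the simulated robot).
import math

WHEEL_DIAMETER = 0.056   # metres (assumed)
TRACK_WIDTH = 0.12       # metres between the two wheels (assumed)

def update_pose(x, y, theta, left_rotations, right_rotations):
    """Estimate the new pose after each wheel turns by the given number of rotations."""
    d_left = left_rotations * math.pi * WHEEL_DIAMETER
    d_right = right_rotations * math.pi * WHEEL_DIAMETER
    d_centre = (d_left + d_right) / 2           # distance moved by the robot's centre
    d_theta = (d_right - d_left) / TRACK_WIDTH  # change in heading (radians)
    x += d_centre * math.cos(theta + d_theta / 2)
    y += d_centre * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta

# Sum incremental movements relative to a known starting point
pose = (0.0, 0.0, 0.0)
for left, right in [(5, 5), (2, 3), (10, 10)]:   # wheel rotations for each leg
    pose = update_pose(*pose, left, right)
print(pose)
```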
*By design, the simulator does not provide the robot with access to any magical GPS-style service. In principle, we could create a magical ‘simulated-GPS’ sensor that would allow the robot to identify its location from the simulator’s point of view; but in the real world we can’t always guarantee that external location services are available. For example, GPS doesn’t work indoors or underground, or even in many cities where line-of-sight access to four or more GPS satellites is not available.*
*Furthermore, the robot cannot magically teleport itself to a new location from within a program. Only the magics can teleport the robot to a specific location...*
*Although the simulator is omniscient and does keep track of where the robot is, the robot must figure out for itself where it is based on things like how far the motors have turned, or from its own sensor readings (ultrasound-based distance to a target, for example, or gyroscope heading); you will learn how to make use of sensors for navigation in later notebooks.*
## 2.1 Activity – Dead reckoning
An environment for the simulated robot to navigate is shown below, based on the 2018 First Lego League ‘Into Orbit’ challenge.
The idea is that the robot must get to the target satellite from its original starting point by avoiding the obstacles in its direct path.

The [First Lego League (FLL)](https://www.firstlegoleague.org/) is a friendly international youth-based robot competition in which teams compete at national and international level on an annual basis. School teams are often coached by volunteers. In the UK, volunteers often coach teams under the auspices of the [STEM Ambassadors Scheme](https://www.stem.org.uk/stem-ambassadors). Many companies run volunteering schemes that allow employees to volunteer their skills in company time using schemes such as STEM Ambassadors.
Load in the simulator in the usual way:
```
from nbev3devsim.load_nbev3devwidget import roboSim, eds
%load_ext nbev3devsim
```
To navigate the environment, we will use a small robot configuration within the simulator. The robot configuration can be set via the simulator user interface, or by passing the `-r Small_Robot` parameter setting in the simulator magic.
The following program should drive the robot from its starting point to the target, whilst avoiding the obstacles. We define the obstacle as being avoided if it is not crossed by the robot’s *pen down* trail.
Load the *FLL_2018_Into_Orbit* background into the simulator. Run the following code cell to download the program to the simulator and then, with the *pen down*, run the program in the simulator.
Remember, you can use the `-P / --pencolor` flag to change the pen colour and the `-C / --clear` option to clear the pen trace.
Does the robot reach the target satellite without encountering any obstacles?
```
%%sim_magic_preloaded -b FLL_2018_Into_Orbit -p -r Small_Robot
# Turn on the spot to the right
tank_turn.on_for_rotations(100, SpeedPercent(70), 1.7 )
# Go forwards
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 20)
# Slight graceful turn to left
tank_drive.on_for_rotations(SpeedPercent(35), SpeedPercent(50), 8.5)
# Turn on the spot to the left
tank_turn.on_for_rotations(-100, SpeedPercent(75), 0.8)
# Forwards a bit
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 2.0)
# Turn on the spot a bit more to the right
tank_turn.on_for_rotations(100, SpeedPercent(60), 0.4 )
# Go forwards a bit more and dock on the satellite
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 1.0)
say("Hopefully I have docked with the satellite...")
```
*Add your notes on how well the simulated robot performed the task here.*
To set the speeds and times, I used a bit of trial and error.
If the route had been much more complex, then I would have been tempted to comment out the steps up I had already run and add new steps that would be applied from wherever the robot was currently located.
Note that the robot could have taken other routes to get to the satellite – I just thought I should avoid the asteroid!
### 2.1.1 Using motor tacho counts to identify how far the robot has travelled
In the above example, the motors were turned on for a specific amount of time to move the robot on each leg of its journey. This would not be an appropriate control strategy if we wanted to collect sensor data along the route, because the `on_for_X()` motor commands are blocking commands.
However, suppose we replaced the forward driving `tank_drive.on_for_rotations()` commands with commands of the form:
```python
from time import sleep
tank_drive.on(SPEED)
while int(tank_drive.left_motor.position) < DISTANCE:
# We need something that takes a finite time
# to run in the loop or the program will hang
sleep(0.1)
```
Now we could drive the robot forwards until the motor tacho count exceeds a specified `DISTANCE` and at the same time, optionally include additional commands, such as sensor data-logging commands, inside the body of each `while` loop.
*As well as `tank_drive.left_motor.position` we can also refer to `tank_drive.right_motor.position`. Also note that these values are returned as strings and need to be cast to integers for numerical comparisons.*
### 2.1.2 Activity – Dead reckoning over distances (optional)
Use the `.left_motor.position` and/or `.right_motor.position` motor tacho counts in a program that allows the robot to navigate from its home base to the satellite rendezvous.
*Your design notes here.*
```
# YOUR CODE HERE
```
*Your notes and observations here.*
## 2.2 Challenge – Reaching the moon base
In the following code cell, write a program to move the simulated robot from its location servicing the satellite to the moon base identified as the circular area marked on the moon in the top right-hand corner of the simulated world.
In the simulator, set the robot’s *x* location to `1250` and *y* location to `450`.
Use the following code cell to write your own dead-reckoning program to drive the robot to the moon base at location `(2150, 950)`.
```
%%sim_magic_preloaded
# YOUR CODE HERE
```
## 2.3 Dead reckoning with noise
The robot traverses its path using timing information for dead reckoning. In principle, if the simulated robot had a map then it could calculate all the distances and directions for itself, convert these to times, and dead reckon its way to the target. However, there is a problem with dead reckoning: *noise*.
In many physical systems, a perfect intended behaviour is subject to *noise* – random perturbations that arise within the system as time goes on as a side effect of its operation. In a robot, noise might arise in the behaviour of the motors, the transmission or the wheels. The result is that the robot does not execute its motion without error. We can model noise effects in the mobility system of our robot by adding a small amount of noise to the motor speeds as the simulator runs. This noise component may speed up or slow down the speed of each motor, in a random way. As with real systems, the noise represents slight random deviations from the theoretical, ideal behaviour.
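The cell below gives a rough, self-contained illustration of the effect. It is independent of the simulator and of its actual noise model, and it reuses the `update_pose()` function from the earlier sketch: the same nominal wheel rotations are perturbed by a small amount of Gaussian noise, so nominally identical runs end up in different places.

```python
# Illustrative only: perturb the wheel rotations with Gaussian noise and watch
# the dead-reckoned end point drift between runs (assumes update_pose() from
# the earlier sketch has been defined).
import random

def noisy_straight_run(steps=50, rotations_per_step=0.5, noise_sd=0.05):
    pose = (0.0, 0.0, 0.0)
    for _ in range(steps):
        left = rotations_per_step + random.gauss(0, noise_sd)
        right = rotations_per_step + random.gauss(0, noise_sd)
        pose = update_pose(*pose, left, right)
    return pose

for trial in range(3):
    print(noisy_straight_run())
```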
For the following experiment, create a new, empty background cleared of pen traces.
```
%sim_magic -b Empty_Map --clear
```
Run the following code cell to download the program to the simulator using an empty background (select the *Empty_Map*) and the *Pen Down* mode selected. Also reset the initial location of the robot to an *x* value of `150` and *y* value of `400`.
Run the program in the simulator and observe what happens.
```
%%sim_magic_preloaded -b Empty_Map -p -x 150 -y 400 -r Small_Robot --noisecontrols
tank_drive.on_for_rotations(SpeedPercent(30),
SpeedPercent(30), 10)
```
*Record your observations here describing what happens when you run the program.*
When you run the program, you should see the robot drive forwards a short way in a straight line, leaving a straight line trail behind it.
Reset the location of the robot. Within the simulator, use the *Noise controls* to increase the *Wheel noise* value from zero by dragging the slider to the right a little way. Alternatively, add noise in the range `0...500` using the `--motornoise / -M` magic flag.
Run the program in the simulator again.
You should notice this time that the robot does not travel in a straight line. Instead, it drifts from side to side, although possibly to one side of the line.
Move the robot back to the start position, or rerun the previous code cell to do so, and run the program in the simulator again. This time, you should see it follows yet another different path.
Depending on how severe the noise setting is, the robot will travel closer (low noise) to the original straight line, or follow an ever-more erratic path (high noise).
*Record your own notes and observations here describing the behaviour of the robot for different levels of motor noise.*
Clear the pen traces from the simulator by running the following line magic:
```
%sim_magic -C
```
Now run the original satellite-finding dead-reckoning program again, using the *FLL_2018_Into_Orbit* background, but in the presence of *Wheel noise*. How well does it perform this time compared to previously?
```
%%sim_magic_preloaded -b FLL_2018_Into_Orbit -p -r Small_Robot
# Turn on the spot to the right
tank_turn.on_for_rotations(100, SpeedPercent(70), 1.7 )
# Go forwards
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 20)
# Slight graceful turn to left
tank_drive.on_for_rotations(SpeedPercent(35), SpeedPercent(50), 8.5)
# Turn on the spot to the left
tank_turn.on_for_rotations(-100, SpeedPercent(75), 0.8)
# Forwards a bit
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 2.0)
# Turn on the spot a bit more to the right
tank_turn.on_for_rotations(100, SpeedPercent(60), 0.4 )
# Go forwards a bit more and dock on the satellite
tank_drive.on_for_rotations(SpeedPercent(30), SpeedPercent(30), 1.0)
say("Did I avoid crashing and dock with the satellite?")
```
Reset the robot to its original location and run the program in the simulator again. Even with the same level of motor noise as on the previous run, how does the path followed by the robot this time compare with the previous run?
*Add your own notes and observations here.*
## 2.4 Summary
In this notebook, you have seen how we can use dead reckoning to move the robot along a specified path. Using the robot’s motor speeds and by monitoring how long the motors are switched on for, we can use distance–time calculations to estimate the robot’s path. If we add in accurate measurements regarding how far we want the robot to travel, and in what direction, this provides one way of helping the robot to navigate to a particular waypoint.
However, in the presence of noise, this approach is likely to be very unreliable: whilst the robot may think it is following one path, as determined by how long it has turned its motors on, and at what speed, it may in fact be following another path. In a real robot, the noise may be introduced in all sorts of ways, including from friction in the motor bearings, the time taken to accelerate from a standing start and get up to speed, and loss of traction effects such as wheel spin and slip as the robot’s wheels turn.
Whilst in some cases it may reach the target safely, in others it may end up somewhere completely different, or encounter an obstacle along the way.
| github_jupyter |
## What this code does
In short, it is a reverse meme search that identifies the source of a meme. It takes an image copypasta, extracts the individual *subimages* and compares them with a database of pictures (the database should be made up of copypastas, which is in the TODO list).
### TODO
### Clean up the code
- There are many repetitive import statements.
- The code saves each picture to a file so that it can be loaded into the model.
- Explain (or remove) anything in this code that cannot currently be explained.
- Change VGG16 to Xception (because I can't upgrade both TF and keras for reasons).
#### Feature vector robustness check
To what extent do the following transformations affect the feature vector? (A sketch of one such check appears after the list below.)
- crop (a little, add bounding boxes)
- photoshop - e.g. cropping a face onto a body
- rotate the image (a little, a lot)
- add text (different sizes)
- vandalised - scribbling markers over
- add noise (Gaussian etc)
- compression changes
- recoloring - grey-scale
- picture effects - e.g. twisted picture meme
- special effects - e.g. shining eyes meme
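As a starting point, such a check might compare the feature vector of an original image against that of a transformed copy, for example using cosine similarity. The sketch below assumes the `calc_feature_vector` helper from `image_database_helper.ipynb` behaves as it does later in this notebook; the rotation transform, temporary file names and example path are illustrative.

```
import cv2
import numpy as np

def cosine_similarity(a, b):
    a, b = np.ravel(a), np.ravel(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rotation_check(model, path, angle=15):
    """Compare feature vectors of an image before and after a small rotation."""
    img = cv2.imread(path)
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h))
    cv2.imwrite("temp_orig.jpg", img)
    cv2.imwrite("temp_rot.jpg", rotated)
    v1 = calc_feature_vector(model, "temp_orig.jpg")
    v2 = calc_feature_vector(model, "temp_rot.jpg")
    return cosine_similarity(v1, v2)

# e.g. rotation_check(model, "./database/some_image.jpg", angle=15)
```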
#### Image separation testing
We need to ensure the individual pictures are separated correctly.
- pictures now don't have borders
- pictures are no longer rectangular
- whether it identifies the source of a cropped face
#### Database management
We need to preprocess the database. Currently the feature vectors are only calculated when you start this notebook.
Moreover, since the database of copypastas will not consist of single images, we need to process that aspect as well. From each copypasta we need to identify its subimages and then calculate their feature vectors. There also needs to be some way to associate the feature vectors and the locations of the subimages with the image copypasta, together with its metadata - in a manner that is scalable.
### import imagenet model
```
%run image_database_helper.ipynb
model = init_model()
```
### making a list of all the files
```
!rm 'imgs/.DS_Store'
images = findfiles("new/")
print(len(images))
```
### Processing pictures
```
from PIL import Image
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
import numpy as np
import cv2
import csv
fieldnames = ['img_file_name',
'number_of_subimages',
'subimage_number',
'x',
'y',
'w',
'h',
'feature_vector']
import os
if not os.path.exists('index_subimage.csv'):
with open('index_subimage.csv', 'w') as csvfile:
db = csv.DictWriter(csvfile, fieldnames=fieldnames)
db.writeheader()
import subprocess
import csv
for img_name in images:
path_image_to_analyse = "./new/"+img_name
print(img_name)
img = cv2.imread(path_image_to_analyse)
output_boxes = get_bounding_boxes(img)
for i, box in enumerate(output_boxes):
[x,y,w,h] = box
output_img = np.array(img[y:y+h, x:x+w])
cv2.imwrite("temp.jpg",output_img)
feature_vector = calc_feature_vector(model, "temp.jpg")
dict_to_write = {'img_file_name':img_name,
'number_of_subimages':len(output_boxes),
'subimage_number':i,
'x':x,
'y':y,
'w':w,
'h':h,
'feature_vector':feature_vector}
with open('index_subimage.csv', 'a') as csvfile:
db = csv.DictWriter(csvfile, fieldnames=fieldnames)
db.writerow(dict_to_write)
subprocess.run("mv ./new/{} ./database/{}".format(img_name,img_name),shell=True)
!cp ./database/* ./new/
```
| github_jupyter |
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will use your trained model to generate captions for images in the test dataset.
This notebook **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Get Data Loader for Test Dataset
- [Step 2](#step2): Load Trained Models
- [Step 3](#step3): Finish the Sampler
- [Step 4](#step4): Clean up Captions
- [Step 5](#step5): Generate Predictions!
<a id='step1'></a>
## Step 1: Get Data Loader for Test Dataset
Before running the code cell below, define the transform in `transform_test` that you would like to use to pre-process the test images.
Make sure that the transform that you define here agrees with the transform that you used to pre-process the training images (in **2_Training.ipynb**). For instance, if you normalized the training images, you should also apply the same normalization procedure to the test images.
```
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from torchvision import transforms
# TODO #1: Define a transform to pre-process the testing images.
transform_test = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Create the data loader.
data_loader = get_loader(transform=transform_test,
mode='test')
```
Run the code cell below to visualize an example test image, before pre-processing is applied.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Obtain sample image before and after pre-processing.
orig_image, image = next(iter(data_loader))
# Visualize sample image, before pre-processing.
plt.imshow(np.squeeze(orig_image))
plt.title('example image')
plt.show()
```
<a id='step2'></a>
## Step 2: Load Trained Models
In the next code cell we define a `device` that you will use to move PyTorch tensors to the GPU (if CUDA is available). Run this code cell before continuing.
```
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
Before running the code cell below, complete the following tasks.
### Task #1
In the next code cell, you will load the trained encoder and decoder from the previous notebook (**2_Training.ipynb**). To accomplish this, you must specify the names of the saved encoder and decoder files in the `models/` folder (e.g., these names should be `encoder-5.pkl` and `decoder-5.pkl`, if you trained the model for 5 epochs and saved the weights after each epoch).
### Task #2
Plug in both the embedding size and the size of the hidden layer of the decoder corresponding to the selected pickle file in `decoder_file`.
```
# Watch for any changes in model.py, and re-load it automatically.
%load_ext autoreload
%autoreload 2
import os
import torch
from model import EncoderCNN, DecoderRNN
# TODO #2: Specify the saved models to load.
encoder_file = "encoder-1.pkl"
decoder_file = "decoder-1.pkl"
# TODO #3: Select appropriate values for the Python variables below.
embed_size = 256 #512 #300
hidden_size = 512
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder, and set each to inference mode.
encoder = EncoderCNN(embed_size)
encoder.eval()
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
decoder.eval()
# Load the trained weights.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
# Move models to GPU if CUDA is available.
encoder.to(device)
decoder.to(device)
```
<a id='step3'></a>
## Step 3: Finish the Sampler
Before executing the next code cell, you must write the `sample` method in the `DecoderRNN` class in **model.py**. This method should accept as input a PyTorch tensor `features` containing the embedded input features corresponding to a single image.
It should return as output a Python list `output`, indicating the predicted sentence. `output[i]` is a nonnegative integer that identifies the predicted `i`-th token in the sentence. The correspondence between integers and tokens can be explored by examining either `data_loader.dataset.vocab.word2idx` (or `data_loader.dataset.vocab.idx2word`).
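If you want a reference point, one common approach is greedy decoding: repeatedly run the LSTM one step, take the arg-max word, embed it as the next input, and stop at the `<end>` token. The sketch below is only illustrative; it assumes the decoder exposes `lstm`, `linear` (hidden-to-vocab) and `embed` attributes under those names, which may differ from your `model.py`, and that the `<end>` token has index 1.

```
def sample(self, inputs, states=None, max_len=20):
    """Greedy decoding sketch: accepts pre-processed image features of shape
    (1, 1, embed_size) and returns a list of predicted token ids (Python ints)."""
    output = []
    for _ in range(max_len):
        hiddens, states = self.lstm(inputs, states)   # (1, 1, hidden_size)
        scores = self.linear(hiddens.squeeze(1))      # (1, vocab_size)
        predicted = scores.argmax(dim=1)              # most likely next token
        output.append(int(predicted.item()))
        if predicted.item() == 1:                     # assumed <end> token index
            break
        inputs = self.embed(predicted).unsqueeze(1)   # feed the token back in
    return output
```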
After implementing the `sample` method, run the code cell below. If the cell returns an assertion error, then please follow the instructions to modify your code before proceeding. Do **not** modify the code in the cell below.
```
# Move image Pytorch Tensor to GPU if CUDA is available.
image = image.to(device)
# Obtain the embedded image features.
features = encoder(image).unsqueeze(1)
# Pass the embedded image features through the model to get a predicted caption.
output = decoder.sample(features)
print('example output:', output)
assert (type(output)==list), "Output needs to be a Python list"
assert all([type(x)==int for x in output]), "Output should be a list of integers."
assert all([x in data_loader.dataset.vocab.idx2word for x in output]), "Each entry in the output needs to correspond to an integer that indicates a token in the vocabulary."
```
<a id='step4'></a>
## Step 4: Clean up the Captions
In the code cell below, complete the `clean_sentence` function. It should take a list of integers (corresponding to the variable `output` in **Step 3**) as input and return the corresponding predicted sentence (as a single Python string).
```
# TODO #4: Complete the function.
def clean_sentence(output):
    separator = " "
    word_list = []
    for word_index in output:
        if word_index not in [0, 2]:   # skip the <start> (0) and <unk> (2) tokens
            if word_index == 1:        # stop at the <end> (1) token
                break
            word = data_loader.dataset.vocab.idx2word[word_index]
            word_list.append(word)
    sentence = separator.join(word_list)
    return sentence
```
After completing the `clean_sentence` function above, run the code cell below. If the cell returns an assertion error, then please follow the instructions to modify your code before proceeding.
```
sentence = clean_sentence(output)
print('example sentence:', sentence)
assert type(sentence)==str, 'Sentence needs to be a Python string!'
```
<a id='step5'></a>
## Step 5: Generate Predictions!
In the code cell below, we have written a function (`get_prediction`) that you can use to loop over images in the test dataset and print your model's predicted caption.
```
def get_prediction():
orig_image, image = next(iter(data_loader))
plt.imshow(np.squeeze(orig_image))
plt.title('Sample Image')
plt.show()
image = image.to(device)
features = encoder(image).unsqueeze(1)
output = decoder.sample(features)
sentence = clean_sentence(output)
print(sentence)
```
Run the code cell below (multiple times, if you like!) to test how this function works.
```
get_prediction()
```
As the last task in this project, you will loop over the images until you find four image-caption pairs of interest:
- Two should include image-caption pairs that show instances when the model performed well.
- Two should include image-caption pairs that highlight instances where the model did not perform well.
Use the four code cells below to complete this task.
### The model performed well!
Use the next two code cells to loop over captions. Save the notebook when you encounter two images with relatively accurate captions.
```
get_prediction()
get_prediction()
```
### The model could have performed better ...
Use the next two code cells to loop over captions. Save the notebook when you encounter two images with relatively inaccurate captions.
```
get_prediction()
get_prediction()
```
| github_jupyter |
# Purpose: To run the full segmentation using the best scored method from 2_compare_auto_to_manual_threshold
Date Created: January 7, 2022
Date Edited: January 26, 2022 - changed the OGD severity study to use the Otsu data, as the Yen data did not run on all samples.
*Step 1: Import Necessary Packages*
```
# import major packages
import numpy as np
import matplotlib.pyplot as plt
import skimage
from PIL import Image
import os
import pandas as pd
# import specific package functions
from skimage.filters import threshold_otsu
from skimage import morphology
from scipy import ndimage
from skimage.measure import label
from skimage import io
from skimage import measure
```
__OGD Severity Study__
```
im_folder_location = '/Users/hhelmbre/Desktop/ogd_severity_undergrad/10_4_21_redownload/'
im_paths = []
files = []
for file in os.listdir(im_folder_location):
if file.endswith(".tif"):
file_name = os.path.join(im_folder_location, file)
files.append(file)
im_paths.append(file_name)
files
properties_list = ('area', 'bbox_area', 'centroid', 'convex_area',
'eccentricity', 'equivalent_diameter', 'euler_number',
'extent', 'filled_area', 'major_axis_length',
'minor_axis_length', 'orientation', 'perimeter', 'solidity')
source_dir = '/Users/hhelmbre/Desktop/microfiber/ogd_severity_segmentations/'
j = 0
for image in im_paths:
short_im_name = image.rsplit('/', 1)
short_im_name = short_im_name[1]
im = io.imread(image)
microglia_im = im[:,:,1]
#otsu threshold
thresh_otsu = skimage.filters.threshold_otsu(microglia_im)
binary_otsu = microglia_im > thresh_otsu
new_binary_otsu = morphology.remove_small_objects(binary_otsu, min_size=71)
new_binary_otsu = ndimage.binary_fill_holes(new_binary_otsu)
label_image = measure.label(new_binary_otsu)
props = measure.regionprops_table(label_image, properties=(properties_list))
np.save(str(source_dir + short_im_name[:-4] + '_otsu_thresh'), new_binary_otsu)
if j == 0:
df = pd.DataFrame(props)
df['filename'] = image
else:
df2 = pd.DataFrame(props)
df2['filename'] = image
df = df.append(df2)
j = 1
df['circularity'] = 4*np.pi*df.area/df.perimeter**2
df['aspect_ratio'] = df.major_axis_length/df.minor_axis_length
df
df.to_csv('/Users/hhelmbre/Desktop/microfiber/ogd_severity_study_features_otsu.csv' )
%load_ext watermark
%watermark -v -m -p numpy,pandas,scipy,skimage,matplotlib,wget
%watermark -u -n -t -z
```
| github_jupyter |
```
# TensorFlow pix2pix implementation
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import os
import time
from matplotlib import pyplot as plt
from IPython import display
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
PATH = "/Volumes/Data/projects/cs230/Project/RenderGAN/pix2pix/data/train_data/10-10000/AB/"
BUFFER_SIZE = 400
BATCH_SIZE = 1
IMG_WIDTH = 256
IMG_HEIGHT = 256
def load(image_file):
image = tf.io.read_file(image_file)
image = tf.image.decode_png(image)
w = tf.shape(image)[1]
w = w // 2
real_image = image[:, w:, :]
input_image = image[:, :w, :]
input_image = tf.cast(input_image, tf.float32)
real_image = tf.cast(real_image, tf.float32)
return input_image, real_image
inp, re = load(PATH+'train/8.png')
# scaling to [0, 1] so matplotlib can show the image
plt.figure()
plt.imshow(inp/255.0)
plt.figure()
plt.imshow(re/255.0)
def resize(input_image, real_image, height, width):
input_image = tf.image.resize(input_image, [height, width],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
real_image = tf.image.resize(real_image, [height, width],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
return input_image, real_image
def random_crop(input_image, real_image):
stacked_image = tf.stack([input_image, real_image], axis=0)
cropped_image = tf.image.random_crop(
stacked_image, size=[2, IMG_HEIGHT, IMG_WIDTH, 3])
return cropped_image[0], cropped_image[1]
# normalizing the images to [-1, 1]
def normalize(input_image, real_image):
input_image = (input_image / 127.5) - 1
real_image = (real_image / 127.5) - 1
return input_image, real_image
@tf.function()
def random_jitter(input_image, real_image):
# resizing to 286 x 286 x 3
input_image, real_image = resize(input_image, real_image, 286, 286)
# randomly cropping to 256 x 256 x 3
input_image, real_image = random_crop(input_image, real_image)
if tf.random.uniform(()) > 0.5:
# random mirroring
input_image = tf.image.flip_left_right(input_image)
real_image = tf.image.flip_left_right(real_image)
return input_image, real_image
plt.figure(figsize=(6, 6))
for i in range(4):
rj_inp, rj_re = random_jitter(inp, re)
plt.subplot(2, 2, i+1)
plt.imshow(rj_inp/255.0)
plt.axis('off')
plt.show()
def load_image_train(image_file):
input_image, real_image = load(image_file)
input_image, real_image = random_jitter(input_image, real_image)
input_image, real_image = normalize(input_image, real_image)
return input_image, real_image
def load_image_test(image_file):
input_image, real_image = load(image_file)
input_image, real_image = resize(input_image, real_image,
IMG_HEIGHT, IMG_WIDTH)
input_image, real_image = normalize(input_image, real_image)
return input_image, real_image
train_dataset = tf.data.Dataset.list_files(PATH+'train/*.png')
train_dataset = train_dataset.map(load_image_train,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.list_files(PATH+'test/*.png')
test_dataset = test_dataset.map(load_image_test)
test_dataset = test_dataset.batch(BATCH_SIZE)
OUTPUT_CHANNELS = 3
def downsample(filters, size, apply_batchnorm=True):
initializer = tf.random_normal_initializer(0., 0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2D(filters, size, strides=2, padding='same',
kernel_initializer=initializer, use_bias=False))
if apply_batchnorm:
result.add(tf.keras.layers.BatchNormalization())
result.add(tf.keras.layers.LeakyReLU())
return result
down_model = downsample(3, 4)
down_result = down_model(tf.expand_dims(inp, 0))
print (down_result.shape)
def upsample(filters, size, apply_dropout=False):
initializer = tf.random_normal_initializer(0., 0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False))
result.add(tf.keras.layers.BatchNormalization())
if apply_dropout:
result.add(tf.keras.layers.Dropout(0.5))
result.add(tf.keras.layers.ReLU())
return result
up_model = upsample(3, 4)
up_result = up_model(down_result)
print (up_result.shape)
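# The generator defined below is a U-Net: a stack of downsampling blocks
# followed by a stack of upsampling blocks, with skip connections that
# concatenate each encoder feature map onto the matching decoder layer.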
def Generator():
inputs = tf.keras.layers.Input(shape=[256,256,3])
down_stack = [
downsample(64, 4, apply_batchnorm=False), # (bs, 128, 128, 64)
downsample(128, 4), # (bs, 64, 64, 128)
downsample(256, 4), # (bs, 32, 32, 256)
downsample(512, 4), # (bs, 16, 16, 512)
downsample(512, 4), # (bs, 8, 8, 512)
downsample(512, 4), # (bs, 4, 4, 512)
downsample(512, 4), # (bs, 2, 2, 512)
downsample(512, 4), # (bs, 1, 1, 512)
]
up_stack = [
upsample(512, 4, apply_dropout=True), # (bs, 2, 2, 1024)
upsample(512, 4, apply_dropout=True), # (bs, 4, 4, 1024)
upsample(512, 4, apply_dropout=True), # (bs, 8, 8, 1024)
upsample(512, 4), # (bs, 16, 16, 1024)
upsample(256, 4), # (bs, 32, 32, 512)
upsample(128, 4), # (bs, 64, 64, 256)
upsample(64, 4), # (bs, 128, 128, 128)
]
initializer = tf.random_normal_initializer(0., 0.02)
last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4,
strides=2,
padding='same',
kernel_initializer=initializer,
activation='tanh') # (bs, 256, 256, 3)
x = inputs
# Downsampling through the model
skips = []
for down in down_stack:
x = down(x)
skips.append(x)
skips = reversed(skips[:-1])
# Upsampling and establishing the skip connections
for up, skip in zip(up_stack, skips):
x = up(x)
x = tf.keras.layers.Concatenate()([x, skip])
x = last(x)
return tf.keras.Model(inputs=inputs, outputs=x)
generator = Generator()
tf.keras.utils.plot_model(generator, show_shapes=True, dpi=64)
gen_output = generator(inp[tf.newaxis,...], training=False)
plt.imshow(gen_output[0,...])
LAMBDA = 100
def generator_loss(disc_generated_output, gen_output, target):
gan_loss = loss_object(tf.ones_like(disc_generated_output), disc_generated_output)
# mean absolute error
l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
total_gen_loss = gan_loss + (LAMBDA * l1_loss)
return total_gen_loss, gan_loss, l1_loss
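# The discriminator defined below is a PatchGAN: given the (input, target) pair
# it outputs a 30x30 grid of scores, each judging whether one patch of the
# image pair looks real or generated.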
def Discriminator():
initializer = tf.random_normal_initializer(0., 0.02)
inp = tf.keras.layers.Input(shape=[256, 256, 3], name='input_image')
tar = tf.keras.layers.Input(shape=[256, 256, 3], name='target_image')
x = tf.keras.layers.concatenate([inp, tar]) # (bs, 256, 256, channels*2)
down1 = downsample(64, 4, False)(x) # (bs, 128, 128, 64)
down2 = downsample(128, 4)(down1) # (bs, 64, 64, 128)
down3 = downsample(256, 4)(down2) # (bs, 32, 32, 256)
zero_pad1 = tf.keras.layers.ZeroPadding2D()(down3) # (bs, 34, 34, 256)
conv = tf.keras.layers.Conv2D(512, 4, strides=1,
kernel_initializer=initializer,
use_bias=False)(zero_pad1) # (bs, 31, 31, 512)
batchnorm1 = tf.keras.layers.BatchNormalization()(conv)
leaky_relu = tf.keras.layers.LeakyReLU()(batchnorm1)
zero_pad2 = tf.keras.layers.ZeroPadding2D()(leaky_relu) # (bs, 33, 33, 512)
last = tf.keras.layers.Conv2D(1, 4, strides=1,
kernel_initializer=initializer)(zero_pad2) # (bs, 30, 30, 1)
return tf.keras.Model(inputs=[inp, tar], outputs=last)
discriminator = Discriminator()
tf.keras.utils.plot_model(discriminator, show_shapes=True, dpi=64)
disc_out = discriminator([inp[tf.newaxis,...], gen_output], training=False)
plt.imshow(disc_out[0,...,-1], vmin=-20, vmax=20, cmap='RdBu_r')
plt.colorbar()
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(disc_real_output, disc_generated_output):
real_loss = loss_object(tf.ones_like(disc_real_output), disc_real_output)
generated_loss = loss_object(tf.zeros_like(disc_generated_output), disc_generated_output)
total_disc_loss = real_loss + generated_loss
return total_disc_loss
generator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
def generate_images(model, test_input, tar):
prediction = model(test_input, training=True)
plt.figure(figsize=(15,15))
display_list = [test_input[0], tar[0], prediction[0]]
title = ['Input Image', 'Ground Truth', 'Predicted Image']
for i in range(3):
plt.subplot(1, 3, i+1)
plt.title(title[i])
# getting the pixel values between [0, 1] to plot it.
plt.imshow(display_list[i] * 0.5 + 0.5)
plt.axis('off')
plt.show()
for example_input, example_target in test_dataset.take(1):
generate_images(generator, example_input, example_target)
EPOCHS = 10
import datetime
log_dir="logs/"
summary_writer = tf.summary.create_file_writer(
log_dir + "fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
@tf.function
def train_step(input_image, target, epoch):
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
gen_output = generator(input_image, training=True)
disc_real_output = discriminator([input_image, target], training=True)
disc_generated_output = discriminator([input_image, gen_output], training=True)
gen_total_loss, gen_gan_loss, gen_l1_loss = generator_loss(disc_generated_output, gen_output, target)
disc_loss = discriminator_loss(disc_real_output, disc_generated_output)
generator_gradients = gen_tape.gradient(gen_total_loss,
generator.trainable_variables)
discriminator_gradients = disc_tape.gradient(disc_loss,
discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(generator_gradients,
generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(discriminator_gradients,
discriminator.trainable_variables))
with summary_writer.as_default():
tf.summary.scalar('gen_total_loss', gen_total_loss, step=epoch)
tf.summary.scalar('gen_gan_loss', gen_gan_loss, step=epoch)
tf.summary.scalar('gen_l1_loss', gen_l1_loss, step=epoch)
tf.summary.scalar('disc_loss', disc_loss, step=epoch)
def fit(train_ds, epochs, test_ds):
for epoch in range(epochs):
start = time.time()
display.clear_output(wait=True)
for example_input, example_target in test_ds.take(1):
generate_images(generator, example_input, example_target)
print("Epoch: ", epoch)
# Train
for n, (input_image, target) in train_ds.enumerate():
print('.', end='')
if (n+1) % 100 == 0:
print()
train_step(input_image, target, epoch)
print()
# saving (checkpoint) the model every 20 epochs
if (epoch + 1) % 20 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time taken for epoch {} is {} sec\n'.format(epoch + 1,
time.time()-start))
checkpoint.save(file_prefix = checkpoint_prefix)
%load_ext tensorboard
%tensorboard --logdir {log_dir}
fit(train_dataset, EPOCHS, test_dataset)
!ls {checkpoint_dir}
```
| github_jupyter |
# Basic Workflow
```
# Always have your imports at the top
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.base import TransformerMixin
from hashlib import sha1 # just for grading purposes
import json # just for grading purposes
def _hash(obj, salt='none'):
if type(obj) is not str:
obj = json.dumps(obj)
to_encode = obj + salt
return sha1(to_encode.encode()).hexdigest()
```
# Workflow steps
What are the basic workflow steps?
It's incredibly obvious what the steps are, since you can see them graded in plain text. However, we deem it worth actually making you type each one of the steps and take a moment to think about and internalize them.
Please do actually type them rather than just copy-pasting as fast as you can. Type it out character by character and internalize.
```
# step_1 = ...
# step_2 = ...
# step_2_a = ...
# step_2_b = ...
# step_2_c = ...
# step_2_d = ...
# step_3 = ...
# step_4 = ...
# step_5 = ...
# YOUR CODE HERE
raise NotImplementedError()
### BEGIN TESTS
assert step_1 == 'Get the data'
assert step_2 == 'Data analysis and preparation'
assert step_2_a == 'Data analysis'
assert step_2_b == 'Dealing with data problems'
assert step_2_c == 'Feature engineering'
assert step_2_d == 'Feature selection'
assert step_3 == 'Train model'
assert step_4 == 'Evaluate results'
assert step_5 == 'Iterate'
### END TESTS
```
# Specific workflow questions
Here are some more specific questions about individual workflow steps.
```
# True or False, it's super easy to gather your dataset in a production environment
# real_world_dataset_gathering_easy = ...
# True or False, it's super easy to gather your dataset in the context of the academy
# academy_dataset_gathering_easy = ...
# True or False, you should try as hard as you can to get the best possible score
# on your test set by iterating until you can't get your test set score any higher
# by any means possible
# test_set_optimization_is_good = ...
# True or False, you should choose one metric by which to evaluate your model and
# never consider using another one
# one_metric_should_rule_them_all = ...
# YOUR CODE HERE
raise NotImplementedError()
### BEGIN TESTS
assert _hash(real_world_dataset_gathering_easy, 'salt1') == '63b5b9a8f2d359e1fc175c3b01b907ef87590484'
assert _hash(academy_dataset_gathering_easy, 'salt2') == 'dd7dee495a153c95d28c7aa95289c0415242f5d8'
assert _hash(test_set_optimization_is_good, 'salt3') == 'f24a294afb4a09f7f9df9ee13eb18e7d341c439d'
assert _hash(one_metric_should_rule_them_all, 'salt4') == '2360691a582e4f0fbefa238ab6ced1cbfbfe8a50'
### END TESTS
```
# scikit pipelines
Make a simple pipeline that does the following (one possible sketch appears after the list):
1. Drops all columns that start with the string `evil`
1. Fills all nulls with the median
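For orientation, here is one possible sketch of such a pipeline, not necessarily the graded solution. It assumes the input is a pandas DataFrame; the imputation strategy shown (median, as described above) is an assumption you may need to adjust.

```
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

class RemoveEvilColumns(BaseEstimator, TransformerMixin):
    """Drop every column whose name starts with 'evil'."""

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X.loc[:, [c for c in X.columns if not c.startswith('evil')]]

pipeline = make_pipeline(
    RemoveEvilColumns(),
    SimpleImputer(strategy='median'),
    RandomForestClassifier(),
)
```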
```
# Create a pipeline step called RemoveEvilColumns that removes any
# column whose name starts with the string 'evil'
# YOUR CODE HERE
raise NotImplementedError()
# Create a pipeline using make_pipeline
# 1. removes evil columns
# 2. imputes with the mean
# 3. has a random forest classifier as the last step
# YOUR CODE HERE
raise NotImplementedError()
X = pd.DataFrame({
'evil_1': ['a'] * 100,
'evil_2': ['b'] * 100,
'not_so_evil': list(range(0, 100))
})
y = pd.Series([x % 2 for x in range(0, 100)])
pipeline.fit(X, y)
### BEGIN TESTS
assert pipeline.steps[0][0] == 'removeevilcolumns', pipeline.steps[0][0]
assert pipeline.steps[1][0] == 'simpleimputer', pipeline.steps[1][0]
assert pipeline.steps[2][0] == 'randomforestclassifier', pipeline.steps[2][0]
### END TESTS
```
| github_jupyter |
# Lab 11: MLP -- exercise
# Understanding the training loop
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
```
### Download the data and print the sizes
```
train_data=torch.load('../data/fashion-mnist/train_data.pt')
print(train_data.size())
train_label=torch.load('../data/fashion-mnist/train_label.pt')
print(train_label.size())
test_data=torch.load('../data/fashion-mnist/test_data.pt')
print(test_data.size())
```
### Make a ONE layer net class. The network outputs the scores! No softmax needed! You have only one line to write in the forward function
```
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
self.linear_layer = nn.Linear(input_size, output_size, bias=False)# complete here
def forward(self, x):
scores = self.linear_layer(x) # complete here
return scores
```
### Build the net
```
net= one_layer_net(784,10)# complete here
print(net)
```
### Choose the criterion and the optimizer: use the CHEAT SHEET to see the correct syntax.
### Remember that the optimizer needs to have access to the parameters of the network (net.parameters()).
### Set the batch size and learning rate to be:
### batch size = 50
### learning rate = 0.01
```
# make the criterion
criterion = nn.CrossEntropyLoss()# complete here
# make the SGD optimizer.
optimizer = torch.optim.SGD(net.parameters(), lr=0.01) # complete here
# set up the batch size
bs=50
```
### Complete the training loop
```
for iter in range(1,5000):
# Set dL/dU, dL/dV, dL/dW to be filled with zeros
optimizer.zero_grad()
# create a minibatch
indices = torch.LongTensor(bs).random_(0,60000)
minibatch_data = train_data[indices]
minibatch_label = train_label[indices]
# reshape the minibatch
inputs = minibatch_data.view(bs, 784)
# tell Pytorch to start tracking all operations that will be done on "inputs"
inputs.requires_grad_()
# forward the minibatch through the net
scores = net(inputs)
# Compute the average of the losses of the data points in the minibatch
loss = criterion(scores, minibatch_label)
# backward pass to compute dL/dU, dL/dV and dL/dW
loss.backward()
# do one step of stochastic gradient descent: U=U-lr(dL/dU), V=V-lr(dL/dU), ...
optimizer.step()
```
### Choose an image at random from the test set and see how good or bad the predictions are
```
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
scores = net( im.view(1,784))
probs= F.softmax(scores, dim=1)
utils.show_prob_fashion_mnist(probs)
```
| github_jupyter |
```
# Copyright 2020 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
<img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;">
# NVTabular demo on Rossmann data - TensorFlow
## Overview
NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems. It provides a high level abstraction to simplify code and accelerates computation on the GPU using the RAPIDS cuDF library.
### Learning objectives
In the previous notebooks ([rossmann-store-sales-preproc.ipynb](https://github.com/NVIDIA/NVTabular/blob/main/examples/rossmann/rossmann-store-sales-preproc.ipynb) and [rossmann-store-sales-feature-engineering.ipynb](https://github.com/NVIDIA/NVTabular/blob/main/examples/rossmann/rossmann-store-sales-feature-engineering.ipynb)), we downloaded, preprocessed and created features for the dataset. Now, we are ready to train our deep learning model on the dataset. In this notebook, we use **TensorFlow** with the NVTabular data loader for TensorFlow to accelerate the training pipeline.
```
import os
import math
import json
import nvtabular as nvt
import glob
```
## Loading NVTabular workflow
This time, we only need to define our data directories. We can load the data schema from the NVTabular workflow.
```
DATA_DIR = os.environ.get("OUTPUT_DATA_DIR", "./data")
INPUT_DATA_DIR = os.environ.get("INPUT_DATA_DIR", "./data")
PREPROCESS_DIR = os.path.join(INPUT_DATA_DIR, 'ross_pre/')
PREPROCESS_DIR_TRAIN = os.path.join(PREPROCESS_DIR, 'train')
PREPROCESS_DIR_VALID = os.path.join(PREPROCESS_DIR, 'valid')
```
What files are available to train on in our directories?
```
!ls $PREPROCESS_DIR
!ls $PREPROCESS_DIR_TRAIN
!ls $PREPROCESS_DIR_VALID
```
We load the data schema and statistic information from `stats.json`. We created the file in the previous notebook `rossmann-store-sales-feature-engineering`.
```
stats = json.load(open(PREPROCESS_DIR + "/stats.json", "r"))
CATEGORICAL_COLUMNS = stats['CATEGORICAL_COLUMNS']
CONTINUOUS_COLUMNS = stats['CONTINUOUS_COLUMNS']
LABEL_COLUMNS = stats['LABEL_COLUMNS']
COLUMNS = CATEGORICAL_COLUMNS + CONTINUOUS_COLUMNS + LABEL_COLUMNS
```
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
```
EMBEDDING_TABLE_SHAPES = stats['EMBEDDING_TABLE_SHAPES']
EMBEDDING_TABLE_SHAPES
```
## Training a Network
Now that our data is preprocessed and saved out, we can leverage `dataset`s to read through the preprocessed parquet files in an online fashion to train neural networks.
We'll start by setting some universal hyperparameters for our model and optimizer. These settings will be the same across all of the frameworks that we explore in the different notebooks.
If you're interested in contributing to NVTabular, feel free to take this challenge on and submit a pull request if successful. 12% RMSPE is achievable using the Novograd optimizer, but we know of no Novograd implementation for TensorFlow that supports sparse gradients, and so we are not including that solution below.
```
EMBEDDING_DROPOUT_RATE = 0.04
DROPOUT_RATES = [0.001, 0.01]
HIDDEN_DIMS = [1000, 500]
BATCH_SIZE = 65536
LEARNING_RATE = 0.001
EPOCHS = 25
# TODO: Calculate on the fly rather than recalling from previous analysis.
MAX_SALES_IN_TRAINING_SET = 38722.0
MAX_LOG_SALES_PREDICTION = 1.2 * math.log(MAX_SALES_IN_TRAINING_SET + 1.0)
TRAIN_PATHS = sorted(glob.glob(os.path.join(PREPROCESS_DIR_TRAIN, '*.parquet')))
VALID_PATHS = sorted(glob.glob(os.path.join(PREPROCESS_DIR_VALID, '*.parquet')))
```
## TensorFlow
<a id="TensorFlow"></a>
### TensorFlow: Preparing Datasets
`KerasSequenceLoader` wraps a lightweight iterator around a `dataset` object to handle chunking, shuffling, and application of any workflows (which can be applied online as a preprocessing step). For column names, you can use either a list of string names or a list of TensorFlow `feature_columns` that will be used to feed the network.
```
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# it's too late and TF will have claimed all free GPU memory
os.environ['TF_MEMORY_ALLOCATION'] = "8192" # explicit MB
os.environ['TF_MEMORY_ALLOCATION'] = "0.5" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
# cheap wrapper to keep things some semblance of neat
def make_categorical_embedding_column(name, dictionary_size, embedding_dim):
return tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(name, dictionary_size),
embedding_dim
)
# instantiate our columns
categorical_columns = [
make_categorical_embedding_column(name, *EMBEDDING_TABLE_SHAPES[name]) for
name in CATEGORICAL_COLUMNS
]
continuous_columns = [
tf.feature_column.numeric_column(name, (1,)) for name in CONTINUOUS_COLUMNS
]
# feed them to our datasets
train_dataset = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
feature_columns=categorical_columns+continuous_columns,
batch_size=BATCH_SIZE,
label_names=LABEL_COLUMNS,
shuffle=True,
buffer_size=0.06 # amount of data, as a fraction of GPU memory, to load at once
)
valid_dataset = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
feature_columns=categorical_columns+continuous_columns,
batch_size=BATCH_SIZE*4,
label_names=LABEL_COLUMNS,
shuffle=False,
buffer_size=0.06 # amount of data, as a fraction of GPU memory, to load at once
)
```
### TensorFlow: Defining a Model
Using Keras, we can define the layers of our model and their parameters explicitly. Here, for the sake of consistency, we'll mimic fast.ai's [TabularModel](https://docs.fast.ai/tabular.learner.html).
```
# DenseFeatures layer needs a dictionary of {feature_name: input}
categorical_inputs = {}
for column_name in CATEGORICAL_COLUMNS:
categorical_inputs[column_name] = tf.keras.Input(name=column_name, shape=(1,), dtype=tf.int64)
categorical_embedding_layer = tf.keras.layers.DenseFeatures(categorical_columns)
categorical_x = categorical_embedding_layer(categorical_inputs)
categorical_x = tf.keras.layers.Dropout(EMBEDDING_DROPOUT_RATE)(categorical_x)
# Just concatenating continuous, so can use a list
continuous_inputs = []
for column_name in CONTINUOUS_COLUMNS:
continuous_inputs.append(tf.keras.Input(name=column_name, shape=(1,), dtype=tf.float32))
continuous_embedding_layer = tf.keras.layers.Concatenate(axis=1)
continuous_x = continuous_embedding_layer(continuous_inputs)
continuous_x = tf.keras.layers.BatchNormalization(epsilon=1e-5, momentum=0.1)(continuous_x)
# concatenate and build MLP
x = tf.keras.layers.Concatenate(axis=1)([categorical_x, continuous_x])
for dim, dropout_rate in zip(HIDDEN_DIMS, DROPOUT_RATES):
x = tf.keras.layers.Dense(dim, activation='relu')(x)
x = tf.keras.layers.BatchNormalization(epsilon=1e-5, momentum=0.1)(x)
x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.Dense(1, activation='linear')(x)
# TODO: Initialize model weights to fix saturation issues.
# For now, we'll just scale the output of our model directly before
# hitting the sigmoid.
x = 0.1 * x
x = MAX_LOG_SALES_PREDICTION * tf.keras.activations.sigmoid(x)
# combine all our inputs into a single list
# (note that you can still use .fit, .predict, etc. on a dict
# that maps input tensor names to input values)
inputs = list(categorical_inputs.values()) + continuous_inputs
tf_model = tf.keras.Model(inputs=inputs, outputs=x)
```
### TensorFlow: Training
```
def rmspe_tf(y_true, y_pred):
# map back into "true" space by undoing transform
y_true = tf.exp(y_true) - 1
y_pred = tf.exp(y_pred) - 1
percent_error = (y_true - y_pred) / y_true
return tf.sqrt(tf.reduce_mean(percent_error**2))
%%time
from time import time
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
tf_model.compile(optimizer, 'mse', metrics=[rmspe_tf])
validation_callback = KerasSequenceValidater(valid_dataset)
start = time()
history = tf_model.fit(
train_dataset,
callbacks=[validation_callback],
epochs=EPOCHS,
)
t_final = time() - start
total_rows = train_dataset.num_rows_processed + valid_dataset.num_rows_processed
print(f"run_time: {t_final} - rows: {total_rows} - epochs: {EPOCHS} - dl_thru: { (EPOCHS * total_rows) / t_final}")
```
| github_jupyter |
## Main points
* Solution should be reasonably simple because the contest is only 24 hours long
* Metric is based on the prediction of clicked pictures one week ahead, so clicks are the most important information
* More recent information is more important
* Only pictures that were shown to a user could be clicked, so picture popularity is important
* Metric is MAPK@100
* Link https://contest.yandex.ru/contest/12899/problems (Russian)
## Plan
* Build a classic recommender system based on user click history
* Only use recent days of historical data
* Take into consideration projected picture popularity
## Magic constants
### ALS recommender system:
```
# Factors for ALS
factors_count=100
# Last days of click history used
trail_days=14
# number of best candidates generated by ALS
output_candidates_count=2000
# Last days of history with more weight
last_days=1
# Coefficient for additional weight
last_days_weight=4
```
## Popular pictures prediction model:
```
import lightgbm
lightgbm.__version__
popularity_model = lightgbm.LGBMRegressor(seed=0)
heuristic_alpha = 0.2
import datetime
import tqdm
import pandas as pd
from scipy.sparse import coo_matrix
import implicit
implicit.__version__
test_users = pd.read_csv('Blitz/test_users.csv')
data = pd.read_csv('Blitz/train_clicks.csv', parse_dates=['day'])
```
## Split last 7 days to calculate clicks similar to test set
```
train, target_week = (
data[data.day <= datetime.datetime(2019, 3, 17)].copy(),
data[data.day > datetime.datetime(2019, 3, 17)],
)
train.day.nunique(), target_week.day.nunique()
last_date = train.day.max()
train.loc[:, 'delta_days'] = 1 + (last_date - train.day).apply(lambda d: d.days)
last_date = data.day.max()
data.loc[:, 'delta_days'] = 1 + (last_date - data.day).apply(lambda d: d.days)
def picture_features(data):
"""Generating clicks count for every picture in last days"""
days = range(1, 3)
features = []
names = []
for delta_days in days:
features.append(
data[(data.delta_days == delta_days)].groupby(['picture_id'])['user_id'].count()
)
names.append('%s_%d' % ('click', delta_days))
features = pd.concat(features, axis=1).fillna(0)
features.columns = names
features = features.reindex(data.picture_id.unique())
return features.fillna(0)
X = picture_features(train)
X.mean(axis=0)
def clicks_count(data, index):
return data.groupby('picture_id')['user_id'].count().reindex(index).fillna(0)
y = clicks_count(target_week, X.index)
y.shape, y.mean()
```
## Train a model predicting popular pictures next week
```
popularity_model.fit(X, y)
X_test = picture_features(data)
X_test.mean(axis=0)
X_test['p'] = popularity_model.predict(X_test)
X_test.loc[X_test['p'] < 0, 'p'] = 0
X_test['p'].mean()
```
## Generate dict with predicted clicks for every picture
```
# This prediction would be used to correct recommender score
picture = dict(X_test['p'])
```
# Recommender part
## Generate prediction using ALS approach
```
import os
os.environ['OPENBLAS_NUM_THREADS'] = "1"
def als_baseline(
train, test_users,
factors_n, last_days, trail_days, output_candidates_count, last_days_weight
):
train = train[train.delta_days <= trail_days].drop_duplicates([
'user_id', 'picture_id'
])
users = train.user_id
items = train.picture_id
weights = 1 + last_days_weight * (train.delta_days <= last_days)
user_item = coo_matrix((weights, (users, items)))
model = implicit.als.AlternatingLeastSquares(factors=factors_n, iterations=factors_n)
model.fit(user_item.T.tocsr())
user_item_csr = user_item.tocsr()
rows = []
for user_id in tqdm.tqdm_notebook(test_users.user_id.values):
items = [(picture_id, score) for picture_id, score in model.recommend(user_id, user_item_csr, N=output_candidates_count)]
rows.append(items)
test_users['predictions_full'] = [
p
for p, user_id in zip(
rows,
test_users.user_id.values
)
]
test_users['predictions'] = [
[x[0] for x in p]
for p, user_id in zip(
rows,
test_users.user_id.values
)
]
return test_users
test_users = als_baseline(
data, test_users, factors_count, last_days, trail_days, output_candidates_count, last_days_weight)
```
## Calculate historical clicks to exclude them from the results. Such clicks are excluded from the test set according to the task
```
clicked = data.groupby('user_id').agg({'picture_id': set})
def substract_clicked(p, c):
filtered = [picture for picture in p if picture not in c][:100]
return filtered
```
## Heuristic approach to reweight the ALS score according to predicted picture popularity
The recommender returns (picture, score) pairs, sorted by decreasing score, for every user.
For every user we replace each picture's $score_p$ with $score_p \cdot (1 + popularity_{p})^{0.2}$,
where $popularity_{p}$ is the popularity predicted for this picture for the next week.
This slightly moves popular pictures towards the top of the list for every user.
```
import math
rows = test_users['predictions_full']
def correct_with_popularity(items, picture, alpha):
return sorted([
(score * (1 + picture.get(picture_id, 0)) ** alpha, picture_id, score, picture.get(picture_id, 0))
for picture_id, score in items], reverse=True
)
corrected_rows = [
[x[1] for x in correct_with_popularity(items, picture, heuristic_alpha)]
for items in rows
]
```
## Submission formatting
```
test_users['predictions'] = [
' '.join(map(str,
substract_clicked(p, {} if user_id not in clicked.index else clicked.loc[user_id][0])
))
for p, user_id in zip(
corrected_rows,
test_users.user_id.values
)
]
test_users[['user_id', 'predictions']].to_csv('submit.csv', index=False)
```
| github_jupyter |
This challenge implements an instantiation of OTR based on the AES block cipher, following a modified version 1.0 of the scheme. OTR, which stands for Offset Two-Round, is a blockcipher mode of operation that realizes authenticated encryption with associated data (see [[1]](#1)). The AES-OTR algorithm is a candidate in the CAESAR competition; it successfully entered the third round of screening by virtue of its unique advantages, and you can see the full algorithm and structure of AES-OTR in the design document (see [[2]](#2)).
However, the first version is vulnerable to forgery attacks under known-plaintext conditions when the associated data and the public message number are reused; several attacks can be applied here to forge an expected ciphertext with a valid tag (see [[3]](#3)).
For example, in this challenge we can build the following three plaintexts:
```
M_0 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_1 = [b'Uid=16111\xffUserNa', b'me=Administrator', b'r\xffT=11111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_2 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_Flag', b'g\xff??????????????']
```
Here `'111111111111'` can represent any value, since the server won't check whether the message and its corresponding hash value match; we just need to make sure the fields have the right length. If you look closely, you will find that none of the three plaintexts contains illegal fields, so we can easily use the encryption oracle provided by the server to get their corresponding ciphertexts. Next, notice that these plaintexts satisfy:
```
from Crypto.Util.strxor import strxor
M_0 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_1 = [b'Uid=16111\xffUserNa', b'me=Administrator', b'r\xffT=11111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_2 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_Flag', b'g\xff??????????????']
strxor(M_0[1], M_0[3]) == strxor(M_1[1], M_2[3])
```
So, following the forgery attacks described in [[3]](#3), if their corresponding ciphertexts are `C_0`, `C_1` and `C_2`, then we can forge a valid ciphertext and tag using:
```
from Toy_AE import Toy_AE
def unpack(r):
data = r.split(b"\xff")
uid, uname, token, cmd, appendix = int(data[0][4:]), data[1][9:], data[2][2:], data[3][4:], data[4]
return (uid, uname, token, cmd, appendix)
ae = Toy_AE()
M_0 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_1 = [b'Uid=16111\xffUserNa', b'me=Administrator', b'r\xffT=11111111111\xff', b'Cmd=Give_Me_FlaG', b'\xff???????????????']
M_2 = [b'Uid=16112\xffUserNa', b'me=AdministratoR', b'\xffT=111111111111\xff', b'Cmd=Give_Me_Flag', b'g\xff??????????????']
C_0, T_0 = ae.encrypt(b''.join(M_0))
C_1, T_1 = ae.encrypt(b''.join(M_1))
C_2, T_2 = ae.encrypt(b''.join(M_2))
C_forge = C_1[:32] + C_2[32:64] + C_0[64:]
T_forge = T_0
_, uname, _, cmd, _ = unpack(ae.decrypt(C_forge, T_forge))
uname == b"Administrator" and cmd == b"Give_Me_Flag"
```
Here is my final exp:
```
import string
from pwn import *
from hashlib import sha256
from Crypto.Util.strxor import strxor
from Crypto.Util.number import long_to_bytes, bytes_to_long
def bypass_POW(io):
chall = io.recvline()
post = chall[14:30]
tar = chall[38:-2]
io.recvuntil(':')
found = iters.bruteforce(lambda x:sha256((x + post.decode()).encode()).hexdigest() == tar.decode(), string.ascii_letters + string.digits, 4)
io.sendline(found.encode())
C = []
T = []
io = remote("123.57.4.93", 45216)
bypass_POW(io)
io.sendlineafter(b"Your option:", '1')
io.sendlineafter(b"Set up your user id:", '16108')
io.sendlineafter(b"Your username:", 'AdministratoR')
io.sendlineafter(b"Your command:", 'Give_Me_FlaG')
io.sendlineafter(b"Any Appendix?", "???????????????")
_ = io.recvuntil(b"Your ticket:")
C.append(long_to_bytes(int(io.recvline().strip(), 16)))
_ = io.recvuntil(b"With my Auth:")
T.append(long_to_bytes(int(io.recvline().strip(), 16)))
io.sendlineafter(b"Your option:", '1')
io.sendlineafter(b"Set up your user id:", '16107')
io.sendlineafter(b"Your username:", 'Administratorr')
io.sendlineafter(b"Your command:", 'Give_Me_FlaG')
io.sendlineafter(b"Any Appendix?", "???????????????")
_ = io.recvuntil(b"Your ticket:")
C.append(long_to_bytes(int(io.recvline().strip(), 16)))
_ = io.recvuntil(b"With my Auth:")
T.append(long_to_bytes(int(io.recvline().strip(), 16)))
io.sendlineafter(b"Your option:", '1')
io.sendlineafter(b"Set up your user id:", '16108')
io.sendlineafter(b"Your username:", 'AdministratoR')
io.sendlineafter(b"Your command:", 'Give_Me_Flagg')
io.sendlineafter(b"Any Appendix?", "??????????????")
_ = io.recvuntil(b"Your ticket:")
C.append(long_to_bytes(int(io.recvline().strip(), 16)))
_ = io.recvuntil(b"With my Auth:")
T.append(long_to_bytes(int(io.recvline().strip(), 16)))
ct = (C[1][:32] + C[2][32:64] + C[0][64:]).hex()
te = T[0].hex()
io.sendlineafter(b"Your option:", '2')
io.sendlineafter(b"Ticket:", ct)
io.sendlineafter(b"Auth:", te)
flag = io.recvline().strip().decode()
print(flag)
```
b'X-NUCA{Gentlem3n_as_0f_th1s_mOment_I aM_th4t_sec0nd_mouse}'
**P.S.**
* The version used in this challenge is v1.0; some vulnerabilities have been fixed in subsequent versions (v2.0, v3.0 and v3.1), and you can see the final version at [[4]](#4). Also, for some attacks on the newer versions, see [[5]](#5) and [[6]](#6).
* The content of the FLAG is a quote from movie *Catch Me If You Can* "Two little mice fell in a bucket of cream. The first mouse quickly gave up and drowned. The second mouse, wouldn't quit. He struggled so hard that eventually he churned that cream into butter and crawled out. Gentlemen, as of this moment, I am that second mouse."
**References**
<a id="1" href="https://eprint.iacr.org/2013/628.pdf"> [1] Minematsu K. Parallelizable rate-1 authenticated encryption from pseudorandom functions[C]//Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, Berlin, Heidelberg, 2014: 275-292.</a>
<a id="2" href="https://competitions.cr.yp.to/round1/aesotrv1.pdf"> [2] Minematsu K. AES-OTR v1 design document.</a>
<a id="3" href="http://www.shcas.net/jsjyup/pdf/2017/10/对认证加密算法AES-OTR的伪造攻击.pdf"> [3] Xiulin Zheng, Yipeng Fu, Haiyan Song. Forging attacks on authenticated encryption algorithm AES-OTR[J]. Computer Applications and Software, 2017, 034(010):320-324,329.</a>
<a id="4" href="https://competitions.cr.yp.to/round1/aesotrv1.pdf"> [4] Minematsu K. AES-OTR v3.1 design document.</a>
<a id="5" href="https://eprint.iacr.org/2017/332.pdf">[5] Forler, Christian, et al. "Reforgeability of authenticated encryption schemes." Australasian Conference on Information Security and Privacy. Springer, Cham, 2017.</a>
<a id="6" href="https://eprint.iacr.org/2017/1147.pdf">[6] Vaudenay, Serge, and Damian Vizár. "Under Pressure: Security of Caesar Candidates beyond their Guarantees." IACR Cryptol. ePrint Arch. 2017 (2017): 1147.</a>
| github_jupyter |
```
import os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import torch
from sklearn.model_selection import train_test_split
from sklearn.ensemble import IsolationForest
cubism_path = "/home/hopkinsl/Downloads/wikiart/wikiart/Cubism"
listdir = os.listdir(cubism_path)
image_names = []
labels = []
for file in listdir:
if file.startswith('pablo-picasso'):
image_names.append(file)
labels.append(0)
elif file.startswith('marevna'):
image_names.append(file)
labels.append(1)
elif file.startswith('fernand-leger'):
image_names.append(file)
labels.append(2)
print(labels.count(0))
print(labels.count(1))
print(labels.count(2))
print(listdir[0])
# Reading, reshaping, and normalizing images
# (assumes all images share the same dimensions; resize beforehand if they don't)
image_list = []
for file in image_names:
    img = plt.imread(os.path.join(cubism_path, file))
    image_list.append(torch.tensor(img, dtype=torch.float32))
images = torch.stack(image_list)
images = images.reshape(len(images), -1)
images = (images - images.mean()) / images.std()
labels = torch.tensor(labels)
images_train, images_test, labels_train, labels_test = train_test_split(images, labels, test_size=0.33, random_state=42)
class MLP(torch.nn.Module):
    # this defines the model
    def __init__(self, sizes):
        super(MLP, self).__init__()
        print(sizes)
        self.sizes = sizes
        # ModuleList so the linear layers are registered as model parameters
        self.layers = torch.nn.ModuleList(
            [torch.nn.Linear(sizes[i], sizes[i+1]) for i in range(len(sizes)-1)])
        self.sigmoid = torch.nn.Sigmoid()
        self.relu = torch.nn.ReLU()
        self.softmax = torch.nn.Softmax(dim=1)

    def forward(self, x):
        temp_activation = x
        temp_layer = self.layers[0](temp_activation)
        for i in range(1, len(self.layers)):
            temp_activation = self.sigmoid(temp_layer)
            temp_layer = self.layers[i](temp_activation)
        # return raw scores (logits); CrossEntropyLoss applies softmax internally,
        # and self.softmax can be applied explicitly at inference time if needed
        return temp_layer
def train_model(training_data, test_data,training_labels,test_labels, model):
# define the optimization
    # CrossEntropyLoss fits the 3-class integer labels (BCELoss expects binary targets)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.007, momentum=0.9)
for epoch in range(10):
# clear the gradient
optimizer.zero_grad()
# compute the model output
myoutput = model(training_data)
# calculate loss
loss = criterion(myoutput, training_labels)
# credit assignment
loss.backward()
# update model weights
optimizer.step()
# STUDENTS ADD THIS PART
output_test = model(test_data)
loss_test = criterion(output_test, test_labels)
plt.plot(epoch,loss.detach().numpy(),'ko')
plt.plot(epoch,loss_test.detach().numpy(),'ro')
print(epoch,loss.detach().numpy())
plt.show()
# output size 3 = number of artist classes (Picasso, Marevna, Léger)
sizes = [len(images[0, :]), 30, 30, 3]
train_model(images_train, images_test, labels_train, labels_test, MLP(sizes))
```
| github_jupyter |
### Easy string manipulation
```
x = 'a string'
y = "a string"
if x == y:
print("they are the same")
fox = "tHe qUICk bROWn fOx."
```
To convert the entire string into upper-case or lower-case, you can use the ``upper()`` or ``lower()`` methods respectively:
```
fox.upper()
fox.lower()
```
A common formatting need is to capitalize just the first letter of each word, or perhaps the first letter of each sentence.
This can be done with the ``title()`` and ``capitalize()`` methods:
```
fox.title()
fox.capitalize()
```
The cases can be swapped using the ``swapcase()`` method:
```
fox.swapcase()
line = ' this is the content '
line.strip()
```
To remove just space to the right or left, use ``rstrip()`` or ``lstrip()`` respectively:
```
line.rstrip()
line.lstrip()
```
To remove characters other than spaces, you can pass the desired character to the ``strip()`` method:
```
num = "000000000000435"
num.strip('0')
line = 'the quick brown fox jumped over a lazy dog'
line.find('fox')
line.index('fox')
line[16:21]
```
The only difference between ``find()`` and ``index()`` is their behavior when the search string is not found; ``find()`` returns ``-1``, while ``index()`` raises a ``ValueError``:
```
line.find('bear')
line.index('bear')
line.partition('fox')
```
The ``rpartition()`` method is similar, but searches from the right of the string.
The ``split()`` method is perhaps more useful; it finds *all* instances of the split-point and returns the substrings in between.
The default is to split on any whitespace, returning a list of the individual words in a string:
```
line_list = line.split()
print(line_list)
print(line_list[1])
```
A related method is ``splitlines()``, which splits on newline characters.
Let's do this with a Haiku, popularly attributed to the 17th-century poet Matsuo Bashō:
```
haiku = """matsushima-ya
aah matsushima-ya
matsushima-ya"""
haiku.splitlines()
```
Note that if you would like to undo a ``split()``, you can use the ``join()`` method, which returns a string built from a splitpoint and an iterable:
```
'--'.join(['1', '2', '3'])
```
A common pattern is to use the special character ``"\n"`` (newline) to join together lines that have been previously split, and recover the input:
```
print("\n".join(['matsushima-ya', 'aah matsushima-ya', 'matsushima-ya']))
pi = 3.14159
str(pi)
print ("The value of pi is " + pi)
```
Pi is a float, so it must be converted to a string before it can be concatenated.
```
print( "The value of pi is " + str(pi))
```
A more flexible way to do this is to use *format strings*, which are strings with special markers (noted by curly braces) into which string-formatted values will be inserted.
Here is a basic example:
```
"The value of pi is {}".format(pi)
```
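Format strings also accept a format specifier after a colon inside the braces, for example to control the number of decimal places; the same syntax works in f-strings on Python 3.6+:
```
"The value of pi is {:.2f}".format(pi)   # 'The value of pi is 3.14'
f"The value of pi is {pi:.3f}"           # 'The value of pi is 3.142'
```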
### Easy regex manipulation!
```
import re
line = 'the quick brown fox jumped over a lazy dog'
```
With this, we can see that the ``regex.search()`` method operates a lot like ``str.index()`` or ``str.find()``:
```
line.index('fox')
regex = re.compile('fox')
match = regex.search(line)
match.start()
```
Similarly, the ``regex.sub()`` method operates much like ``str.replace()``:
```
line.replace('fox', 'BEAR')
regex.sub('BEAR', line)
```
The following is a table of the repetition markers available for use in regular expressions:
| Character | Description | Example |
|-----------|-------------|---------|
| ``?`` | Match zero or one repetitions of preceding | ``"ab?"`` matches ``"a"`` or ``"ab"`` |
| ``*`` | Match zero or more repetitions of preceding | ``"ab*"`` matches ``"a"``, ``"ab"``, ``"abb"``, ``"abbb"``... |
| ``+`` | Match one or more repetitions of preceding | ``"ab+"`` matches ``"ab"``, ``"abb"``, ``"abbb"``... but not ``"a"`` |
| ``.`` | Any character | ``.*`` matches everything |
| ``{n}`` | Match ``n`` repetitions of preceding | ``"ab{2}"`` matches ``"abb"`` |
| ``{m,n}`` | Match between ``m`` and ``n`` repetitions of preceding | ``"ab{2,3}"`` matches ``"abb"`` or ``"abbb"`` |
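To make the table concrete, here are a few quick checks using ``re.fullmatch``, which only succeeds when the pattern accounts for the entire string:
```
import re

bool(re.fullmatch(r'ab?', 'a'))      # True: '?' allows zero 'b's
bool(re.fullmatch(r'ab*', 'abbb'))   # True: '*' allows any number of 'b's
bool(re.fullmatch(r'ab+', 'a'))      # False: '+' requires at least one 'b'
bool(re.fullmatch(r'ab{2}', 'abb'))  # True: exactly two 'b's
```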
```
bool(re.search(r'ab', "Boabab"))
bool(re.search(r'.*ma.*', "Ala ma kota"))
bool(re.search(r'.*(psa|kota).*', "Ala ma kota"))
bool(re.search(r'.*(psa|kota).*', "Ala ma psa"))
bool(re.search(r'.*(psa|kota).*', "Ala ma chomika"))
zdanie = "Ala ma kota."
wzor = r'.*'  # matches any sentence
zamiennik = r"Ala ma psa."
re.sub(wzor, zamiennik, zdanie)
wzor = r'(.*)kota.'
zamiennik = r"\1 psa."
re.sub(wzor, zamiennik, zdanie)
wzor = r'(.*)ma(.*)'
zamiennik = r"\1 posiada \2"
re.sub(wzor, zamiennik, zdanie)
```
```
import re
import os
import keras.backend as K
import numpy as np
import pandas as pd
from keras import layers, models, utils
import json
def reset_everything():
import tensorflow as tf
%reset -f in out dhist
tf.reset_default_graph()
K.set_session(tf.InteractiveSession())
# Constants for our networks. We keep these deliberately small to reduce training time.
VOCAB_SIZE = 250000
EMBEDDING_SIZE = 100
MAX_DOC_LEN = 128
MIN_DOC_LEN = 12
def extract_stackexchange(filename, limit=1000000):
json_file = filename + 'limit=%s.json' % limit
rows = []
for i, line in enumerate(os.popen('7z x -so "%s" Posts.xml' % filename)):
line = str(line)
if not line.startswith(' <row'):
continue
if i % 1000 == 0:
print('\r%05d/%05d' % (i, limit), end='', flush=True)
parts = line[6:-5].split('"')
record = {}
for i in range(0, len(parts), 2):
k = parts[i].replace('=', '').strip()
v = parts[i+1].strip()
record[k] = v
rows.append(record)
if len(rows) > limit:
break
with open(json_file, 'w') as fout:
json.dump(rows, fout)
return rows
xml_7z = utils.get_file(
fname='travel.stackexchange.com.7z',
origin='https://ia800107.us.archive.org/27/items/stackexchange/travel.stackexchange.com.7z',
)
print()
rows = extract_stackexchange(xml_7z)
```
# Data Exploration
Now that we have extracted our data, let's clean it up and take a look at what we have to work with.
```
df = pd.DataFrame.from_records(rows)
df = df.set_index('Id', drop=False)
df['Title'] = df['Title'].fillna('').astype('str')
df['Tags'] = df['Tags'].fillna('').astype('str')
df['Body'] = df['Body'].fillna('').astype('str')
df['Id'] = df['Id'].astype('int')
df['PostTypeId'] = df['PostTypeId'].astype('int')
df['ViewCount'] = df['ViewCount'].astype('float')
df.head()
list(df[df['ViewCount'] > 250000]['Title'])
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words=VOCAB_SIZE)
tokenizer.fit_on_texts(df['Body'] + df['Title'])
# Compute TF/IDF Values
total_count = sum(tokenizer.word_counts.values())
idf = { k: np.log(total_count/v) for (k,v) in tokenizer.word_counts.items() }
# Download pre-trained word2vec embeddings
import gensim
glove_100d = utils.get_file(
fname='glove.6B.100d.txt',
origin='https://storage.googleapis.com/deep-learning-cookbook/glove.6B.100d.txt',
)
w2v_100d = glove_100d + '.w2v'
from gensim.scripts.glove2word2vec import glove2word2vec
glove2word2vec(glove_100d, w2v_100d)
w2v_model = gensim.models.KeyedVectors.load_word2vec_format(w2v_100d)
w2v_weights = np.zeros((VOCAB_SIZE, w2v_model.syn0.shape[1]))
idf_weights = np.zeros((VOCAB_SIZE, 1))
for k, v in tokenizer.word_index.items():
if v >= VOCAB_SIZE:
continue
if k in w2v_model:
w2v_weights[v] = w2v_model[k]
idf_weights[v] = idf[k]
del w2v_model
df['title_tokens'] = tokenizer.texts_to_sequences(df['Title'])
df['body_tokens'] = tokenizer.texts_to_sequences(df['Body'])
import random
# We can create a data generator that will randomly pair title and body tokens for questions. We'll use random text
# from other questions as a negative example when necessary.
def data_generator(batch_size, negative_samples=1):
questions = df[df['PostTypeId'] == 1]
all_q_ids = list(questions.index)
batch_x_a = []
batch_x_b = []
batch_y = []
def _add(x_a, x_b, y):
batch_x_a.append(x_a[:MAX_DOC_LEN])
batch_x_b.append(x_b[:MAX_DOC_LEN])
batch_y.append(y)
while True:
questions = questions.sample(frac=1.0)
for i, q in questions.iterrows():
_add(q['title_tokens'], q['body_tokens'], 1)
negative_q = random.sample(all_q_ids, negative_samples)
for nq_id in negative_q:
_add(q['title_tokens'], df.at[nq_id, 'body_tokens'], 0)
if len(batch_y) >= batch_size:
yield ({
'title': pad_sequences(batch_x_a, maxlen=None),
'body': pad_sequences(batch_x_b, maxlen=None),
}, np.asarray(batch_y))
batch_x_a = []
batch_x_b = []
batch_y = []
# dg = data_generator(1, 2)
# next(dg)
# next(dg)
```
# Embedding Lookups
Let's define a helper class for looking up our embedding results. We'll use it
to verify our models.
```
questions = df[df['PostTypeId'] == 1]['Title'].reset_index(drop=True)
question_tokens = pad_sequences(tokenizer.texts_to_sequences(questions))
class EmbeddingWrapper(object):
def __init__(self, model):
self._r = questions
self._i = {i:s for (i, s) in enumerate(questions)}
self._w = model.predict({'title': question_tokens}, verbose=1, batch_size=1024)
self._model = model
self._norm = np.sqrt(np.sum(self._w * self._w + 1e-5, axis=1))
def nearest(self, sentence, n=10):
x = tokenizer.texts_to_sequences([sentence])
if len(x[0]) < MIN_DOC_LEN:
            x[0] += [0] * (MIN_DOC_LEN - len(x[0]))
e = self._model.predict(np.asarray(x))[0]
norm_e = np.sqrt(np.dot(e, e))
dist = np.dot(self._w, e) / (norm_e * self._norm)
top_idx = np.argsort(dist)[-n:]
return pd.DataFrame.from_records([
{'question': self._r[i], 'dist': float(dist[i])}
for i in top_idx
])
# Our first model will just sum up the embeddings of each token.
# The similarity between documents will be the dot product of the final embedding.
import tensorflow as tf
def sum_model(embedding_size, vocab_size, embedding_weights=None, idf_weights=None):
title = layers.Input(shape=(None,), dtype='int32', name='title')
body = layers.Input(shape=(None,), dtype='int32', name='body')
def make_embedding(name):
if embedding_weights is not None:
embedding = layers.Embedding(mask_zero=True, input_dim=vocab_size, output_dim=w2v_weights.shape[1],
weights=[w2v_weights], trainable=False,
name='%s/embedding' % name)
else:
embedding = layers.Embedding(mask_zero=True, input_dim=vocab_size, output_dim=embedding_size,
name='%s/embedding' % name)
if idf_weights is not None:
idf = layers.Embedding(mask_zero=True, input_dim=vocab_size, output_dim=1,
weights=[idf_weights], trainable=False,
name='%s/idf' % name)
else:
idf = layers.Embedding(mask_zero=True, input_dim=vocab_size, output_dim=1,
name='%s/idf' % name)
return embedding, idf
embedding_a, idf_a = make_embedding('a')
embedding_b, idf_b = embedding_a, idf_a
# embedding_b, idf_b = make_embedding('b')
mask = layers.Masking(mask_value=0)
def _combine_and_sum(args):
[embedding, idf] = args
return K.sum(embedding * K.abs(idf), axis=1)
sum_layer = layers.Lambda(_combine_and_sum, name='combine_and_sum')
sum_a = sum_layer([mask(embedding_a(title)), idf_a(title)])
sum_b = sum_layer([mask(embedding_b(body)), idf_b(body)])
sim = layers.dot([sum_a, sum_b], axes=1, normalize=True)
sim_model = models.Model(
inputs=[title, body],
outputs=[sim],
)
sim_model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['accuracy'])
sim_model.summary()
embedding_model = models.Model(
inputs=[title],
outputs=[sum_a]
)
return sim_model, embedding_model
# Try using our model with pretrained weights from word2vec
sum_model_precomputed, sum_embedding_precomputed = sum_model(
embedding_size=EMBEDDING_SIZE, vocab_size=VOCAB_SIZE,
embedding_weights=w2v_weights, idf_weights=idf_weights
)
x, y = next(data_generator(batch_size=4096))
sum_model_precomputed.evaluate(x, y)
SAMPLE_QUESTIONS = [
'Roundtrip ticket versus one way',
'Shinkansen from Kyoto to Hiroshima',
'Bus tour of Germany',
]
def evaluate_sample(lookup):
pd.set_option('display.max_colwidth', 100)
results = []
for q in SAMPLE_QUESTIONS:
print(q)
q_res = lookup.nearest(q, n=4)
q_res['result'] = q_res['question']
q_res['question'] = q
results.append(q_res)
return pd.concat(results)
lookup = EmbeddingWrapper(model=sum_embedding_precomputed)
evaluate_sample(lookup)
```
# Training our own network
The results are okay but not great... instead of using the word2vec embeddings, what happens if we train our network end-to-end?
```
sum_model_trained, sum_embedding_trained = sum_model(
embedding_size=EMBEDDING_SIZE, vocab_size=VOCAB_SIZE,
embedding_weights=None,
idf_weights=None
)
sum_model_trained.fit_generator(
data_generator(batch_size=128),
epochs=10,
steps_per_epoch=1000
)
lookup = EmbeddingWrapper(model=sum_embedding_trained)
evaluate_sample(lookup)
```
## CNN Model
Using a sum-of-embeddings model works well. What happens if we try to make a simple CNN model?
```
def cnn_model(embedding_size, vocab_size):
title = layers.Input(shape=(None,), dtype='int32', name='title')
body = layers.Input(shape=(None,), dtype='int32', name='body')
embedding = layers.Embedding(
mask_zero=False,
input_dim=vocab_size,
output_dim=embedding_size,
)
def _combine_sum(v):
return K.sum(v, axis=1)
cnn_1 = layers.Convolution1D(256, 3)
cnn_2 = layers.Convolution1D(256, 3)
cnn_3 = layers.Convolution1D(256, 3)
global_pool = layers.GlobalMaxPooling1D()
local_pool = layers.MaxPooling1D(strides=2, pool_size=3)
def forward(input):
embed = embedding(input)
return global_pool(
cnn_2(local_pool(cnn_1(embed))))
sum_a = forward(title)
sum_b = forward(body)
sim = layers.dot([sum_a, sum_b], axes=1, normalize=False)
sim_model = models.Model(
inputs=[title, body],
outputs=[sim],
)
sim_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
embedding_model = models.Model(
inputs=[title],
outputs=[sum_a]
)
return sim_model, embedding_model
cnn, cnn_embedding = cnn_model(embedding_size=25, vocab_size=VOCAB_SIZE)
cnn.summary()
cnn.fit_generator(
data_generator(batch_size=128),
epochs=10,
steps_per_epoch=1000,
)
lookup = EmbeddingWrapper(model=cnn_embedding)
evaluate_sample(lookup)
```
## LSTM Model
We can also make an LSTM model. Warning, this will be very slow to train and evaluate unless you have a relatively fast GPU to run it on!
```
def lstm_model(embedding_size, vocab_size):
title = layers.Input(shape=(None,), dtype='int32', name='title')
body = layers.Input(shape=(None,), dtype='int32', name='body')
embedding = layers.Embedding(
mask_zero=True,
input_dim=vocab_size,
output_dim=embedding_size,
# weights=[w2v_weights],
# trainable=False
)
lstm_1 = layers.LSTM(units=512, return_sequences=True)
lstm_2 = layers.LSTM(units=512, return_sequences=False)
sum_a = lstm_2(lstm_1(embedding(title)))
sum_b = lstm_2(lstm_1(embedding(body)))
sim = layers.dot([sum_a, sum_b], axes=1, normalize=True)
# sim = layers.Activation(activation='sigmoid')(sim)
sim_model = models.Model(
inputs=[title, body],
outputs=[sim],
)
sim_model.compile(loss='binary_crossentropy', optimizer='rmsprop')
embedding_model = models.Model(
inputs=[title],
outputs=[sum_a]
)
return sim_model, embedding_model
lstm, lstm_embedding = lstm_model(embedding_size=EMBEDDING_SIZE, vocab_size=VOCAB_SIZE)
lstm.summary()
lstm.fit_generator(
data_generator(batch_size=128),
epochs=10,
steps_per_epoch=100,
)
lookup = EmbeddingWrapper(model=lstm_embedding)
evaluate_sample(lookup)
```
```
import os, sys
# os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
import numpy as np
import imageio
import json
import random
import time
import pprint
import matplotlib.pyplot as plt
import run_nerf
from load_llff import load_llff_data
from load_deepvoxels import load_dv_data
from load_blender import load_blender_data
basedir = './logs'
expname = 'fern_example'
config = os.path.join(basedir, expname, 'config.txt')
print('Args:')
print(open(config, 'r').read())
parser = run_nerf.config_parser()
args = parser.parse_args('--config {} --ft_path {}'.format(config, os.path.join(basedir, expname, 'model_200000.npy')))
print('loaded args')
images, poses, bds, render_poses, i_test = load_llff_data(args.datadir, args.factor,
recenter=True, bd_factor=.75,
spherify=args.spherify)
H, W, focal = poses[0,:3,-1].astype(np.float32)
H = int(H)
W = int(W)
hwf = [H, W, focal]
images = images.astype(np.float32)
poses = poses.astype(np.float32)
if args.no_ndc:
near = tf.reduce_min(bds) * .9
far = tf.reduce_max(bds) * 1.
else:
near = 0.
far = 1.
# Create nerf model
_, render_kwargs_test, start, grad_vars, models = run_nerf.create_nerf(args)
bds_dict = {
'near' : tf.cast(near, tf.float32),
'far' : tf.cast(far, tf.float32),
}
render_kwargs_test.update(bds_dict)
print('Render kwargs:')
pprint.pprint(render_kwargs_test)
down = 4
render_kwargs_fast = {k : render_kwargs_test[k] for k in render_kwargs_test}
render_kwargs_fast['N_importance'] = 0
c2w = np.eye(4)[:3,:4].astype(np.float32) # identity pose matrix
test = run_nerf.render(H//down, W//down, focal/down, c2w=c2w, **render_kwargs_fast)
img = np.clip(test[0],0,1)
plt.imshow(img)
plt.show()
down = 8 # trade off resolution+aliasing for render speed to make this video faster
frames = []
for i, c2w in enumerate(render_poses):
if i%8==0: print(i)
test = run_nerf.render(H//down, W//down, focal/down, c2w=c2w[:3,:4], **render_kwargs_fast)
frames.append((255*np.clip(test[0],0,1)).astype(np.uint8))
print('done, saving')
f = 'logs/fern_example/video.mp4'
imageio.mimwrite(f, frames, fps=30, quality=8)
from IPython.display import Video
Video(f, height=320)
%matplotlib inline
from ipywidgets import interactive, widgets
import matplotlib.pyplot as plt
import numpy as np
def f(x, y, z):
c2w = tf.convert_to_tensor([
[1,0,0,x],
[0,1,0,y],
[0,0,1,z],
[0,0,0,1],
], dtype=tf.float32)
test = run_nerf.render(H//down, W//down, focal/down, c2w=c2w, **render_kwargs_fast)
img = np.clip(test[0],0,1)
plt.figure(2, figsize=(20,6))
plt.imshow(img)
plt.show()
sldr = lambda : widgets.FloatSlider(
value=0.,
min=-1.,
max=1.,
step=.01,
)
names = ['x', 'y', 'z']
interactive_plot = interactive(f, **{n : sldr() for n in names})
interactive_plot
!conda install -c conda-forge ipywidgets
```
```
from platform import python_version
import tensorflow as tf
print(tf.test.is_gpu_available())
print(python_version())
import os
import numpy as np
from os import listdir
from PIL import Image
import time
import tensorflow as tf
from tensorflow.keras import layers,models,optimizers
from keras import backend as K
import matplotlib.pyplot as plt
path1="datasets/ofg_family/"
path2="datasets/TSKinFace_Data/TSKinFace_cropped/"
randomiser = np.random.RandomState(123)
img_size = 64
mean = 0.0009
std_dev = 0.009
lr = 0.0005
b1 = 0.875
b2 = 0.975
sd_random_normal_init = 0.02
EPOCHS = 10
batch = 10
def generate_image_1(family_dir):
dic={}
sub=[a for a in listdir(path1+"/"+family_dir)]
for ele in sub:
if ele == '.DS_Store':
continue;
mypath = path1+"/"+family_dir+"/"+ele+"/"
onlyfiles = [mypath+f for f in listdir(mypath)]
addr = randomiser.choice(onlyfiles)
original_img = np.array(Image.open(addr).resize((64,64),Image.ANTIALIAS))
if ele[0].lower()=='f':
dic['father'] = original_img
elif ele[0].lower()=='m':
dic['mother'] = original_img
elif ele.lower()=='child_male':
dic['child'] = original_img
dic['gender']=np.zeros((original_img.shape))
elif ele.lower()=='child_female':
dic['child'] = original_img
dic['gender'] = np.ones((original_img.shape))
return [dic['father'],dic['mother'],dic['gender'],dic['child']]
def generate_image_2(family_dir, family_number, gender):
dic={}
sub = ["F" , "M", gender]
family_pth = path2+"/"+family_dir+"/" + family_dir + "-" + str(family_number) + "-"
for ele in sub:
addr = family_pth+ele+".jpg"
original_img = np.array(Image.open(addr).resize((64,64),Image.ANTIALIAS))
if ele =='F':
dic['father'] = original_img
elif ele == 'M':
dic['mother'] = original_img
elif ele == 'S':
dic['child'] = original_img
dic['gender']=np.zeros((original_img.shape))
elif ele == 'D':
dic['child'] = original_img
dic['gender'] = np.ones((original_img.shape))
return [dic['father'],dic['mother'],dic['gender'],dic['child']]
def generate_batch(families_batch):
np_images=[]
for family in families_batch:
if(len(family) == 3):
res = generate_image_2(family[0], family[1], family[2])
elif(len(family) == 1):
res = generate_image_1(family[0])
if( res != None):
np_images.append(res)
return np_images
for r, d, f in os.walk(path1):
all_families = d
break
all_families = [[family] for family in all_families]
for i in range(285):
all_families.append(['FMS', i+1, 'S'])
for i in range(274):
all_families.append(['FMD', i+1, 'D'])
for i in range(228):
all_families.append(['FMSD', i+1, 'D'])
all_families.append(['FMSD', i+1, 'S'])
randomiser.shuffle(all_families)
train_families = all_families[:-100]
test_families = all_families[-100:]
OUTPUT_CHANNELS = 3
def gen_downsample_parent(filters, size, apply_batchnorm=True, apply_dropout=False):
initializer = tf.random_normal_initializer(mean, std_dev)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2D(filters, size, strides=2, padding='same',
kernel_initializer=initializer,
use_bias=False))
if apply_batchnorm:
result.add(tf.keras.layers.BatchNormalization())
result.add(tf.keras.layers.ELU())
if apply_dropout:
result.add(tf.keras.layers.Dropout(rate = 0.5))
return result
```
```
def gen_upsample(filters, size,apply_batchnorm = False):
initializer = tf.random_normal_initializer(mean, std_dev)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False))
if apply_batchnorm:
result.add(tf.keras.layers.BatchNormalization())
result.add(tf.keras.layers.ELU())
return result
def EncoderNN():
down_stack_parent = [
gen_downsample_parent(32,4,apply_batchnorm=True, apply_dropout=False),
gen_downsample_parent(64,4,apply_batchnorm=True, apply_dropout=False)
]
# down_stack_noise =[
# # z = 4x4x64
# gen_downsample_noise(64,4,apply_batchnorm=True), #8x8x64
# gen_downsample_noise(32,4,apply_batchnorm=True) #16x16x32
# ]
final_conv =[
gen_upsample(32,4 ,apply_batchnorm = True)
]
initializer = tf.random_normal_initializer(mean, sd_random_normal_init)
last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4,
strides=2,
padding='same',
kernel_initializer=initializer,
activation='tanh')
concat = tf.keras.layers.Concatenate()
father = tf.keras.layers.Input(shape=(img_size,img_size,3))
mother = tf.keras.layers.Input(shape=(img_size,img_size,3))
x1 = father
for down in down_stack_parent:
x1 = down(x1)
# print(x1.shape)
x2 = mother
for down in down_stack_parent:
x2 = down(x2)
# print(x2.shape)
final = concat([x1,x2])
# print(final.shape)
final = final_conv[0](final)
final = last(final)
# print(final.shape)
return tf.keras.Model(inputs=[father, mother], outputs=final)
encoder_optimizer = tf.keras.optimizers.Adam(learning_rate = lr, beta_1=b1)
def tensor_to_array(tensor1):
return tensor1.numpy()
def train_encoder(father_batch, mother_batch, target_batch, b_size):
with tf.GradientTape() as enc_tape:
gen_outputs = encoder([father_batch, mother_batch], training=True)
diff = tf.abs(target_batch - gen_outputs)
flatten_diff = tf.reshape(diff, (b_size, img_size*img_size*3))
encoder_loss_batch = tf.reduce_mean(flatten_diff, axis=1)
encoder_loss = tf.reduce_mean(encoder_loss_batch)
print("ENCODER_LOSS: ",tensor_to_array(encoder_loss))
#calculate gradients
encoder_gradients = enc_tape.gradient(encoder_loss,encoder.trainable_variables)
#apply gradients on optimizer
encoder_optimizer.apply_gradients(zip(encoder_gradients,encoder.trainable_variables))
def fit_encoder(train_ds, epochs, test_ds, batch):
losses=np.array([])
for epoch in range(epochs):
print("______________________________EPOCH %d_______________________________"%(epoch+1))
start = time.time()
for i in range(len(train_ds)//batch):
batch_data = np.asarray(generate_batch(train_ds[i*batch:(i+1)*batch]))
batch_data = batch_data / 255 * 2 -1
print("Generated batch", batch_data.shape)
X_Father_train = tf.convert_to_tensor(batch_data[:,0],dtype =tf.float32)
X_Mother_train = tf.convert_to_tensor(batch_data[:,1],dtype =tf.float32)
Y_train = tf.convert_to_tensor(batch_data[:,3],dtype =tf.float32)
train_encoder(X_Father_train, X_Mother_train, Y_train,batch)
print("Trained for batch %d/%d"%(i+1,(len(train_ds)//batch)))
print("______________________________TRAINING COMPLETED_______________________________")
train_dataset = all_families[:-100]
test_dataset = all_families[-100:]
encoder = EncoderNN()
with tf.device('/gpu:0'):
fit_encoder(train_dataset, EPOCHS, test_dataset,batch)
f_no = 1106
family_data = generate_batch([all_families[f_no]])
inp = [family_data[0][0],family_data[0][1]]
inp = tf.cast(inp, tf.float32)
father_inp = inp[0][tf.newaxis,...]
mother_inp = inp[1][tf.newaxis,...]
with tf.device('/cpu:0'):
gen_output = encoder([father_inp, mother_inp], training=True)
temp = gen_output.numpy()
plt.imshow(np.squeeze(temp))
# print(temp)
print(np.amin(temp))
print(np.amax(temp))
target = family_data[0][3]
plt.imshow(target)
```
---
```
def disc_downsample_parent_target(filters, size, apply_batchnorm=True):
initializer = tf.random_normal_initializer(mean, std_dev)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2D(filters, size, strides=2, padding='same',
kernel_initializer=initializer,
use_bias=False))
if apply_batchnorm:
result.add(tf.keras.layers.BatchNormalization())
result.add(tf.keras.layers.LeakyReLU(alpha = 0.2))
return result
def disc_loss(filters, size,apply_batchnorm = False):
initializer = tf.random_normal_initializer(mean, std_dev)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2D(filters, size, strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False))
if apply_batchnorm:
result.add(tf.keras.layers.BatchNormalization())
result.add(tf.keras.layers.LeakyReLU(alpha = 0.2))
return result
def Discriminator():
father = tf.keras.layers.Input(shape=(img_size,img_size,3))
mother = tf.keras.layers.Input(shape=(img_size,img_size,3))
target = tf.keras.layers.Input(shape=(img_size,img_size,3))
down_stack_parent_target = [
disc_downsample_parent_target(32,4,apply_batchnorm=False), #32x32x32
disc_downsample_parent_target(64,4,apply_batchnorm=True) #16x16x64
]
down_stack_combined =[
disc_loss(192,4,apply_batchnorm=True),
disc_loss(256,4,apply_batchnorm=False)
]
initializer = tf.random_normal_initializer(mean, sd_random_normal_init)
last = tf.keras.layers.Conv2D(1, 4, strides=1,padding='same',
kernel_initializer=initializer) # linear layer
concat = tf.keras.layers.Concatenate()
x1 = father
for down in down_stack_parent_target:
x1 = down(x1)
x2 = mother
for down in down_stack_parent_target:
x2 = down(x2)
x3 = target
for down in down_stack_parent_target:
x3 = down(x3)
combined = concat([x1,x2,x3])
# combined is Batchx16x16x192
x4 = combined
for down in down_stack_combined:
x4 = down(x4)
# print(x4.shape)
output = last(x4) #4X4
print(output.shape)
return tf.keras.Model(inputs=[father,mother,target], outputs=output)
discriminator = Discriminator()
# family_data = generate_image(all_families[126])
# p1 = tf.cast(family_data[0], tf.float32)
# p2 = tf.cast(family_data[1], tf.float32)
# c = tf.cast(family_data[2], tf.float32)
# discriminator = Discriminator()
# with tf.device('/cpu:0'):
# disc_out = discriminator(inputs = [p1,p2,c], training=True)
LAMBDA = 1
def tensor_to_array(tensor1):
return tensor1.numpy()
def discriminator_loss(disc_real_output, disc_generated_output,b_size):
real_loss_diff = tf.abs(tf.ones_like(disc_real_output) - disc_real_output)
real_flatten_diff = tf.reshape(real_loss_diff, (b_size, 4*4*1))
real_loss_batch = tf.reduce_mean(real_flatten_diff, axis=1)
real_loss = tf.reduce_mean(real_loss_batch)
gen_loss_diff = tf.abs(tf.zeros_like(disc_generated_output) - disc_generated_output)
gen_flatten_diff = tf.reshape(gen_loss_diff, (b_size, 4*4*1))
gen_loss_batch = tf.reduce_mean(gen_flatten_diff, axis=1)
gen_loss = tf.reduce_mean(gen_loss_batch)
total_disc_loss = real_loss + gen_loss
return total_disc_loss
def generator_loss(disc_generated_output, gen_output, target,b_size):
gen_loss_diff = tf.abs(tf.ones_like(disc_generated_output) - disc_generated_output)
gen_flatten_diff = tf.reshape(gen_loss_diff, (b_size, 4*4*1))
gen_loss_batch = tf.reduce_mean(gen_flatten_diff, axis=1)
gen_loss = tf.reduce_mean(gen_loss_batch)
l1_loss_diff = tf.abs(target - gen_output)
l1_flatten_diff = tf.reshape(l1_loss_diff, (b_size, img_size*img_size*3))
l1_loss_batch = tf.reduce_mean(l1_flatten_diff, axis=1)
l1_loss = tf.reduce_mean(l1_loss_batch)
total_gen_loss = gen_loss + LAMBDA * l1_loss
# print("Reconstruction loss: {}, GAN loss: {}".format(l1_loss, gen_loss))
return total_gen_loss
generator_optimizer = tf.keras.optimizers.Adam(lr, beta_1=b1 ,beta_2 = b2)
discriminator_optimizer = tf.keras.optimizers.Adam(lr, beta_1=b1, beta_2 = b2)
def train_step(father_batch, mother_batch, target_batch,b_size):
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
gen_outputs = encoder([father_batch, mother_batch], training=True)
# print("Generated outputs",gen_outputs.shape)
disc_real_output = discriminator([father_batch, mother_batch, target_batch], training=True)
# print("disc_real_output ", disc_real_output.shape)
disc_generated_output = discriminator([father_batch, mother_batch, gen_outputs], training=True)
# print("disc_generated_output ", disc_generated_output.shape)
gen_loss = generator_loss(disc_generated_output, gen_outputs, target_batch,b_size)
disc_loss = discriminator_loss(disc_real_output, disc_generated_output,b_size)
print("GEN_LOSS",tensor_to_array(gen_loss))
print("DISC_LOSS",tensor_to_array(disc_loss))
generator_gradients = gen_tape.gradient(gen_loss,encoder.trainable_variables)
discriminator_gradients = disc_tape.gradient(disc_loss,discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(generator_gradients,encoder.trainable_variables))
discriminator_optimizer.apply_gradients(zip(discriminator_gradients,discriminator.trainable_variables))
def fit(train_ds, epochs, test_ds,batch):
for epoch in range(epochs):
print("______________________________EPOCH %d_______________________________"%(epoch))
start = time.time()
for i in range(len(train_ds)//batch):
batch_data = np.asarray(generate_batch(train_ds[i*batch:(i+1)*batch]))
batch_data = batch_data / 255 * 2 -1
print("Generated batch", batch_data.shape)
X_father_train = tf.convert_to_tensor(batch_data[:,0],dtype =tf.float32)
X_mother_train = tf.convert_to_tensor(batch_data[:,1],dtype =tf.float32)
# print("Xtrain",X_train.shape)
# print("Batch converted to tensor")
Y_train = tf.convert_to_tensor(batch_data[:,3],dtype =tf.float32)
train_step(X_father_train, X_mother_train, Y_train, batch)
print("Trained for batch %d/%d"%(i+1,(len(train_ds)//batch)))
# family_no = 400
# family_data = generate_image(all_families[family_no][0], all_families[family_no][1], all_families[family_no][2])
# inp = [family_data[0],family_data[1]]
# inp = tf.cast(inp, tf.float32)
# father_inp = inp[0][tf.newaxis,...]
# mother_inp = inp[1][tf.newaxis,...]
# gen_output = encoder([father_inp, mother_inp], training=True)
# print(tf.reduce_min(gen_output))
# print(tf.reduce_max(gen_output))
# plt.figure()
# plt.imshow(gen_output[0,...])
# plt.show()
print("______________________________TRAINING COMPLETED_______________________________")
checkpoint.save(file_prefix = checkpoint_prefix)
concat = tf.keras.layers.Concatenate()
train_dataset = all_families[:-10]
test_dataset = all_families[-10:]
encoder = EncoderNN()
discriminator = Discriminator()
img_size = 64
mean = 0.
std_dev = 0.02
lr = 0.0005
b1 = 0.9
b2 = 0.999
sd_random_normal_init = 0.02
EPOCHS = 5
batch = 25
checkpoint_dir = './checkpoint'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=encoder,
discriminator=discriminator)
with tf.device('/gpu:0'):
fit(train_dataset, EPOCHS, test_dataset,batch)
family_no = 1011
# generate_image is not defined; generate_batch handles both dataset formats
family_data = generate_batch([all_families[family_no]])
inp = [family_data[0][0], family_data[0][1]]
inp = tf.cast(inp, tf.float32)
father_inp = inp[0][tf.newaxis, ...]
mother_inp = inp[1][tf.newaxis, ...]
with tf.device('/gpu:0'):
    gen_output = encoder([father_inp, mother_inp], training=True)
temp = gen_output.numpy()
plt.imshow(np.squeeze(temp))
print(np.amin(temp))
print(np.amax(temp))
```
# AWS Marketplace Product Usage Demonstration - Algorithms
## Using Algorithm ARN with Amazon SageMaker APIs
This sample notebook demonstrates two new functionalities added to Amazon SageMaker:
1. Using an Algorithm ARN to run training jobs and use that result for inference
2. Using an AWS Marketplace product ARN - we will use [Scikit Decision Trees](https://aws.amazon.com/marketplace/pp/prodview-ha4f3kqugba3u?qid=1543169069960&sr=0-1&ref_=srh_res_product_title)
## Overall flow diagram
<img src="images/AlgorithmE2EFlow.jpg">
## Compatibility
This notebook is compatible only with [Scikit Decision Trees](https://aws.amazon.com/marketplace/pp/prodview-ha4f3kqugba3u?qid=1543169069960&sr=0-1&ref_=srh_res_product_title) sample algorithm published to AWS Marketplace.
***Pre-Requisite:*** Please subscribe to this free product before proceeding with this notebook
## Set up the environment
```
import sagemaker as sage
from sagemaker import get_execution_role
role = get_execution_role()
# S3 prefixes
common_prefix = "DEMO-scikit-byo-iris"
training_input_prefix = common_prefix + "/training-input-data"
batch_inference_input_prefix = common_prefix + "/batch-inference-input-data"
```
### Create the session
The session remembers our connection parameters to Amazon SageMaker. We'll use it to perform all of our Amazon SageMaker operations.
```
sagemaker_session = sage.Session()
```
## Upload the data for training
When training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. For the purposes of this example, we're using the classic [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), which we have included.
We can use the tools provided by the Amazon SageMaker Python SDK to upload the data to a default bucket.
```
TRAINING_WORKDIR = "data/training"
training_input = sagemaker_session.upload_data(TRAINING_WORKDIR, key_prefix=training_input_prefix)
print("Training Data Location " + training_input)
```
## Creating Training Job using Algorithm ARN
Please put in the algorithm arn you want to use below. This can either be an AWS Marketplace algorithm you subscribed to (or) one of the algorithms you created in your own account.
The algorithm arn listed below belongs to the [Scikit Decision Trees](https://aws.amazon.com/marketplace/pp/prodview-ha4f3kqugba3u?qid=1543169069960&sr=0-1&ref_=srh_res_product_title) product.
```
from src.scikit_product_arns import ScikitArnProvider
algorithm_arn = ScikitArnProvider.get_algorithm_arn(sagemaker_session.boto_region_name)
import json
import time
from sagemaker.algorithm import AlgorithmEstimator
algo = AlgorithmEstimator(
algorithm_arn=algorithm_arn,
role=role,
train_instance_count=1,
train_instance_type="ml.c4.xlarge",
base_job_name="scikit-from-aws-marketplace",
)
```
## Run Training Job
```
print(
"Now run the training job using algorithm arn %s in region %s"
% (algorithm_arn, sagemaker_session.boto_region_name)
)
algo.fit({"training": training_input})
```
## Automated Model Tuning (optional)
Since this algorithm supports tunable hyperparameters with a tuning objective metric, we can run a Hyperparameter Tuning Job to obtain the best training job hyperparameters and its corresponding model artifacts.
<img src="images/HPOFlow.jpg">
```
from sagemaker.tuner import HyperparameterTuner, IntegerParameter
## This demo algorithm supports max_leaf_nodes as the only tunable hyperparameter.
hyperparameter_ranges = {"max_leaf_nodes": IntegerParameter(1, 100000)}
tuner = HyperparameterTuner(
estimator=algo,
base_tuning_job_name="some-name",
objective_metric_name="validation:accuracy",
hyperparameter_ranges=hyperparameter_ranges,
max_jobs=2,
max_parallel_jobs=2,
)
tuner.fit({"training": training_input}, include_cls_metadata=False)
tuner.wait()
```
## Batch Transform Job
Now let's use the model built to run a batch inference job and verify it works.
### Batch Transform Input Preparation
The snippet below is removing the "label" column (column indexed at 0) and retaining the rest to be batch transform's input.
***NOTE:*** This is the same training data, which is a no-no from a ML science perspective. But the aim of this notebook is to demonstrate how things work end-to-end.
```
import pandas as pd
## Remove first column that contains the label
shape = pd.read_csv(TRAINING_WORKDIR + "/iris.csv", header=None).drop([0], axis=1)
TRANSFORM_WORKDIR = "data/transform"
shape.to_csv(TRANSFORM_WORKDIR + "/batchtransform_test.csv", index=False, header=False)
transform_input = (
sagemaker_session.upload_data(TRANSFORM_WORKDIR, key_prefix=batch_inference_input_prefix)
+ "/batchtransform_test.csv"
)
print("Transform input uploaded to " + transform_input)
transformer = algo.transformer(1, "ml.m4.xlarge")
transformer.transform(transform_input, content_type="text/csv")
transformer.wait()
print("Batch Transform output saved to " + transformer.output_path)
```
#### Inspect the Batch Transform Output in S3
```
from urllib.parse import urlparse
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
file_key = "{}/{}.out".format(parsed_url.path[1:], "batchtransform_test.csv")
s3_client = sagemaker_session.boto_session.client("s3")
response = s3_client.get_object(Bucket=sagemaker_session.default_bucket(), Key=file_key)
response_bytes = response["Body"].read().decode("utf-8")
print(response_bytes)
```
## Live Inference Endpoint
Finally, we demonstrate the creation of an endpoint for live inference using this AWS Marketplace algorithm generated model
```
from sagemaker.predictor import csv_serializer
predictor = algo.deploy(1, "ml.m4.xlarge", serializer=csv_serializer)
```
### Choose some data and use it for a prediction
In order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works.
```
shape = pd.read_csv(TRAINING_WORKDIR + "/iris.csv", header=None)
import itertools
a = [50 * i for i in range(3)]
b = [40 + i for i in range(10)]
indices = [i + j for i, j in itertools.product(a, b)]
test_data = shape.iloc[indices[:-1]]
test_X = test_data.iloc[:, 1:]
test_y = test_data.iloc[:, 0]
```
Prediction is as easy as calling predict with the predictor we got back from deploy and the data we want to do predictions with. The serializers take care of doing the data conversions for us.
```
print(predictor.predict(test_X.values).decode("utf-8"))
```
### Cleanup the endpoint
```
algo.delete_endpoint()
```
# Detrending, Stylized Facts and the Business Cycle
In an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as "structural time series models") to derive stylized facts of the business cycle.
Their paper begins:
"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step
in macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic
properties of the data and (2) present meaningful information."
In particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.
Statsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.
```
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from IPython.display import display, Latex
```
## Unobserved Components
The unobserved components model available in Statsmodels can be written as:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{\gamma_{t}}_{\text{seasonal}} + \underbrace{c_{t}}_{\text{cycle}} + \sum_{j=1}^k \underbrace{\beta_j x_{jt}}_{\text{explanatory}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
see Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. The specific models considered in the paper and below are specializations of this general equation.
### Trend
The trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.
$$
\begin{align}
\underbrace{\mu_{t+1}}_{\text{level}} & = \mu_t + \nu_t + \eta_{t+1} \qquad & \eta_{t+1} \sim N(0, \sigma_\eta^2) \\\\
\underbrace{\nu_{t+1}}_{\text{trend}} & = \nu_t + \zeta_{t+1} & \zeta_{t+1} \sim N(0, \sigma_\zeta^2) \\
\end{align}
$$
where the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.
For both elements (level and trend), we can consider models in which:
- The element is included vs excluded (if the trend is included, there must also be a level included).
- The element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)
The only additional parameters to be estimated via MLE are the variances of any included stochastic components.
This leads to the following specifications:
| | Level | Trend | Stochastic Level | Stochastic Trend |
|----------------------------------------------------------------------|-------|-------|------------------|------------------|
| Constant | ✓ | | | |
| Local Level <br /> (random walk) | ✓ | | ✓ | |
| Deterministic trend | ✓ | ✓ | | |
| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |
| Local linear trend | ✓ | ✓ | ✓ | ✓ |
| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |
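As a preview of how these specifications are used below, the rows of this table correspond to string names accepted by the `level` argument of `sm.tsa.UnobservedComponents`; a minimal sketch on a placeholder series, just to show the calls:
```
import numpy as np
import statsmodels.api as sm

y = np.random.randn(200).cumsum()  # placeholder series, not the FRED data used below

# local level (random walk): stochastic level, no trend
sm.tsa.UnobservedComponents(y, level='local level')

# local linear trend: stochastic level and stochastic trend
sm.tsa.UnobservedComponents(y, level='local linear trend')

# smooth trend (integrated random walk): deterministic level, stochastic trend
sm.tsa.UnobservedComponents(y, level='smooth trend')
```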
### Seasonal
The seasonal component is written as:
<span>$$
\gamma_t = - \sum_{j=1}^{s-1} \gamma_{t+1-j} + \omega_t \qquad \omega_t \sim N(0, \sigma_\omega^2)
$$</span>
The periodicity (number of seasons) is `s`, and the defining characteristic is that, without the error term, the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.
The variants of this model are:
- The periodicity `s`
- Whether or not to make the seasonal effects stochastic.
If the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).
### Cycle
The cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between "1.5 and 12 years" (see Durbin and Koopman).
The cycle is written as:
<span>$$
\begin{align}
c_{t+1} & = c_t \cos \lambda_c + c_t^* \sin \lambda_c + \tilde \omega_t \qquad & \tilde \omega_t \sim N(0, \sigma_{\tilde \omega}^2) \\\\
c_{t+1}^* & = -c_t \sin \lambda_c + c_t^* \cos \lambda_c + \tilde \omega_t^* & \tilde \omega_t^* \sim N(0, \sigma_{\tilde \omega}^2)
\end{align}
$$</span>
The parameter $\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the cyclical effect is stochastic, then there is one additional parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).
### Irregular
The irregular component is assumed to be a white noise error term. Its variance is a parameter to be estimated by MLE; i.e.
$$
\varepsilon_t \sim N(0, \sigma_\varepsilon^2)
$$
In some cases, we may want to generalize the irregular component to allow for autoregressive effects:
$$
\varepsilon_t = \rho(L) \varepsilon_{t-1} + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma_\epsilon^2)
$$
In this case, the autoregressive parameters would also be estimated via MLE.
### Regression effects
We may want to allow for explanatory variables by including additional terms
<span>$$
\sum_{j=1}^k \beta_j x_{jt}
$$</span>
or for intervention effects by including
<span>$$
\begin{align}
\delta w_t \qquad \text{where} \qquad w_t & = 0, \qquad t < \tau, \\\\
& = 1, \qquad t \ge \tau
\end{align}
$$</span>
These additional parameters could be estimated via MLE or by including them as components of the state space formulation.
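For reference, the seasonal, autoregressive and regression terms described above also map onto arguments of `sm.tsa.UnobservedComponents`. The sketch below uses placeholder data (`y`, `x`) rather than the series analyzed in this notebook, and simply shows how the pieces combine in one call:
```
import numpy as np
import statsmodels.api as sm

np.random.seed(0)
y = np.random.randn(200).cumsum()   # placeholder series
x = np.random.randn(200, 1)         # placeholder explanatory variable

mod = sm.tsa.UnobservedComponents(
    y,
    level='local linear trend',   # trend component, as in the table above
    seasonal=4,                   # seasonal component with period s=4 (stochastic by default)
    autoregressive=1,             # AR(1) component playing the role of the autoregressive irregular
    exog=x,                       # regression effects beta_j x_jt
)
res = mod.fit(disp=False)
```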
## Data
Following Harvey and Jaeger, we will consider the following time series:
- US real GNP, "output", ([GNPC96](https://research.stlouisfed.org/fred2/series/GNPC96))
- US GNP implicit price deflator, "prices", ([GNPDEF](https://research.stlouisfed.org/fred2/series/GNPDEF))
- US monetary base, "money", ([AMBSL](https://research.stlouisfed.org/fred2/series/AMBSL))
The time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.
All data series considered here are taken from [Federal Reserve Economic Data (FRED)](https://research.stlouisfed.org/fred2/). Conveniently, the Python library [Pandas](http://pandas.pydata.org/) has the ability to download data from FRED directly.
```
# Datasets
from pandas.io.data import DataReader
# Get the raw data
start = '1948-01'
end = '2008-01'
us_gnp = DataReader('GNPC96', 'fred', start=start, end=end)
us_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)
us_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS')
recessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS', how='last').values[:,0]
# Construct the dataframe
dta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)
dta.columns = ['US GNP','US Prices','US monetary base']
dates = dta.index._mpl_repr()
```
To get a sense of these three variables over the timeframe, we can plot them:
```
# Plot the data
ax = dta.plot(figsize=(13,3))
ylim = ax.get_ylim()
ax.xaxis.grid()
ax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);
```
## Model
Since the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:
$$
y_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{c_{t}}_{\text{cycle}} + \underbrace{\varepsilon_t}_{\text{irregular}}
$$
The irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:
1. Local linear trend (the "unrestricted" model)
2. Smooth trend (the "restricted" model, since we are forcing $\sigma_\eta = 0$)
Below, we construct `kwargs` dictionaries for each of these model types. Notice that there are two ways to specify the models: one is to specify the components directly, as in the table above; the other is to use string names which map to various specifications.
```
# Model specifications
# Unrestricted model, using string specification
unrestricted_model = {
'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Unrestricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# local linear trend model with a stochastic damped cycle:
# unrestricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
# The restricted model forces a smooth trend
restricted_model = {
'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
}
# Restricted model, setting components directly
# This is an equivalent, but less convenient, way to specify a
# smooth trend model with a stochastic damped cycle. Notice
# that the difference from the local linear trend model is that
# `stochastic_level=False` here.
# unrestricted_model = {
# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,
# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True
# }
```
We now fit the following models:
1. Output, unrestricted model
2. Prices, unrestricted model
3. Prices, restricted model
4. Money, unrestricted model
5. Money, restricted model
```
# Output
output_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model)
output_res = output_mod.fit(method='powell', disp=False)
# Prices
prices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model)
prices_res = prices_mod.fit(method='powell', disp=False)
prices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)
prices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)
# Money
money_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **unrestricted_model)
money_res = money_mod.fit(method='powell', disp=False)
money_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model)
money_restricted_res = money_restricted_mod.fit(method='powell', disp=False)
```
Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the `summary` method on the fit object.
```
print(output_res.summary())
```
For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.
The `plot_components` method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.
```
fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));
```
Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. The values we find are broadly consistent with, but different in the particulars from, the values from their table.
```
# Create Table I
table_i = np.zeros((5,6))
start = dta.index[0]
end = dta.index[-1]
time_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)
models = [
('US GNP', time_range, 'None'),
('US Prices', time_range, 'None'),
('US Prices', time_range, r'$\sigma_\eta^2 = 0$'),
('US monetary base', time_range, 'None'),
('US monetary base', time_range, r'$\sigma_\eta^2 = 0$'),
]
index = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])
parameter_symbols = [
r'$\sigma_\zeta^2$', r'$\sigma_\eta^2$', r'$\sigma_\kappa^2$', r'$\rho$',
r'$2 \pi / \lambda_c$', r'$\sigma_\varepsilon^2$',
]
i = 0
for res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):
if res.model.stochastic_level:
(sigma_irregular, sigma_level, sigma_trend,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
else:
(sigma_irregular, sigma_level,
sigma_cycle, frequency_cycle, damping_cycle) = res.params
sigma_trend = np.nan
period_cycle = 2 * np.pi / frequency_cycle
table_i[i, :] = [
sigma_level*1e7, sigma_trend*1e7,
sigma_cycle*1e7, damping_cycle, period_cycle,
sigma_irregular*1e7
]
i += 1
pd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')
table_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)
table_i
```
# Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
**Instructions:**
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
**You will learn to:**
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
## 1 - Packages ##
First, let's run the cell below to import all the packages that you will need during this assignment.
- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
```
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
```
## 2 - Overview of the Problem set ##
**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
```
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
```
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
```
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
**Exercise:** Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
```
### START CODE HERE ### (≈ 3 lines of code)
m_train = None
m_test = None
num_px = None
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
```
**Expected Output for m_train, m_test and num_px**:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:
```python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
```
```
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
```
**Expected Output**:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler, more convenient, and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
```
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
```
<font color='blue'>
**What you need to remember:**
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)
- "Standardize" the data
## 3 - General Architecture of the learning algorithm ##
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!**
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
**Mathematical expression of the algorithm**:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
**Key steps**:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
## 4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call `model()`.
### 4.1 - Helper functions
**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
```
**Expected Output**:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
### 4.2 - Initializing parameters
**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
```
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
    w = np.zeros((dim, 1))
    b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
```
**Expected Output**:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
### 4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.
**Hints**:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})\right]$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
```
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
    A = sigmoid(np.dot(w.T, X) + b)                                    # compute activation
    cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))    # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
    dw = 1 / m * np.dot(X, (A - Y).T)
    db = 1 / m * np.sum(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99993216]
[ 1.99980262]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.499935230625 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 6.000064773192205</td>
</tr>
</table>
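As a quick sanity check that is not part of the original assignment, you can compare the analytical gradients returned by `propagate` with finite-difference approximations on the same toy inputs. The sketch below assumes `propagate` and `sigmoid` are implemented as described above; the `_chk` variable names are purely illustrative.

```python
# Hedged sketch: finite-difference check of the gradients returned by propagate().
def numeric_grads(w, b, X, Y, eps=1e-7):
    """Central-difference approximations of dJ/dw and dJ/db."""
    dw_num = np.zeros_like(w, dtype=float)
    for k in range(w.shape[0]):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[k, 0] += eps
        w_minus[k, 0] -= eps
        dw_num[k, 0] = (propagate(w_plus, b, X, Y)[1] - propagate(w_minus, b, X, Y)[1]) / (2 * eps)
    db_num = (propagate(w, b + eps, X, Y)[1] - propagate(w, b - eps, X, Y)[1]) / (2 * eps)
    return dw_num, db_num

w_chk, b_chk = np.array([[1.], [2.]]), 2.
X_chk, Y_chk = np.array([[1., 2.], [3., 4.]]), np.array([[1., 0.]])
grads_chk, _ = propagate(w_chk, b_chk, X_chk, Y_chk)
dw_num, db_num = numeric_grads(w_chk, b_chk, X_chk, Y_chk)
print(np.allclose(grads_chk["dw"], dw_num), np.isclose(grads_chk["db"], db_num))  # True True
```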
### 4.4 - Optimization
- You have initialized your parameters.
- You are also able to compute a cost function and its gradient.
- Now, you want to update the parameters using gradient descent.
**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
```
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
        grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
        w = w - learning_rate * dw
        b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
        # Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.1124579 ]
[ 0.23106775]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.55930492484 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.90158428]
[ 1.76250842]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.430462071679 </td>
</tr>
</table>
**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:
1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
```
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
        if A[0, i] > 0.5:
            Y_prediction[0, i] = 1
        else:
            Y_prediction[0, i] = 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
print ("predictions = " + str(predict(w, b, X)))
```
**Expected Output**:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1.]]
</td>
</tr>
</table>
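If you prefer the vectorized route mentioned in the exercise, the thresholding loop can be replaced by a single comparison. The function below is an illustrative alternative, not the graded solution; it reuses the `w`, `b`, `X` from the test cell above.

```python
# Hedged sketch: a fully vectorized predict(), equivalent to the loop-based version above.
def predict_vectorized(w, b, X):
    w = w.reshape(X.shape[0], 1)
    A = sigmoid(np.dot(w.T, X) + b)     # probabilities, shape (1, m)
    return (A > 0.5).astype(float)      # 1.0 where P(cat) > 0.5, else 0.0

print("predictions = " + str(predict_vectorized(w, b, X)))  # expected: [[1. 1.]]
```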
<font color='blue'>
**What to remember:**
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
## 5 - Merge all functions into a model ##
You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.
**Exercise:** Implement the model function. Use the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
```
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
    w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
```
Run the following cell to train your model.
```
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. That is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
```
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
```
Let's also plot the cost function and the gradients.
```
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
```
**Interpretation**:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
## 6 - Further analysis (optional/ungraded exercise) ##
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
#### Choice of learning rate ####
**Reminder**:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free to try values other than the three we have initialized the `learning_rates` variable with, and see what happens.
```
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
```
**Interpretation**:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
## 7 - Test with your own image (optional/ungraded exercise) ##
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go to your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
```
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
```
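Note that `scipy.ndimage.imread` and `scipy.misc.imresize` were removed in newer SciPy releases, so the cell above only runs in the older environment this notebook was written for. A rough equivalent using PIL (already imported above) might look like the sketch below; unlike the original cell, it also rescales pixel values to [0, 1] so the input matches the training preprocessing. The file name is just the placeholder from the cell above.

```python
# Hedged sketch: the same preprocessing with PIL instead of the removed SciPy helpers.
fname = "images/" + "my_image.jpg"              # change this to your own file name
pil_image = Image.open(fname).convert("RGB")
image = np.array(pil_image)                     # original-size image, kept for display
resized = pil_image.resize((num_px, num_px))    # (width, height) = (64, 64)
my_image = np.array(resized).reshape((1, num_px * num_px * 3)).T / 255.
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" +
      classes[int(np.squeeze(my_predicted_image))].decode("utf-8") + "\" picture.")
```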
<font color='blue'>
**What to remember from this assignment:**
1. Preprocessing the dataset is important.
2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().
3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course!
Finally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. Once you submit, things you can play with include:
- Play with the learning rate and the number of iterations
- Try different initialization methods and compare the results
- Test other preprocessings (center the data, or divide each row by its standard deviation)
Bibliography:
- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/
- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c
```
import pandas as pd
import numpy as np
import os
os.chdir('/Users/gianni/Google Drive/Bas Zahy Gianni - Games/Data')
# column name lists: oc = columns in the raw files, fc = columns kept in the full export,
# mc = columns kept in the model-ready export
oc = [
'index', 'subject', 'color', 'gi', 'mi',
'status', 'bp', 'wp', 'response', 'rt',
'time', 'mouse_t', 'mouse_x'
]
fc = [
'subject', 'is_comp', 'color', 'status',
'bp', 'wp', 'response', 'rt', 'gi', 'mi',
'computer', 'human', 'time'
]
mc = ['subject', 'color', 'bp', 'wp', 'response', 'rt', 'condition']
class Data():
""" Data is the primary object for holding experimental data. It also contains functions
for the loading, cleaning, augmentation, and export of the data tables. """
def __init__(self, folder):
self.data = self.load(folder)
def load_file(self, folder, file_name, mouse=False):
""" Initial preparation of data for individual files """
print(file_name[:-4])
        # load file, drop nuisance columns, remove non-observations
drop_cols = ['index'] if mouse else ['index', 'mouse_t', 'mouse_x']
data = pd.read_csv(folder + file_name, names=oc).drop(drop_cols, axis=1)
drop_status = (data.status != 'dummy') & (data.status != 'ready') & (data.status != 'draw offer')
data = data.loc[drop_status, :].copy().reset_index(drop=True)
# assign unique subject label (from filename) and create separate cols for humans and computers
sub_filter = data.rt > 0
comp_filter = data.rt == 0
first_move_filter = data.bp.map(lambda x: np.array(list(x)).astype(int).sum()==1) #(data.mi == 0) & (data.gi%2 == 0)
second_move_filter = data.bp.map(lambda x: np.array(list(x)).astype(int).sum()==2) #(data.mi == 1) & (data.gi%2 == 0)
condition_filter = (data.rt>0)&(data.status == 'playing')
data.loc[condition_filter, 'condition'] = data.loc[condition_filter, 'subject'].map(lambda x: x[-1])
data.loc[:, 'condition'] = data.loc[:, 'condition'].fillna(method='ffill')
data.loc[data.rt > 0, 'subject'] = file_name[:-4]
data.loc[:, 'human'] = file_name[:-4]
data.loc[:, 'computer'] = np.nan
data.loc[comp_filter, 'computer'] = data.loc[comp_filter, 'subject']
data.loc[first_move_filter, 'computer'] = data.loc[second_move_filter, 'computer']
data.loc[:, 'computer'] = data.loc[:, 'computer'].fillna(method='ffill')
data.loc[0, 'computer'] = data.loc[1, 'computer']
return data
def load(self, folder):
""" Calls other functions to corrale data and some support information """
self.exp_name = folder
files = os.listdir(folder + '/Raw/')
files = [f for f in files if f[-3:] == 'csv']
# files =[f for f in files if f[:-4] != 'HH']
self.subjects = [f[:-4] for f in files]
self.subject_dict = dict(zip(self.subjects, np.arange(len(self.subjects))))
data = pd.concat([self.load_file(folder + '/Raw/', f) for f in files])
data = data.reset_index(drop=True)
data = self.clean(data)
return data
def clean(self, df):
""" Performs further cleaning that can be done on all data collectively """
# anonymize subjects
sub_filter = df.rt > 0 # filter computers out
df.loc[sub_filter, 'subject'] = df.loc[sub_filter, 'subject'].map(self.subject_dict)
df.loc[:, 'human'] = df.loc[:, 'human'].map(self.subject_dict)
# give computers identifiable names
comp_filter = df.rt == 0
df.loc[comp_filter, 'subject'] = df.loc[comp_filter, 'subject'].astype(int) + 1000
df.loc[pd.notnull(df.computer), 'computer'] = df.loc[pd.notnull(df.computer), 'computer'].astype(int) + 1000
# force remove response from board
for i in df.loc[df.status != 'EVAL', :].index.values:
if df.loc[i,"color"] == 0:
l = list(df.loc[i,"bp"])
l[df.loc[i, "response"]] = '0'
df.loc[i,"bp"] = ''.join(l)
else:
l = list(df.loc[i,"wp"])
l[df.loc[i,"response"]] = '0'
df.loc[i,"wp"] = ''.join(l)
# force correct colors
count_pieces = lambda x: np.array([np.array(list(df.loc[i, x])).astype(int).sum() for i in df.index.values])
df.loc[:, 'color'] = count_pieces('bp') - count_pieces('wp')
df.loc[:, 'color'] = df.loc[:, 'color'].astype(int).astype(str)
# add is_comp
is_computer = lambda x: "0" if x > 0 else "1"
df.loc[:, 'is_comp'] = df.loc[:, 'rt'].map(is_computer)
# correct move index in games
        df.loc[df.status.isin(['playing', 'win', 'draw', 'timeout']), 'mi'] = df.loc[df.status.isin(['playing', 'win', 'draw', 'timeout']), 'mi'] - 1
return df
def export_individuals(self, folder):
for s, i in self.subject_dict.items():
c = self.data.human == i
d = self.data.loc[c, :].reset_index(drop=True)
            d = d.reindex(columns=fc)  # the original called d.reindex_axis(self.full_output_columns, axis=1), which is undefined here; fc is assumed, and reindex_axis is deprecated
d.to_csv(folder + '/Clean/' + s + '.csv', index=False)
return None
def export(self, folder):
f = folder + 'Clean/_summaries/'
E = self.data.loc[self.data.status.isin(['playing', 'win', 'draw', 'timeout']), :]
E.loc[:, fc].to_csv(f + 'all_fields.csv', index=False)
E.loc[:, mc].to_csv(f + 'model_fields.csv', index=False)
return None
D = Data('./5_tai')
D.export('./5_tai/')
```
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment! It's time to build your first neural network, which will have one hidden layer. Now, you'll notice a big difference between this model and the one you implemented previously using logistic regression.
By the end of this assignment, you'll be able to:
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## Table of Contents
- [1 - Packages](#1)
- [2 - Load the Dataset](#2)
- [Exercise 1](#ex-1)
- [3 - Simple Logistic Regression](#3)
- [4 - Neural Network model](#4)
- [4.1 - Defining the neural network structure](#4-1)
- [Exercise 2 - layer_sizes](#ex-2)
- [4.2 - Initialize the model's parameters](#4-2)
- [Exercise 3 - initialize_parameters](#ex-3)
- [4.3 - The Loop](#4-3)
- [Exercise 4 - forward_propagation](#ex-4)
- [4.4 - Compute the Cost](#4-4)
- [Exercise 5 - compute_cost](#ex-5)
- [4.5 - Implement Backpropagation](#4-5)
- [Exercise 6 - backward_propagation](#ex-6)
- [4.6 - Update Parameters](#4-6)
- [Exercise 7 - update_parameters](#ex-7)
- [4.7 - Integration](#4-7)
- [Exercise 8 - nn_model](#ex-8)
- [5 - Test the Model](#5)
- [5.1 - Predict](#5-1)
- [Exercise 9 - predict](#ex-9)
- [5.2 - Test the Model on the Planar Dataset](#5-2)
- [6 - Tuning hidden layer size (optional/ungraded exercise)](#6)
- [7- Performance on other datasets](#7)
<a name='1'></a>
# 1 - Packages
First import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
```
# Package imports
import numpy as np
import copy
import matplotlib.pyplot as plt
from testCases_v2 import *
from public_tests import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(2) # set a seed so that the results are consistent
%load_ext autoreload
%autoreload 2
```
<a name='2'></a>
# 2 - Load the Dataset
Now, load the dataset you'll be working on. The following code will load a "flower" 2-class dataset into variables X and Y.
```
X, Y = load_planar_dataset()
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
First, get a better sense of what your data is like.
<a name='ex-1'></a>
### Exercise 1
How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
# (≈ 3 lines of code)
# shape_X = ...
# shape_Y = ...
# training set size
# m = ...
# YOUR CODE STARTS HERE
shape_X = X.shape
shape_Y = Y.shape
m = Y.shape[1]
# YOUR CODE ENDS HERE
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
print(X.shape[0])
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> shape of X </td>
<td> (2, 400) </td>
</tr>
<tr>
<td>shape of Y</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>m</td>
<td> 400 </td>
</tr>
</table>
<a name='3'></a>
## 3 - Simple Logistic Regression
Before building a full neural network, let's check how logistic regression performs on this problem. You can use sklearn's built-in functions for this. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
```
You can now plot the decision boundary of these models! Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
print(X.shape)
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>Accuracy</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
<a name='4'></a>
## 4 - Neural Network model
Logistic regression didn't work well on the flower dataset. Next, you're going to train a Neural Network with a single hidden layer and see how that handles the same problem.
**The model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
In practice, you'll often build helper functions to compute steps 1-3, then merge them into one function called `nn_model()`. Once you've built `nn_model()` and learned the right parameters, you can make predictions on new data.
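Before writing those helper functions, it may help to see equations $(1)$–$(5)$ spelled out directly in numpy for a single example. The sketch below is purely illustrative: the layer sizes and random weights are made up and are not the ones the assignment uses.

```python
# Hedged sketch: equations (1)-(5) for one example, with made-up sizes and weights.
import numpy as np

np.random.seed(0)
n_x, n_h, n_y = 2, 4, 1                    # input, hidden and output sizes (illustrative)
x  = np.random.randn(n_x, 1)               # one example, shape (n_x, 1)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))

z1 = np.dot(W1, x) + b1                    # (1)
a1 = np.tanh(z1)                           # (2)
z2 = np.dot(W2, a1) + b2                   # (3)
a2 = 1 / (1 + np.exp(-z2))                 # (4): sigmoid
y_hat = 1 if a2.item() > 0.5 else 0        # (5): threshold at 0.5
print(a2, y_hat)
```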
<a name='4-1'></a>
### 4.1 - Defining the neural network structure ####
<a name='ex-2'></a>
### Exercise 2 - layer_sizes
Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
#(≈ 3 lines of code)
# n_x = ...
# n_h = ...
# n_y = ...
# YOUR CODE STARTS HERE
n_x = X.shape[0]
n_h = 4
n_y = Y.shape[0]
# YOUR CODE ENDS HERE
return (n_x, n_h, n_y)
t_X, t_Y = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(t_X, t_Y)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
layer_sizes_test(layer_sizes)
```
***Expected output***
```
The size of the input layer is: n_x = 5
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 2
```
<a name='4-2'></a>
### 4.2 - Initialize the model's parameters ####
<a name='ex-3'></a>
### Exercise 3 - initialize_parameters
Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
# YOUR CODE ENDS HERE
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
initialize_parameters_test(initialize_parameters)
```
**Expected output**
```
W1 = [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]]
b2 = [[0.]]
```
<a name='4-3'></a>
### 4.3 - The Loop
<a name='ex-4'></a>
### Exercise 4 - forward_propagation
Implement `forward_propagation()` using the following equations:
$$Z^{[1]} = W^{[1]} X + b^{[1]}\tag{1}$$
$$A^{[1]} = \tanh(Z^{[1]})\tag{2}$$
$$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}\tag{3}$$
$$\hat{Y} = A^{[2]} = \sigma(Z^{[2]})\tag{4}$$
**Instructions**:
- Check the mathematical representation of your classifier in the figure above.
- Use the function `sigmoid()`. It's built into (imported) this notebook.
- Use the function `np.tanh()`. It's part of the numpy library.
- Implement using these steps:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()` by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "cache". The cache will be given as an input to the backpropagation function.
```
# GRADED FUNCTION:forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
# YOUR CODE ENDS HERE
# Implement Forward Propagation to calculate A2 (probabilities)
# (≈ 4 lines of code)
# Z1 = ...
# A1 = ...
# Z2 = ...
# A2 = ...
# YOUR CODE STARTS HERE
Z1 = np.dot(W1,X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2,A1) + b2
A2 = sigmoid(Z2)
# YOUR CODE ENDS HERE
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
t_X, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(t_X, parameters)
print("A2 = " + str(A2))
forward_propagation_test(forward_propagation)
```
***Expected output***
```
A2 = [[0.21292656 0.21274673 0.21295976]]
```
<a name='4-4'></a>
### 4.4 - Compute the Cost
Now that you've computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for all examples, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
<a name='ex-5'></a>
### Exercise 5 - compute_cost
Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. This is one way to implement one part of the equation without for loops:
$- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs)
```
- Use that to build the whole expression of the cost function.
**Notes**:
- You can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
- If you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array.
- You can use `np.squeeze()` to remove redundant dimensions (in the case of a single float, this will be reduced to a zero-dimension array).
- You can also cast the array as a type `float` using `float()`.
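The two routes mentioned in the notes give the same number; here is a tiny illustrative comparison with made-up activations and labels (the `_demo` names are not part of the assignment):

```python
# Hedged sketch: np.multiply + np.sum vs. np.dot for the cross-entropy cost.
A2_demo = np.array([[0.8, 0.3, 0.6]])      # made-up activations
Y_demo  = np.array([[1,   0,   1  ]])      # made-up labels
m_demo  = Y_demo.shape[1]

logprobs = np.multiply(np.log(A2_demo), Y_demo) + np.multiply(np.log(1 - A2_demo), 1 - Y_demo)
cost_sum = -np.sum(logprobs) / m_demo                               # a plain float

cost_dot = -(np.dot(Y_demo, np.log(A2_demo).T) +
             np.dot(1 - Y_demo, np.log(1 - A2_demo).T)) / m_demo    # a (1, 1) array
print(cost_sum, float(np.squeeze(cost_dot)))                        # same value twice
```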
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
# (≈ 2 lines of code)
# logprobs = ...
# cost = ...
# YOUR CODE STARTS HERE
cost = (-1/m)*(np.dot(Y, np.log(A2).T) + np.dot(1-Y, np.log(1-A2).T))
# YOUR CODE ENDS HERE
cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
return cost
A2, t_Y = compute_cost_test_case()
cost = compute_cost(A2, t_Y)
print("cost = " + str(compute_cost(A2, t_Y)))
compute_cost_test(compute_cost)
```
***Expected output***
`cost = 0.6930587610394646`
<a name='4-5'></a>
### 4.5 - Implement Backpropagation
Using the cache computed during forward propagation, you can now implement backward propagation.
<a name='ex-6'></a>
### Exercise 6 - backward_propagation
Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<caption><center><font color='purple'><b>Figure 1</b>: Backpropagation. Use the six equations on the right.</font></center></caption>
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
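A quick way to convince yourself of the tanh-derivative identity used in the tip is a finite-difference check; the snippet below is purely illustrative and not part of the graded code.

```python
# Hedged sketch: check that d/dz tanh(z) = 1 - tanh(z)**2 numerically.
z_demo = np.linspace(-2, 2, 5)
eps = 1e-6
numeric = (np.tanh(z_demo + eps) - np.tanh(z_demo - eps)) / (2 * eps)
analytic = 1 - np.power(np.tanh(z_demo), 2)
print(np.allclose(numeric, analytic))       # True
```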
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
#(≈ 2 lines of code)
# W1 = ...
# W2 = ...
# YOUR CODE STARTS HERE
W1 = parameters['W1']
W2 = parameters['W2']
# YOUR CODE ENDS HERE
# Retrieve also A1 and A2 from dictionary "cache".
#(≈ 2 lines of code)
# A1 = ...
# A2 = ...
# YOUR CODE STARTS HERE
A1 = cache['A1']
A2 = cache['A2']
# YOUR CODE ENDS HERE
# Backward propagation: calculate dW1, db1, dW2, db2.
#(≈ 6 lines of code, corresponding to 6 equations on slide above)
# dZ2 = ...
# dW2 = ...
# db2 = ...
# dZ1 = ...
# dW1 = ...
# db1 = ...
# YOUR CODE STARTS HERE
dZ2 = A2 - Y
dW2 = (1/m)*(np.dot(dZ2, A1.T))
db2 = (1/m)*(np.sum(dZ2, axis = 1, keepdims=True))
dZ1 = np.dot(W2.T, dZ2)*(1 - np.power(A1, 2))
dW1 = (1/m)*(np.dot(dZ1, X.T))
db1 = (1/m)*(np.sum(dZ1, axis = 1, keepdims=True))
# YOUR CODE ENDS HERE
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, t_X, t_Y = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, t_X, t_Y)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
backward_propagation_test(backward_propagation)
```
***Expected output***
```
dW1 = [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]]
db1 = [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]]
dW2 = [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]]
db2 = [[-0.16655712]]
```
<a name='4-6'></a>
### 4.6 - Update Parameters
<a name='ex-7'></a>
### Exercise 7 - update_parameters
Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $\theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
<caption><center><font color='purple'><b>Figure 2</b>: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.</font></center></caption>
**Hint**
- Use `copy.deepcopy(...)` when copying lists or dictionaries that are passed as parameters to functions. It avoids input parameters being modified within the function. In some scenarios, this could be inefficient, but it is required for grading purposes.
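To see why the hint recommends `copy.deepcopy`, note that dictionaries and numpy arrays are passed by reference, so an in-place update would silently modify the caller's `parameters`. The tiny illustration below uses made-up numbers and hypothetical function names.

```python
# Hedged sketch: why copying matters when a dict of arrays is passed into a function.
import copy

def bad_update(params):
    W = params["W1"]                 # same array object as the caller's
    W -= 1.0                         # in-place update leaks back to the caller
    return {"W1": W}

def safe_update(params):
    W = copy.deepcopy(params["W1"])  # independent copy
    W -= 1.0
    return {"W1": W}

original = {"W1": np.array([[1.0, 2.0]])}
bad_update(original)
print(original["W1"])                # [[0. 1.]] -- the caller's dict was modified

original = {"W1": np.array([[1.0, 2.0]])}
safe_update(original)
print(original["W1"])                # [[1. 2.]] -- unchanged
```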
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve a copy of each parameter from the dictionary "parameters". Use copy.deepcopy(...) for W1 and W2
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# YOUR CODE ENDS HERE
# Retrieve each gradient from the dictionary "grads"
#(≈ 4 lines of code)
# dW1 = ...
# db1 = ...
# dW2 = ...
# db2 = ...
# YOUR CODE STARTS HERE
dW1 = grads["dW1"]
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
# YOUR CODE ENDS HERE
# Update rule for each parameter
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1 = W1 - learning_rate*dW1
b1 = b1 - learning_rate*db1
W2 = W2 - learning_rate*dW2
b2 = b2 - learning_rate*db2
# YOUR CODE ENDS HERE
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
update_parameters_test(update_parameters)
```
***Expected output***
```
W1 = [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]
b1 = [[-1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[-3.20136836e-06]]
W2 = [[-0.01041081 -0.04463285 0.01758031 0.04747113]]
b2 = [[0.00010457]]
```
<a name='4-7'></a>
### 4.7 - Integration
Integrate your functions in `nn_model()`
<a name='ex-8'></a>
### Exercise 8 - nn_model
Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters
#(≈ 1 line of code)
# parameters = ...
# YOUR CODE STARTS HERE
parameters = initialize_parameters(n_x, n_h, n_y)
# YOUR CODE ENDS HERE
# Loop (gradient descent)
for i in range(0, num_iterations):
#(≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
# A2, cache = ...
# Cost function. Inputs: "A2, Y". Outputs: "cost".
# cost = ...
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
# grads = ...
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
# parameters = ...
# YOUR CODE STARTS HERE
A2, cache = forward_propagation(X, parameters)
cost = compute_cost(A2, Y)
grads = backward_propagation(parameters, cache, X, Y)
parameters = update_parameters(parameters, grads)
# YOUR CODE ENDS HERE
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
t_X, t_Y = nn_model_test_case()
parameters = nn_model(t_X, t_Y, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
nn_model_test(nn_model)
```
***Expected output***
```
Cost after iteration 0: 0.692739
Cost after iteration 1000: 0.000218
Cost after iteration 2000: 0.000107
...
Cost after iteration 8000: 0.000026
Cost after iteration 9000: 0.000023
W1 = [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]
b1 = [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]]
W2 = [[-2.45566237 -3.27042274 2.00784958 3.36773273]]
b2 = [[0.20459656]]
```
<a name='5'></a>
## 5 - Test the Model
<a name='5-1'></a>
### 5.1 - Predict
<a name='ex-9'></a>
### Exercise 9 - predict
Predict with your model by building `predict()`.
Use forward propagation to predict results.
**Reminder**: predictions = $y_{prediction} = \mathbb{1}_{\{activation > 0.5\}} = \begin{cases} 1 & \text{if } activation > 0.5 \\ 0 & \text{otherwise} \end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
```
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
#(≈ 2 lines of code)
# A2, cache = ...
# predictions = ...
# YOUR CODE STARTS HERE
A2, cache = forward_propagation(X, parameters)
predictions = (A2 > 0.5)
# YOUR CODE ENDS HERE
return predictions
parameters, t_X = predict_test_case()
predictions = predict(parameters, t_X)
print("Predictions: " + str(predictions))
predict_test(predict)
```
***Expected output***
```
Predictions: [[ True False True]]
```
<a name='5-2'></a>
### 5.2 - Test the Model on the Planar Dataset
It's time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units!
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size) * 100) + '%')
```
**Expected Output**:
<table style="width:30%">
<tr>
<td><b>Accuracy</b></td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learned the patterns of the flower's petals! Unlike logistic regression, neural networks are able to learn even highly non-linear decision boundaries.
### Congrats on finishing this Programming Assignment!
Here's a quick recap of all you just accomplished:
- Built a complete 2-class classification neural network with a hidden layer
- Made good use of a non-linear unit
- Computed the cross entropy loss
- Implemented forward and backward propagation
- Seen the impact of varying the hidden layer size, including overfitting.
You've created a neural network that can learn patterns! Excellent work. Below, there are some optional exercises to try out some other hidden layer sizes, and other datasets.
<a name='6'></a>
## 6 - Tuning hidden layer size (optional/ungraded exercise)
Run the following code (it may take 1-2 minutes). Then, observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- Later, you'll become familiar with regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
**Some optional/ungraded questions that you can explore if you wish**:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation? (A sketch of the ReLU swap follows this list.)
- Play with the learning_rate. What happens?
- What if we change the dataset? (See section 7 below!)
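For the first question, only two places need to change: the hidden-layer activation in `forward_propagation` and the corresponding derivative term in `backward_propagation`. The sketch below is a hedged illustration of the ReLU version (not graded, and the helper names are made up):

```python
# Hedged sketch: what changes if the hidden layer uses ReLU instead of tanh.
def relu(z):
    return np.maximum(0, z)

def relu_derivative(z):
    return (z > 0).astype(float)

# forward_propagation:  A1 = np.tanh(Z1)                              ->  A1 = relu(Z1)
# backward_propagation: dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
#                                                                     ->  dZ1 = np.dot(W2.T, dZ2) * relu_derivative(cache["Z1"])
# (Z1 is already stored in the cache, so no other change is needed.)
```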
<a name='7'></a>
## 7- Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
```
**References**:
- http://scs.ryerson.ca/~aharley/neural-networks/
- http://cs231n.github.io/neural-networks-case-study/
```
import numpy as np
import datetime
import math  # used by algorithm() below for math.sqrt
import matplotlib.pyplot as plt
from PIL import Image
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from numpy.linalg import norm
from sklearn.feature_extraction import image
import warnings
warnings.filterwarnings("ignore")
def get_B_and_weight_vec(n_nodes,threshold,sigma=1):
'''
Generate graph structure from the image to be segmented.
Inputs:
n_nodes: number of nodes, i.e. number of pixels
threshold: threshold to drop edges with small weights (weak similarities)
sigma: parameter to scale edge weights
Outputs:
B: Incidence matrix
Weight_vec: edge_wise weights
'''
N = n_nodes
row = []
col = []
data = []
weight_vec = []
cnt = 0
#
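    # Connect each pixel to its right neighbour (i+1) and to the pixel directly
    # below it (i+100, since the resized image is 100 pixels wide); 2900 = 29*100
    # is the total number of pixels. Note: this loop reads the global variable `img`.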
for i in range(N):
for j in [i+1,i+100]:
if j>=2900:
continue
if np.exp(-norm(img[i]-img[j])/(2*sigma**2)) > threshold:
row.append(cnt)
col.append(i)
data.append(1)
row.append(cnt)
col.append(j)
data.append(-1)
cnt += 1
weight_vec.append(np.exp(-norm(img[i]-img[j])/(2*sigma**2)))
B = csr_matrix((data, (row, col)), shape=(cnt, N))
weight_vec = np.array(weight_vec)
return B, weight_vec
def algorithm(B, weight_vec, seeds,K=15000,alpha=0.02, lambda_nLasso=None, check_s=False):
E, N = B.shape
# weight_vec = np.ones(E)
Gamma_vec = np.array(1./(np.sum(abs(B), 0)))[0] # \in [0, 1]
Gamma = np.diag(Gamma_vec)
Sigma = 0.5
seednodesindicator= np.zeros(N)
seednodesindicator[seeds] = 1
noseednodeindicator = np.ones(N)
noseednodeindicator[seeds] = 0
if lambda_nLasso == None:
lambda_nLasso = 2 / math.sqrt(np.sum(weight_vec))
if check_s:
s = 0.0
for item in range(len(weight_vec)):
x = B[item].toarray()[0]
i = np.where(x == -1)[0][0]
j = np.where(x == 1)[0][0]
if i < N1 <= j:
s += weight_vec[item]
elif i >= N1 > j:
s += weight_vec[item]
if lambda_nLasso * s >= alpha * N2 / 2:
print ('eq(24)', lambda_nLasso * s, alpha * N2 / 2)
fac_alpha = 1./(Gamma_vec*alpha+1) # \in [0, 1]
hatx = np.zeros(N)
newx = np.zeros(N)
prevx = np.zeros(N)
haty = np.array([x/(E-1) for x in range(0, E)])
history = []
for iterk in range(K):
# if 0 < np.max(abs(newx - prevx)) < 1e-4:
# print(iterk)
# break
tildex = 2 * hatx - prevx
newy = haty + Sigma * B.dot(tildex) # chould be negative
haty = newy / np.maximum(abs(newy) / (lambda_nLasso * weight_vec), np.ones(E)) # could be negative
newx = hatx - Gamma_vec * B.T.dot(haty) # could be negative
newx[seeds] = (newx[seeds] + Gamma_vec[seeds]) / (1 + Gamma_vec[seeds])
newx = seednodesindicator * newx + noseednodeindicator * (newx * fac_alpha)
prevx = np.copy(hatx)
hatx = newx # could be negative
history.append(newx)
history = np.array(history)
return history
#load the image
img=Image.open("stripes.png")
```
# Preprocess the image
```
# resize the image
basewidth = 100
wpercent = (basewidth / float(img.size[0]))
hsize = int((float(img.size[1]) * float(wpercent)))
img = img.resize((basewidth, hsize), Image.ANTIALIAS)  # note: ANTIALIAS is named LANCZOS in newer Pillow versions
img = np.array(img)[:,:,:3]
print(img.shape)
plt.imshow(img)
img = img.reshape(-1,3)
img.shape
```
# Perform the segmentation task via Kmeans
```
kmeans = KMeans(n_clusters=2).fit(img)
plt.imshow(kmeans.labels_.reshape(29,100))
```
# Perform the task via our algorithm
```
# generate graph from image
img = img.reshape(-1,3)/255
n_nodes=img.shape[0]
print("number of nodes:",n_nodes )
B,weight=get_B_and_weight_vec(n_nodes,0.2,1)
# plt.hist(weight,bins=30) #distribution of similarity measure
def run_seg(n_nodes,seeds,threshold, K=30, alpha=0.1, lambda_nLasso=0.1):
B, weight_vec = get_B_and_weight_vec(n_nodes,threshold)
start = datetime.datetime.now()
history = algorithm(B, weight_vec, seeds=seeds, K=K, alpha=alpha, lambda_nLasso=lambda_nLasso)
print('our method time: ', datetime.datetime.now() - start)
return history
# generate seeds according to the labels assigned by kmeans
seeds = np.random.choice(np.where(kmeans.labels_==0)[0],20)
# run our algorithm and visualize the result before feeding it to kmeans
history = run_seg(n_nodes=n_nodes,seeds=seeds,threshold = 0.95, K=1000,alpha=0.01, lambda_nLasso=1)
plt.imshow(history[-1].reshape(29,100))
# Feed the node signal from our algorithm to kmeans to complete clustering (2 clusters)
history=np.nan_to_num(history)
kmeans = KMeans(n_clusters=2).fit(history[-1].reshape(len(history[-1]), 1))
# visualize the segmentation result
segmented = kmeans.labels_
plt.imshow(segmented.reshape((29,100)))
```
# Perform the segmentation task via spectral clustering
```
from sklearn.cluster import SpectralClustering
s=SpectralClustering(2).fit(img)
plt.imshow(s.labels_.reshape(29,100))
# Python3 Program to print BFS traversal
# from a given source vertex. BFS(int s)
# traverses vertices reachable from s.
from collections import defaultdict
# This class represents a directed graph
# using adjacency list representation
class Graph:
# Constructor
def __init__(self):
# default dictionary to store graph
self.graph = defaultdict(list)
# function to add an edge to graph
def addEdge(self,u,v):
self.graph[u].append(v)
# Function to print a BFS of graph
def BFS(self, s):
# Mark all the vertices as not visited
visited = [False] * (max(self.graph) + 1)
# Create a queue for BFS
queue = []
# Mark the source node as
# visited and enqueue it
queue.append(s)
visited[s] = True
while queue:
# Dequeue a vertex from
# queue and print it
s = queue.pop(0)
print (s, end = " ")
# Get all adjacent vertices of the
# dequeued vertex s. If a adjacent
# has not been visited, then mark it
# visited and enqueue it
for i in self.graph[s]:
if visited[i] == False:
queue.append(i)
visited[i] = True
# Driver code
# Create the example graph used for the BFS demo
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)
print ("Following is Breadth First Traversal"
" (starting from vertex 2)")
g.BFS(2)
# This code is contributed by Neelam Yadav
```
# Solution Graded Exercise 1: Leaky-integrate-and-fire model
first name: Eve
last name: Rahbe
sciper: 235549
date: 21.03.2018
*Your teammate*
first name of your teammate: Antoine
last name of your teammate: Alleon
sciper of your teammate: 223333
Note: You are allowed to discuss the concepts with your class mates. You are not allowed to share code. You have to understand every line of code you write in this notebook. We will ask you questions about your submission during a fraud detection session during the last week of the semester.
If you are asked for plots: The appearance of the plots (labelled axes, useful scaling etc.) is important!
If you are asked for discussions: Answer in a precise way and try to be concise.
**Submission**
Rename this notebook to Ex2_FirstName_LastName_Sciper.ipynb and upload that single file on moodle before the deadline.
**Link to the exercise**
http://neuronaldynamics-exercises.readthedocs.io/en/stable/exercises/leaky-integrate-and-fire.html
# Exercise 2, getting started
```
%matplotlib inline
import brian2 as b2
import matplotlib.pyplot as plt
import numpy as np
from neurodynex.leaky_integrate_and_fire import LIF
from neurodynex.tools import input_factory, plot_tools
LIF.getting_started()
LIF.print_default_parameters()
```
# 2.1 Exercise: minimal current
## 2.1.1. Question: minimal current (calculation)
#### [2 points]
```
from neurodynex.leaky_integrate_and_fire import LIF
print("resting potential: {}".format(LIF.V_REST))
i_min = (LIF.FIRING_THRESHOLD-LIF.V_REST)/LIF.MEMBRANE_RESISTANCE
print("minimal current i_min: {}".format(i_min))
```
The minimal current is :
$i_{min} = \frac{\theta-u_{rest}}{R} = \frac{-50-(-70) [mV]}{10 [Mohm]} = 2 [nA]$
$\theta$ is the firing threshold
$u_{rest}$ is the resting potential
$R$ is the membrane resistance
## 2.1.2. Question: minimal current (simulation)
#### [2 points]
```
# create a step current with amplitude= i_min
step_current = input_factory.get_step_current(
t_start=5, t_end=100, unit_time=b2.ms,
amplitude= i_min) # set i_min to your value
# run the LIF model.
# Note: As we do not specify any model parameters, the simulation runs with the default values
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=step_current, simulation_time = 100 * b2.ms)
# plot I and vm
plot_tools.plot_voltage_and_current_traces(
state_monitor, step_current, title="min input", firing_threshold=LIF.FIRING_THRESHOLD)
print("nr of spikes: {}".format(spike_monitor.count[0])) # should be 0
```
# 2.2. Exercise: f-I Curve
## 2.2.1. Question: f-I Curve and refractoryness
1 - Sketch or plot the curve with some program. You don't have to include it here, it is just for your understanding and will not be graded.
2 - What is the maximum rate at which this neuron can fire?
#### [3 points]
```
# create a step current with amplitude i_max
i_max = 125 * b2.namp
step_current = input_factory.get_step_current(
t_start=5, t_end=100, unit_time=b2.ms,
amplitude=i_max)
# run the LIF model and set the absolute refractory period to 3ms
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=step_current, simulation_time = 500 * b2.ms,
abs_refractory_period=3 * b2.ms)
# number of spikes
print("nr of spikes: {}".format(spike_monitor.count[0]))
# firing frequency: the step current is on for 95 ms (from t = 5 ms to t = 100 ms)
T = 95e-03/spike_monitor.count[0]
print("T : {}".format(T))
print("firing frequency : {}".format(1/T))
```
The maximum rate at which this neuron can fire is $f = 336.84 [Hz]$.
3 - Inject currents of different amplitudes (from 0nA to 100nA) into a LIF neuron.
For each current, run the simulation for 500ms and determine the firing frequency in Hz. Then plot the f-I curve.
#### [4 points]
```
import numpy as np
import matplotlib.pyplot as plt
firing_frequency = []
# create a step current with amplitude i from 0 to 100 nA
for i in range(0,100,1) :
step_current = input_factory.get_step_current(
t_start=5, t_end=100, unit_time=b2.ms,
amplitude=i * b2.namp) # stock amplitude i from 0 to 100 nA
# run the LIF model
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=step_current, simulation_time = 500 * b2.ms,
abs_refractory_period= 3 * b2.ms)
if (spike_monitor.count[0] == 0) :
firing_frequency.append(0)
else :
# firing frequency
T = 95e-03/spike_monitor.count[0]
firing_frequency.append(1/T)
plt.xlabel("step current amplitude [nA]")
plt.ylabel("firing frequency [Hz]")
plt.title("f-I curve for step current")
plt.plot(range(0,100,1), firing_frequency)
```
# 2.3. Exercise: “Experimentally” estimate the parameters of a LIF neuron
## 2.3.1. Question: “Read” the LIF parameters out of the vm plot
#### [6 points]
My estimates for the parameters:
- Resting potential: $u_{rest} = -66 [mV]$.
- Reset potential: $u_{reset} = -63 [mV]$, the membrane potential right after a spike.
- Firing threshold: $\theta = -38 [mV]$, using $\theta = u(t_{\infty})$ with step current amplitude $i_{min}$.
- Membrane resistance: $R = \frac{\theta-u_{rest}}{i_{min}} = 12.7 [Mohm]$ with $i_{min}=2.2 [nA]$ at the firing threshold.
- Membrane time scale: $\tau = 11.5 [ms]$, the time to reach $63\%$ of the steady-state deflection with step current amplitude $i_{min}$.
- Absolute refractory period: $t = 5 [ms]$, the time before a new spike can occur after the reset of the potential.
```
# get a random parameter. provide a random seed to have a reproducible experiment
random_parameters = LIF.get_random_param_set(random_seed=432)
# define your test current
test_current = input_factory.get_step_current(
t_start=5, t_end=100, unit_time=b2.ms, amplitude= 10 * b2.namp)
# probe the neuron. pass the test current AND the random params to the function
state_monitor, spike_monitor = LIF.simulate_random_neuron(test_current, random_parameters)
# plot
plot_tools.plot_voltage_and_current_traces(state_monitor, test_current, title="experiment")
# print the parameters to the console and compare with your estimates
LIF.print_obfuscated_parameters(random_parameters)
```
# 2.4. Exercise: Sinusoidal input current and subthreshold response
## 2.4.1. Question
#### [5 points]
```
# note the higher resolution when discretizing the sine wave: we specify unit_time=0.1 * b2.ms
sinusoidal_current = input_factory.get_sinusoidal_current(200, 1000, unit_time=0.1 * b2.ms,
amplitude= 2.5 * b2.namp, frequency=250*b2.Hz,
direct_current=0. * b2.namp)
# run the LIF model. By setting the firing threshold to a high value, we make sure to stay in the linear (non-spiking) regime.
(state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(input_current=sinusoidal_current, simulation_time = 120 * b2.ms,
firing_threshold=0*b2.mV)
# plot the membrane voltage
plot_tools.plot_voltage_and_current_traces(state_monitor, sinusoidal_current, title = "Sinusoidal input current")
print("nr of spikes: {}".format(spike_monitor.count[0]))
# Calculate the amplitude of the membrane voltage
# get the difference between the min value of the voltage and the resting potential (-70 mV)
print("Amplitude of the membrane voltage : {} V" .format(abs(np.min(np.asarray(state_monitor.v))-(-0.07))))
import scipy.signal
# Calculate the phase of the membrane voltage
# interpolation of the signals
xx = np.interp(np.linspace(1,1002,1002),np.linspace(1,1200,1200),np.transpose(np.asarray(state_monitor.v))[:,0])
# correlation
corr = scipy.signal.correlate(xx,sinusoidal_current.values[:,0])
dt = np.arange(-1001,1002)
# find the max correlation
recovered_time_shift = dt[corr.argmax()]
# convert timeshift in phase between 0 and 2pi
period = 1/0.250
recovered_phase_shift = np.pi*(((0.5 + recovered_time_shift/period) %1.0)-0.5)
print("Phase shift : {}" .format(recovered_phase_shift))
```
The results are :
$ A = 2 [mV]$ (computationally and visually) and $phase = -\pi/2$ (computationally and visually).
## 2.4.2. Question
#### [5 points]
```
# For input frequencies between 10Hz and 1kHz plot the resulting amplitude of subthreshold oscillations of the
# membrane potential vs. input frequency.
amplitude = []
for i in range(15) :
sinusoidal_current = input_factory.get_sinusoidal_current(200, 1000, unit_time=0.1 * b2.ms,
amplitude= 2.5 * b2.namp, frequency=10.**(1.+i/7.)*b2.Hz,
direct_current=0. * b2.namp)
# run the LIF model.
(state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(input_current=sinusoidal_current, simulation_time = 120 * b2.ms,
firing_threshold=0*b2.mV)
amplitude.append(abs(np.min(np.asarray(state_monitor.v))-(-0.07)))
plt.xlabel("sinusoidal current frequency [Hz]")
plt.ylabel("amplitude of the membrane potential [mV]")
plt.title("Amplitude vs Input frequency")
plt.plot([10.**(1.+i/7.) for i in range(15)], amplitude)
```
## 2.4.3. Question
#### [5 points]
```
# For input frequencies between 10Hz and 1kHz
# plot the resulting phase shift of subthreshold oscillations of the membrane potential vs. input frequency.
phase = []
for f in [10,50,100,250,500,750,1000] :
sinusoidal_current = input_factory.get_sinusoidal_current(200, 1000, unit_time=0.1 * b2.ms,
amplitude= 2.5 * b2.namp, frequency = f*b2.Hz,
direct_current=0. * b2.namp)
# run the LIF model.
(state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(input_current=sinusoidal_current, simulation_time = 120 * b2.ms,
firing_threshold=0*b2.mV)
xx = np.interp(np.linspace(1,1002,1002),np.linspace(1,1200,1200),np.transpose(np.asarray(state_monitor.v))[:,0])
corr = scipy.signal.correlate(xx,sinusoidal_current.values[:,0])
dt = np.arange(-1001,1002)
recovered_time_shift = dt[corr.argmax()]
period = 1000/f
recovered_phase_shift = np.pi*(((0.5 + recovered_time_shift/period) %1.0)-0.5)
phase.append(recovered_phase_shift)
plt.xlabel("sinusoidal current frequency [Hz]")
plt.ylabel("phase shift between membrane potential and input current")
plt.title("Phase shift of membrane potential vs Input frequency")
plt.plot([10,50,100,250,500,750,1000], phase)
```
## 2.4.4. Question
#### [3 points]
It is a **low-pass** filter because it passes low frequencies and attenuates high frequencies.
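A sketch of why this holds (not part of the graded answer): the subthreshold LIF membrane acts as a first-order RC filter, so for a drive $I(t) = I_0\sin(2\pi f t)$ the steady-state oscillation has
\begin{eqnarray}
A(f) = \frac{R I_0}{\sqrt{1+(2\pi f \tau)^2}}, \qquad \varphi(f) = -\arctan(2\pi f \tau)
\end{eqnarray}
so the amplitude decays and the phase lag approaches $-\pi/2$ as the input frequency grows, which matches the two plots above.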
# 2.5 Leaky integrate-and-fire neuron with noisy input
This exercise is not available online. All information is given here.
So far you have explored the leaky integrate-and-fire model with step and sinusoidal input currents. We will now investigate the same neuron model with noisy input.
The voltage equation now is:
\begin{eqnarray}
\tau \frac{du}{dt} = -u(t) + u_{rest} + RI(t) + RI_{noise}(t)
\end{eqnarray}
where the noise is simply an additional term.
To implement the noise term in the above equation we will consider it as 'white noise', $I_{noise}(t) = \sigma \xi(t)$. White noise $\xi$ is a stochastic process with expectation value $<\xi(t)>=0$ and autocorrelation $<\xi(t)\xi(t+\Delta)>=\delta(\Delta)$. Note that, as we saw in the Exercise set of Week 1, the $\delta$-function has units of $1/time$, so $\xi$ has units of $1/\sqrt{time}$.
It can be shown that the discrete time implementation of a noisy voltage trajectory is:
\begin{eqnarray}
du = (-u(t) + u_{rest} + RI(t))\frac{dt}{\tau} + \frac{R}{\tau}\sigma \sqrt{dt}\ y,
\end{eqnarray}
where $y \sim \mathcal{N}(0, 1)$
We can then write, again for implementational purposes:
\begin{eqnarray}
du = \big[-u(t) + u_{rest} + R(I(t) + \sigma \frac{1}{\sqrt{dt}} y) \big]\frac{dt}{\tau}
\end{eqnarray}
Note that for the physical units to be consistent $\sigma$ in our formulation has units of $current * \sqrt{time}$.
Details of the above are beyond the scope of this exercise. If you would like to get more insights we refer to the paragraph 8.1 of the book (http://neuronaldynamics.epfl.ch/online/Ch8.S1.html), to http://www.scholarpedia.org/article/Stochastic_dynamical_systems#Ornstein-Uhlenbeck_process and regarding the implementational scaling of the noise to http://brian2.readthedocs.io/en/stable/user/models.html#time-scaling-of-noise.
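For illustration, here is a minimal plain-NumPy sketch of this discrete update (the exercises below use brian2 instead; the parameter values here are placeholder assumptions):
```
import numpy as np

def simulate_noisy_lif(I, dt=1e-4, tau=10e-3, R=10e6,
                       u_rest=-70e-3, theta=-50e-3, u_reset=-65e-3,
                       sigma=1e-9 * np.sqrt(1e-3)):
    """Euler scheme for du = [-u + u_rest + R*(I + sigma*y/sqrt(dt))] * dt/tau, y ~ N(0,1)."""
    u = np.full(len(I), u_rest)
    spike_times = []
    for k in range(1, len(I)):
        noise = sigma * np.random.randn() / np.sqrt(dt)   # white-noise current term
        du = (-(u[k - 1] - u_rest) + R * (I[k - 1] + noise)) * dt / tau
        u[k] = u[k - 1] + du
        if u[k] >= theta:                                 # threshold crossing
            spike_times.append(k * dt)
            u[k] = u_reset                                # reset after the spike
    return u, spike_times

# example: a constant 1.5 nA input for 0.5 s
u, spikes = simulate_noisy_lif(np.full(5000, 1.5e-9))
```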
### 2.5.1 Noisy step input current
#### [7 points]
1 - Implement the noisy current $I_0 + I_{noise}$ as described above. In order to do this edit the function get_noisy_step_current provided below. This is simply a copy of the code of the function get_step_current that you used earlier, and you just need to add the noisy part of the current at the indicated line (indicated by "???").
Then create a noisy step current with amplitude $I_0 = 1.5nA$ and $\sigma = 1 nA* \sqrt{\text{your time unit}}$ (e.g.: time_unit = 1 ms), run the LIF model and plot the input current and the membrane potential, as you did in the previous exercises.
```
def get_noisy_step_current(t_start, t_end, unit_time, amplitude, sigma, append_zero=True):
"""Creates a step current with added noise. If t_start == t_end, then a single
entry in the values array is set to amplitude.
Args:
t_start (int): start of the step
t_end (int): end of the step
unit_time (Quantity, Time): unit of t_start and t_end. e.g. 0.1*brian2.ms
amplitude (Quantity): amplitude of the step. e.g. 3.5*brian2.uamp
sigma (float): amplitude (std) of the noise. e.g. 0.1*b2.uamp
append_zero (bool, optional): if true, 0Amp is appended at t_end+1.
Without that trailing 0, Brian reads out the last value in the array (=amplitude) for all indices > t_end.
Returns:
TimedArray: Brian2.TimedArray
"""
assert isinstance(t_start, int), "t_start_ms must be of type int"
assert isinstance(t_end, int), "t_end must be of type int"
assert b2.units.fundamentalunits.have_same_dimensions(amplitude, b2.amp), \
"amplitude must have the dimension of current e.g. brian2.uamp"
tmp_size = 1 + t_end # +1 for t=0
if append_zero:
tmp_size += 1
tmp = np.zeros((tmp_size, 1)) * b2.amp
tmp[t_start] = amplitude
    for i in range(t_start+1, t_end):
        # scale the noise by 1/sqrt(dt) so its variance is independent of the time step (time_step equals unit_time here)
        tmp[i] = amplitude + sigma*(time_step**(-0.5))*np.random.randn()
curr = b2.TimedArray(tmp, dt= unit_time)
return curr
# -------------------
amplitude = 1.5*b2.nA
time_unit = 1.*b2.ms
time_step = 1.*b2.ms
sigma = 1*b2.nA*time_unit**(0.5)
# Create a noisy step current
noisy_step_current = get_noisy_step_current(t_start=50, t_end=500, unit_time = time_step,
amplitude= amplitude, sigma = sigma)
# Run the LIF model
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_step_current, \
simulation_time = 500*b2.ms)
# plot I and vm
plot_tools.plot_voltage_and_current_traces(state_monitor, noisy_step_current, title="min input", \
firing_threshold=LIF.FIRING_THRESHOLD)
print("nr of spikes: {}".format(spike_monitor.count[0]))
```
2 - How does the neuron behave? Discuss your result. Your answer should be max 3 lines long.
The input current fluctuates randomly around its mean at each time step. The membrane voltage follows these fluctuations, and when the current stays large enough for long enough, the membrane potential reaches the firing threshold and a spike is generated.
### 2.5.2 Subthreshold vs. superthreshold regime
#### [7 + 5 = 12 points]
1 - A time-dependent input current $I(t)$ is called subthreshold if it does not lead to spiking, i.e. if it leads to a membrane potential that stays - in the absence of noise - below the firing threshold. When noise is added, however, even subthreshold stimuli can induce spikes. Input stimuli that lead to spiking even in a noise-free neuron are called superthreshold. Sub- and superthreshold inputs, in the presence and absence of noise give rise to different spiking behaviour. These 4 different regimes (sub, super, noiseless, noisy) are what we will explore in this exercise.
Create a function that takes the amplitudes of a step current and the noise as arguments. It should simulate the LIF-model with this input, calculate the interspike intervals (ISI) and plot a histogram of the ISI (the interspike interval is the time interval between two consecutive spikes).
In order to do so edit the function test_effect_of_noise provided below. A few more details:
* Use the function spike_tools.get_spike_train_stats (http://neuronaldynamics-exercises.readthedocs.io/en/latest/_modules/neurodynex/tools/spike_tools.html#get_spike_train_stats) to get the ISI. Have a look at its source code to understand how to use it and what it returns. You may need to use other parts of the documentation as well.
* You will need to simulate the neuron model for long enough to get some statistics.
* Optional and recommended: What would you expect the resulting histograms to look like?
2 - Run your function and create the ISI histograms for the following four regimes:
* No noise, subthreshold: $I_0 = 1.9nA$, $\sigma = 0 nA* \sqrt{\text{your time unit}}$
* Noise, subthreshold regime: $I_0 = 1.9nA$, $\sigma = 1 nA* \sqrt{\text{your time unit}}$
* No noise, superthreshold regime: $I_0 = 2.5nA$, $\sigma = 0 nA* \sqrt{\text{your time unit}}$
* Noise, superthreshold regime: $I_0 = 2.5nA$, $\sigma = 1 nA* \sqrt{\text{your time unit}}$
```
from neurodynex.tools import spike_tools, plot_tools
# time unit. e.g.
time_unit = 1.*b2.ms
time_step = time_unit
def test_effect_of_noise(amplitude, sigma, bins = np.linspace(0,1,50)):
# Create a noisy step current
noisy_step_current = get_noisy_step_current(t_start=50, t_end=5000, unit_time = time_step,
amplitude= amplitude, sigma = sigma)
# Run the LIF model
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_step_current, \
simulation_time = 5000 * b2.ms)
plt.figure()
plot_tools.plot_voltage_and_current_traces(state_monitor, noisy_step_current, title="", \
firing_threshold=LIF.FIRING_THRESHOLD)
plt.show()
print("nr of spikes: {}".format(spike_monitor.count[0]))
# Use the function spike_tools.get_spike_train_stats
spike_stats = spike_tools.get_spike_train_stats(spike_monitor)
# Make the ISI histogram
if len(spike_stats._all_ISI) != 0:
plt.hist(np.asarray(spike_stats._all_ISI), bins)
# choose an appropriate window size for the x-axis (ISI-axis)!
plt.xlabel("ISI [s]")
plt.ylabel("Number of spikes")
plt.show()
return spike_stats
# 1. No noise, subthreshold
stats1 = test_effect_of_noise(amplitude = 1.9 *b2.nA, sigma = 0*b2.nA*time_unit**(0.5))
# 2. Noise, subthreshold regime
stats2 = test_effect_of_noise(amplitude = 1.9*b2.nA, sigma = 1*b2.nA*time_unit**(0.5), bins = np.linspace(0, 0.1, 100))
# 3. No noise, superthreshold regime
stats3 = test_effect_of_noise(amplitude = 2.5*b2.nA, sigma = 0*b2.nA*time_unit**(0.5),bins = np.linspace(0, 0.1, 100))
# 4. Noise, superthreshold regime
stats4 = test_effect_of_noise(amplitude = 2.5*b2.nA, sigma = 1*b2.nA*time_unit**(0.5), bins = np.linspace(0, 0.1, 100))
```
2 - Discuss your results (ISI histograms) for the four regimes. For help and inspiration, as well as for verification of your results, have a look at the book chapter 8.3 (http://neuronaldynamics.epfl.ch/online/Ch8.S3.html). Your answer should be max 5 lines long.
#### [5 points]
- The first regime (no noise, subthreshold) does not generate any spikes, so there is no histogram.
- The second regime (noise, subthreshold) generates spikes, but fewer than the superthreshold regimes, as the voltage is generally below the firing threshold. The interspike intervals are concentrated around 0.02 [s].
- The third regime (no noise, superthreshold) generates regularly spaced spikes. The histogram has a single column because the time between spikes is always the same (0.012 [s]).
- The fourth regime (noise, superthreshold) generates about as many spikes as the noiseless superthreshold regime. The interspike interval is concentrated around 0.012 [s], shorter than in the noisy subthreshold regime because the mean current is higher.
3 - For the ISI histograms you needed to simulate the neuron for a long time to gather enough statistics for the ISI. If you wanted to parallelize this procedure in order to reduce the computation time (e.g. you have multiple CPU cores on your machine), what would be a simple method to do that? Your answer should be max 3 lines long.
Hint: Temporal vs. ensemble average...
#### [2 points]
You can run independent simulations of the same neuron on several cores (e.g. 4) to obtain 4 times as many ISIs in the same wall-clock time, and then aggregate the ISIs from all cores to compute the histogram (an ensemble average instead of one long temporal average).
### 2.5.3 Noisy sinusoidal input current
Implement the noisy sinusoidal input current $I(t) + I_{noise}$. As before, edit the function provided below; you only have to add the noisy part of the current.
Then create a noisy sinusoidal current with amplitude = $2.5nA$, frequency = $100Hz$, $\sigma = 1 nA* \sqrt{\text{your time unit}}$ and direct_current = $1.5nA$, run the LIF model and plot the input current and the membrane potential, as you did in the previous exercises. What do you observe when compared to the noiseless case ($\sigma = 0 nA*\sqrt{\text{your time unit}}$)?
#### [5 points]
```
import math
def get_noisy_sinusoidal_current(t_start, t_end, unit_time,
amplitude, frequency, direct_current, sigma, phase_offset=0.,
append_zero=True):
"""Creates a noisy sinusoidal current. If t_start == t_end, then ALL entries are 0.
Args:
t_start (int): start of the sine wave
t_end (int): end of the sine wave
unit_time (Quantity, Time): unit of t_start and t_end. e.g. 0.1*brian2.ms
amplitude (Quantity, Current): maximum amplitude of the sinus e.g. 3.5*brian2.uamp
frequency (Quantity, Hz): Frequency of the sine. e.g. 0.5*brian2.kHz
direct_current(Quantity, Current): DC-component (=offset) of the current
sigma (float): amplitude (std) of the noise. e.g. 0.1*b2.uamp
phase_offset (float, Optional): phase at t_start. Default = 0.
append_zero (bool, optional): if true, 0Amp is appended at t_end+1. Without that
trailing 0, Brian reads out the last value in the array for all indices > t_end.
Returns:
TimedArray: Brian2.TimedArray
"""
assert isinstance(t_start, int), "t_start_ms must be of type int"
assert isinstance(t_end, int), "t_end must be of type int"
assert b2.units.fundamentalunits.have_same_dimensions(amplitude, b2.amp), \
"amplitude must have the dimension of current. e.g. brian2.uamp"
assert b2.units.fundamentalunits.have_same_dimensions(direct_current, b2.amp), \
"direct_current must have the dimension of current. e.g. brian2.uamp"
assert b2.units.fundamentalunits.have_same_dimensions(frequency, b2.Hz), \
"frequency must have the dimension of 1/Time. e.g. brian2.Hz"
tmp_size = 1 + t_end # +1 for t=0
if append_zero:
tmp_size += 1
tmp = np.zeros((tmp_size, 1)) * b2.amp
if t_end > t_start: # if deltaT is zero, we return a zero current
phi = range(0, (t_end - t_start) + 1)
phi = phi * unit_time * frequency
phi = phi * 2. * math.pi + phase_offset
c = np.sin(phi)
c = (direct_current + c * amplitude) # add direct current and scale by amplitude
tmp[t_start: t_end + 1, 0] = c # add sinusoidal part of current
for i in range(t_start, t_end) :
# Add noisy part of current here
# Pay attention to correct scaling with respect to the unit_time (time_step)
tmp[i] += sigma*(time_step**(-0.5))*np.random.randn()
curr = b2.TimedArray(tmp, dt= unit_time)
return curr
# ------------------
amplitude = 2.5 *b2.nA
frequency = 100 *b2.Hz
time_unit = 1.*b2.ms
time_step = 0.1*b2.ms # This is needed for higher temporal resolution
sigma = 1*b2.nA*time_unit**(0.5)
direct_current = 1.5 *b2.nA
# Create a noiseless sinusoidal current
noisy_sinusoidal_current = get_noisy_sinusoidal_current(200, 800, unit_time = time_step,
amplitude= amplitude, frequency=frequency,
direct_current=direct_current, sigma = 0 *b2.nA*time_unit**(0.5))
# Run the LIF model
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_sinusoidal_current, \
simulation_time = 100 * b2.ms)
# plot I and vm
plot_tools.plot_voltage_and_current_traces(state_monitor, noisy_sinusoidal_current, title="", \
firing_threshold=LIF.FIRING_THRESHOLD)
print("nr of spikes: {}".format(spike_monitor.count[0]))
# Create a noisy sinusoidal current
noisy_sinusoidal_current = get_noisy_sinusoidal_current(200, 800, unit_time = time_step,
amplitude= amplitude, frequency=frequency,
direct_current=direct_current, sigma = sigma)
# Run the LIF model
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_sinusoidal_current, \
simulation_time = 100 * b2.ms)
# plot I and vm
plot_tools.plot_voltage_and_current_traces(state_monitor, noisy_sinusoidal_current, title="", \
firing_threshold=LIF.FIRING_THRESHOLD)
print("nr of spikes: {}".format(spike_monitor.count[0]))
```
In the noiseless case, the voltage approaches the firing threshold but never crosses it, so no spike is generated. In the noisy case we observe some spikes when the firing threshold is exceeded around the maxima of the sinusoid.
### 2.5.4 Stochastic resonance (Bonus, not graded)
Contrary to what one may expect, some amount of noise can, under certain circumstances, improve the signal transmission properties of neurons. In the subthreshold regime, a neuron cannot transmit any information about the temporal structure of its input since it does not spike. With some noise, there is some probability to spike, with the probability depending on the time-dependent input (inhomogeneous Poisson process). However, too much noise covers the signal completely, and thus there is usually an optimal value for the amplitude of the noise. This phenomenon is called "stochastic resonance" and we will briefly touch upon it in this exercise. To get an idea of the effect we suggest reading section 9.4.2 in the book: http://neuronaldynamics.epfl.ch/online/Ch9.S4.html.
1 - Simulate several (e.g. n_inits = 5) trials of a LIF neuron with noisy sinusoidal current. For each trial calculate the power spectrum of the resulting spike train (using the function spike_tools.get_averaged_single_neuron_power_spectrum). Finally calculate the average power spectrum and plot it. With appropriate noise amplitudes, you should see a pronounced peak at the driving frequency, while without noise we don't see anything in the power spectrum since no spike was elicited in the subthreshold regime we are in.
In order to do that use the provided parameters and edit the code provided below. Complete the function _run_sim() which creates the input current, runs a simulation and computes the power spectrum. Call it in a loop to execute several trials. Then average over the spectra to obtain a smooth spectrum to plot.
```
amplitude = 1.*b2.nA
frequency = 20*b2.Hz
time_unit = 1.*b2.ms
time_step = .1*b2.ms
direct_current = 1. * b2.nA
sampling_frequency = .01/time_step
noise_amplitude = 2.
n_inits = 5
# run simulation and calculate power spectrum
def _run_sim(amplitude, noise_amplitude):
    noisy_sinusoidal_current = get_noisy_sinusoidal_current(50, 100000, unit_time = time_step,
                                amplitude = amplitude, frequency = frequency,
                                direct_current = direct_current,
                                sigma = noise_amplitude*b2.nA*np.sqrt(time_unit))
# run the LIF model
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=noisy_sinusoidal_current, \
simulation_time = 10000 * b2.ms)
# get power spectrum
freq, mean_ps, all_ps_dict, mean_firing_rate, mean_firing_freqs_per_neuron = \
spike_tools.get_averaged_single_neuron_power_spectrum(spike_monitor, sampling_frequency,
window_t_min = 1000*b2.ms, window_t_max = 9000*b2.ms,
nr_neurons_average=1, subtract_mean=True)
return freq, all_ps_dict, mean_firing_rate
# initialize array
spectra = []
# run a few simulations, calculate the power spectrum and append it to the spectra array
for i in range(n_inits):
    freq, spectrum, mfr = _run_sim(amplitude, noise_amplitude)
    spectra.append(spectrum[0])
# average spectra over trials
spectrum = np.mean(spectra, axis=0)
# plotting, frequencies vs the obtained spectrum:
plt.figure()
plt.plot(freq, spectrum)
plt.xlabel("frequency [Hz]")
plt.ylabel("power")
```
2 - We now apply different noise levels to investigate the optimal noise level of stochastic resonance.
The quantity to optimize is the signal-to-noise ratio (SNR). Here, the SNR is defined as the intensity of the power spectrum at the driving frequency (the peak from above), divided by the value of the background noise (power spectrum averaged around the peak).
In order to do that edit the code provided below. You can re-use the function _run_sim() to obtain the power spectrum of on trial. The calculation of the SNR is already implemented and doesn't need to be changed.
When you are done with completing the code, run the simulation with the proposed parameters (This could take several minutes...). The result should be a plot showing an optimal noise amplitude, i.e. a $\sigma$ where the SNR is maximal.
```
def get_snr(amplitude, noise_amplitude, n_inits):
spectra = []
snr = 0.
for i in range(0,n_inits):
# run model with noisy sinusoidal
        freq_signal, spectrum, mfr = _run_sim(amplitude, noise_amplitude)
spectra.append(spectrum[0])
# Average over trials to get power spectrum
    spectrum = np.mean(spectra, axis=0)
if mfr != 0.*b2.Hz:
peak = np.amax(spectrum)
index_of_peak = np.argmax(spectrum)
# snr: divide peak value by average of surrounding values
snr = peak/np.mean(np.concatenate((spectrum[index_of_peak-100:index_of_peak-1],\
spectrum[index_of_peak+1:index_of_peak+100])))
else:
snr = 0.
return snr
noise_amplitudes = np.arange(0.,5.,.5)
snr = np.zeros(len(noise_amplitudes))
for j in np.arange(0,len(noise_amplitudes)):
snr[j] = get_snr(amplitude, noise_amplitudes[j], n_inits = 8)
plt.figure()
plt.plot(noise_amplitudes,snr)
plt.xlabel("???")
plt.ylabel("???")
plt.show()
```
3 - For further reading on this topic, consult the book chapter 9.4.2 (http://neuronaldynamics.epfl.ch/online/Ch9.S4.html#Ch9.F10).
# Azure ML Training Pipeline for COVID-CXR
This notebook defines an Azure machine learning pipeline for a single training run and submits the pipeline as an experiment to be run on an Azure virtual machine.
```
# Import statements
import azureml.core
from azureml.core import Experiment
from azureml.core import Workspace, Datastore
from azureml.data.data_reference import DataReference
from azureml.pipeline.core import PipelineData
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep, EstimatorStep
from azureml.train.dnn import TensorFlow
from azureml.train.estimator import Estimator
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.environment import Environment
from azureml.core.runconfig import RunConfiguration
import shutil
```
### Register the workspace and configure its Python environment.
```
# Get reference to the workspace
ws = Workspace.from_config("./ws_config.json")
# Set workspace's environment
env = Environment.from_pip_requirements(name = "covid-cxr_env", file_path = "./../requirements.txt")
env.register(workspace=ws)
runconfig = RunConfiguration(conda_dependencies=env.python.conda_dependencies)
print(env.python.conda_dependencies.serialize_to_string())
# Move AML ignore file to root folder
aml_ignore_path = shutil.copy('./.amlignore', './../.amlignore')
```
### Create references to persistent and intermediate data
Create DataReference objects that point to our raw data on the blob. Configure a PipelineData object to point to preprocessed images stored on the blob.
```
# Get the blob datastore associated with this workspace
blob_store = Datastore(ws, name='covid_cxr_ds')
# Create data references to folders on the blob
raw_data_dr = DataReference(
datastore=blob_store,
data_reference_name="raw_data",
path_on_datastore="data/")
mila_data_dr = DataReference(
datastore=blob_store,
data_reference_name="mila_data",
path_on_datastore="data/covid-chestxray-dataset/")
fig1_data_dr = DataReference(
datastore=blob_store,
data_reference_name="fig1_data",
path_on_datastore="data/Figure1-COVID-chestxray-dataset/")
rsna_data_dr = DataReference(
datastore=blob_store,
data_reference_name="rsna_data",
path_on_datastore="data/rsna/")
training_logs_dr = DataReference(
datastore=blob_store,
data_reference_name="training_logs_data",
path_on_datastore="logs/training/")
models_dr = DataReference(
datastore=blob_store,
data_reference_name="models_data",
path_on_datastore="models/")
# Set up references to pipeline data (intermediate pipeline storage).
processed_pd = PipelineData(
"processed_data",
datastore=blob_store,
output_name="processed_data",
output_mode="mount")
```
### Compute Target
Specify and configure the compute target for this workspace. If a compute cluster by the name we specified does not exist, create a new compute cluster.
```
CT_NAME = "nd12s-clust-hp" # Name of our compute cluster
VM_SIZE = "STANDARD_ND12S" # Specify the Azure VM for execution of our pipeline
#CT_NAME = "d2-cluster" # Name of our compute cluster
#VM_SIZE = "STANDARD_D2" # Specify the Azure VM for execution of our pipeline
# Set up the compute target for this experiment
try:
compute_target = AmlCompute(ws, CT_NAME)
print("Found existing compute target.")
except ComputeTargetException:
print("Creating new compute target")
provisioning_config = AmlCompute.provisioning_configuration(vm_size=VM_SIZE, min_nodes=1, max_nodes=4)
compute_target = ComputeTarget.create(ws, CT_NAME, provisioning_config) # Create the compute cluster
# Wait for cluster to be provisioned
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
print("Azure Machine Learning Compute attached")
print("Compute targets: ", ws.compute_targets)
compute_target = ws.compute_targets[CT_NAME]
```
### Define pipeline and submit experiment.
Define the steps of an Azure machine learning pipeline. Create an Azure Experiment that will run our pipeline. Submit the experiment to the execution environment.
```
# Define preprocessing step the ML pipeline
step1 = PythonScriptStep(name="preprocess_step",
script_name="azure/preprocess_step/preprocess_step.py",
arguments=["--miladatadir", mila_data_dr, "--fig1datadir", fig1_data_dr,
"--rsnadatadir", rsna_data_dr, "--preprocesseddir", processed_pd],
inputs=[mila_data_dr, fig1_data_dr, rsna_data_dr],
outputs=[processed_pd],
compute_target=compute_target,
source_directory="./../",
runconfig=runconfig,
allow_reuse=True)
# Define training step in the ML pipeline
est = TensorFlow(source_directory='./../',
script_params=None,
compute_target=compute_target,
entry_script='azure/train_step/train_step.py',
                 pip_packages=['tensorboard', 'pandas', 'dill', 'numpy', 'imblearn', 'matplotlib', 'scikit-image',
                               'pydicom', 'opencv-python', 'tqdm', 'scikit-learn'],
use_gpu=True,
framework_version='2.0')
step2 = EstimatorStep(name="estimator_train_step",
estimator=est,
estimator_entry_script_arguments=["--rawdatadir", raw_data_dr, "--preprocesseddir", processed_pd,
"--traininglogsdir", training_logs_dr, "--modelsdir", models_dr],
runconfig_pipeline_params=None,
inputs=[raw_data_dr, processed_pd, training_logs_dr, models_dr],
outputs=[],
compute_target=compute_target)
# Construct the ML pipeline from the steps
steps = [step1, step2]
single_train_pipeline = Pipeline(workspace=ws, steps=steps)
single_train_pipeline.validate()
# Define a new experiment and submit a new pipeline run to the compute target.
experiment = Experiment(workspace=ws, name='SingleTrainExperiment_v3')
experiment.submit(single_train_pipeline, regenerate_outputs=False)
print("Pipeline is submitted for execution")
# Move AML ignore file back to original folder
aml_ignore_path = shutil.move(aml_ignore_path, './.amlignore')
```
# SIT742: Modern Data Science
**(Week 01: Programming Python)**
---
- Materials in this module include resources collected from various open-source online repositories.
- You are free to use, change and distribute this package.
- If you found any issue/bug for this document, please submit an issue at [tulip-lab/sit742](https://github.com/tulip-lab/sit742/issues)
Prepared by **SIT742 Teaching Team**
---
# Session 1A - IPython notebook and basic data types
In this session,
you will learn how to run *Python* code under **IPython notebook**. You have two options for the environment:
1. Install the [Anaconda](https://www.anaconda.com/distribution/), and run it locally; **OR**
1. Use one cloud data science platform such as:
- [Google Colab](https://colab.research.google.com): SIT742 lab session will use Google Colab.
- [IBM Cloud](https://www.ibm.com/cloud)
- [DataBricks](https://community.cloud.databricks.com)
In IPython notebook, you will be able to execute and modify your *Python* code more efficiently.
- **If you are using Google Colab for SIT742 lab session practicals, you can ignore this Part 1 of this Session 1A, and start with Part 2.**
In addition, you will be given an introduction on *Python*'s basic data types,
getting familiar with **string**, **number**, data conversion, data comparison and
data input/output.
Hopefully, by using **Python** and the powerful **IPython Notebook** environment,
you will find writing programs both fun and easy.
## Content
### Part 1 Create your own IPython notebook
1.1 [Start a notebook server](#cell_start)
1.2 [A tour of IPython notebook](#cell_tour)
1.3 [IPython notebook infterface](#cell_interface)
1.4 [Open and close notebooks](#cell_close)
### Part 2 Basic data types
2.1 [String](#cell_string)
2.2 [Number](#cell_number)
2.3 [Data conversion and comparison](#cell_conversion)
2.4 [Input and output](#cell_input)
# Part 1. Create your own IPython notebook
- **If you are using Google Colab for SIT742 lab session practicals, you can ignore this Part 1, and start with Part 2.**
This notebook will show you how to start an IPython notebook session. It guides you through the process of creating your own notebook. It provides you with details on the notebook interface and shows you how to navigate within a notebook and manipulate its components.
<a id = "cell_start"></a>
## 1.1 Start a notebook server
As described in Part 1, you start the IPython notebook server by keying in the command in a terminal window/command line window.
However, before you do this, make sure you have created a folder **p01** under **H:/sit742**, downloaded the **SIT742P01A-Python.ipynb** notebook, and saved it under **H:/sit742/p01**.
If you are using [Google Colab](https://colab.research.google.com), you can upload this notebook to Google Colab and run it from there. If any difficulty, please ask your tutor, or check the CloudDeakin discussions.
After you complete this, you can switch the working directory to **H:/sit742** and start the IPython notebook server with the following commands:
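For example, assuming a standard Anaconda/Jupyter installation, the commands are along these lines:
```
cd H:/sit742
jupyter notebook
```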
You can see the message in the terminal windows as follows:
<img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/start-workspace.jpg">
This will open a new browser window (or a new tab in your browser window). In the browser, there is a **dashboard** page which shows you all the folders and files under the **sit742** folder.
<img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/start-index.jpg">
<a id = "cell_tour"></a>
## 1.2 A tour of IPython notebook
### Create a new ipython notebook
To create a new notebook, go to the menu bar and select **File -> New Notebook -> Python 3**
By default, the new notebook is named **Untitled**. To give your notebook a meaningful name, click on the notebook name and rename it. We would like to call our new notebook **hello.ipynb**. Therefore, key in the name **hello**.
<img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/emptyNotebook.jpg">
### Run script in code cells
After a new notebook is created, there is an empty box in the notebook, called a **cell**. If you double click on the cell, you enter the **Edit** mode of the notebook. Now we can enter the following code in the cell:
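For example, the cell can contain the classic first program:
```
print('Hello, World!')
```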
After this, press **CTRL + ENTER** to execute the cell. The result will be shown after the cell.
<img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/hello-world.jpg">
After a cell is executed, the notebook switches to **Command** mode. In this mode, you can manipulate the notebook and its components. Alternatively, you can use the **ESC** key to switch from **Edit** mode to **Command** mode without executing code.
To modify the code you entered in the cell, **double click** the cell again and modify its content. For example, try to change the first line of the previous cell into the following code:
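For example, you could change it to something like:
```
print('Hello, Python!')
```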
Afterwards, press **CTRL + ENTER**, and the new output is displayed.
As you can see, you are switching between two modes, **Command** and **Edit**, when editing a notebook. We will look into these two operation modes more closely in a later section. Now practise switching between the two modes until you are comfortable with them.
### Add new cells
To add a new cell to a notebook, you have to ensure the notebook is in **Command** mode. If not, refer to previous section to switch to **Command** mode.
To add a cell below the current cell, go to the menu bar and click **Insert -> Insert Cell Below**. Alternatively, you can use the shortcut, i.e. pressing **b** (or **a** to create a cell above).
<img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/new-cell.jpg">
### Add markdown cells
By default, a code cell is created when adding a new cell. However, IPython notebook also uses **Markdown** cells for entering normal text. We use markdown cells to display text in a specific format and to provide structure for a notebook.
Try to copy the text in the cell below and paste it into your new notebook. Then from the toolbar (**Cell -> Cell Type**), change the cell type from **Code** to **Markdown**.
Please note in the following cell, there is a space between the leading **-, #, 0** and the text that follows.
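For example, a sample cell consistent with the note above (note the space after each leading **-**, **#** and **0.**) could be:
```
# This is a level 1 heading
## This is a level 2 heading

- This is a bullet point
- This is another bullet point

0. This is a numbered list item
0. This is another numbered list item
```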
Now execute the cell by pressing **CTRL + ENTER**. Your notebook should look like this:
<img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/new-markdown.jpg">
Here is what the formated Markdown cell looks like:
### Exercise:
Click this cell, and practise writing markdown language here....
<a id = "cell_interface"></a>
## 1.3 IPython notebook interface
Now you have created your first notebook, let us have a close look at the user interface of IPython notebook.
### Notebook component
When you create a new notebook document, you will be presented with the notebook name, a menu bar, a toolbar and an empty code cell.
We can see the following components in a notebook:
- **Title bar** is at the top of the page and contains the name of the notebook. Clicking on the notebook name brings up a dialog which allows you to rename it. Please rename your notebook from “Untitled0” to “hello”. This changes the file name from **Untitled0.ipynb** to **hello.ipynb**.
- **Menu bar** presents different options that can be used to manipulate the way the notebook functions.
- **Toolbar** gives a quick way of performing the most-used operations within the notebook.
- An empty computational cell is show in a new notebook where you can key in your code.
The notebook has two modes of operation:
- **Edit**: In this mode, a single cell comes into focus and you can enter text or execute code. You activate the **Edit mode** by **clicking on a cell** or **selecting a cell and then pressing Enter key**.
- **Command**: In this mode, you can perform tasks that are related to the whole notebook structure. For example, you can move, copy, cut and paste cells. A series of keyboard shortcuts are also available to enable you to perform these tasks more efficiently. The easiest way of activating Command mode is by pressing the **Esc** key to exit editing mode.
### Get help and interrupting
To get help on the use of different commands and shortcuts, you can go to the **Help** menu, which provides links to relevant documentation.
It is also easy to get help on any object (including functions and methods). For example, to access help on the sum() function, enter the following line in a cell:
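For example, either of the following works in a notebook:
```
sum?        # IPython quick help
help(sum)   # Python's built-in help
```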
The other important thing to know is how to interrupt a computation. This can be done through the menu **Kernel -> Interrupt** or **Kernel -> Restart**, depending on what works in the situation. We will have a chance to try this in a later session.
### Notebook cell types
There are basically three types of cells in a IPython notebook: Code Cells, Markdown Cells, Raw Cells.
**Code cells** : Code cell can be used to enter code and will be executed by Python interpreter. Although we will not use other language in this unit, it is good to know that Jupyter Notebooks also support JavaScript, HTML, and Bash commands.
**Markdown cells**: You have created a markdown cell in the previous section. Markdown cells are the easiest way to write and format text. They also give structure to the notebook. The Markdown language is used in this type of cell. Follow this link https://daringfireball.net/projects/markdown/basics for the basics of the syntax.
This is a Markdown Cells example notebook sourced from : https://ipython.org/ipython-doc/3/notebook/notebook.html
This markdown cheat sheet can also be good reference to the main markdowns you might need to use in our pracs http://nestacms.com/docs/creating-content/markdown-cheat-sheet
**Raw cells** : Raw cells, unlike all other Jupyter Notebook cells, have no input-output distinction. This means that raw Cells cannot be rendered into anything other than what they already are. They are mainly used to create examples.
As you have seen, you can use the toolbar to choose between different cell types. In addition, shortcut **M** and **Y** can be used to quickly change a cell to Code cell or Markdown cell under Command mode.
### Operation modes of IPython notebook
**Edit mode**
The Edit mode is used to enter text in cells and to execute code. As you have seen, after typing some code in the notebook and pressing **CTRL + Enter**, the notebook executes the cell and displays the output. The other two shortcuts used to run code in a cell are **Shift + Enter** and **Alt + Enter**.
The three ways to run the code in a cell are summarized as follows:
- Pressing Shift + Enter: This runs the cell and selects the next cell (a new cell is created if at the end of the notebook). This is the most common way to execute a cell.
- Pressing Ctrl + Enter: This runs the cell and keep the same cell selected.
- Pressing Alt + Enter: This runs the cell and insert a new cell below it.
**Command mode**
In Command mode, you can edit the notebook as a whole, but not type into individual cells.
You can use keyboard shortcuts in this mode to perform notebook and cell actions efficiently. For example, if you are in Command mode and press **c**, you will copy the current cell.
There are a large number of shortcuts available in Command mode. However, you do not have to remember all of them, since most actions in Command mode are available in the menu.
Here is a list of the most useful shortcuts. They are arranged in the order we recommend you learn them, so that you can edit cells efficiently.
1. Basic navigation:
- Enter: switch to Edit mode
- Esc: switch to Command mode
- Shift + Enter: Execute a cell
- Up, down: Move to the cell above or below
2. Cell types:
- y: switch to code cell
- m: switch to markdown cell
3. Cell creation:
- a: insert new cell above
- b: insert new cell below
4. Cell deleting:
- press D twice.
Note that one of the most common (and frustrating) mistakes when using the
notebook is to type something in the wrong mode. Remember to use **Esc**
to switch to the Command mode and **Enter** to switch to the Edit mode.
Also, remember that **clicking** on a cell automatically places it in the Edit
mode, so it will be necessary to press **Esc** to go to the Command mode.
### Exercise
Please go ahead and try these shortcuts. For example, try to insert a new cell, and modify and delete an existing cell. You can also switch cells between code type and markdown type, and practice different kinds of formatting in a markdown cell.
For a complete list of shortcuts in **Command** mode, go to the menu bar **Help -> Keyboard Shortcuts**. Feel free to explore the other shortcuts.
<a id = "cell_close"></a>
## 1.4 Open and close notebooks
You can open multiple notebooks in a browser window. Simply go to the menu bar and choose **File -> Open...**, and select a **.ipynb** file. The second notebook will be opened in a separate tab.
Now make sure you still have your **hello.ipynb** open. Also please download **ControlAdvData.ipynb** from CloudDeakin, and save it under **H:/sit742/prac01**. Now go to the menu bar, click on **File -> Open...**, locate the file **ControlAdvData.ipynb**, and open this file.
When you finish your work, you will need to close your notebooks and shut down the IPython notebook server. Instead of simply closing all the tabs in the browser, you need to shut down each notebook first. To do this, switch to the **Home** tab (**Dashboard** page) and the **Running** section (see below). Click on the **Shutdown** button to close each notebook. In case the **Dashboard** page is not open, click on the **Jupyter** icon to reopen it.
<img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/close-index.jpg">
After each notebook is shut down, it is time to shut down the IPython notebook server. To do this, go to the terminal window and press **CTRL + C**, and then enter **Y**. After the notebook server is shut down, the terminal window is ready for you to enter any new command.
<img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/close-terminal.jpg">
# Part 2 Basic Data Types
In this part, you will get a better understanding of Python's basic data types. We will
look at the **string** and **number** data types in this section. Also covered are:
- Data conversion
- Data comparison
- Receive input from users and display results effectively
You will be guided through completing a simple program which receives input from a user,
processes the information, and displays results in a specific format.
<a id = "cell_string"></a>
## 2.1 String
A string is a *sequence of characters*. We use strings in almost every Python
program. As we have seen in the **"Hello, World!"** example, strings can be specified
using single quotes **'**. The **print()** function can be used to display a string.
```
print('Hello, World!')
```
We can also use a variable to store the string value, and use the variable in the
**print()** function.
```
# Assign a string to a variable
text = 'Hello, World!'
print(text)
```
A *variable* is basically a name that represents (or refers to) some value. We use **=**
to assign a value to a variable before we use it. Variable names are given by a programmer
in a way that makes the program easy to understand. Variable names are *case sensitive*.
They can consist of letters, digits and underscores; however, they cannot begin with a digit.
For example, **plan9** and **plan_9** are valid names, whereas **9plan** is not.
```
text = 'Hello, World!'
# with print() function, content is displayed without quotation mark
print(text)
```
With variables, we can also display their value without the **print()** function. Note that
you cannot display a variable without the **print()** function in a Python script (i.e. in a **.py** file). This method only works in interactive mode (i.e. in the notebook).
```
# without print() function, quotation mark is displayed together with content
text
```
Back to representation of string, there will be issues if you need to include a quotation
mark in the text.
```
text = 'What's your name?'
```
Strings in double quotes **"** work exactly the same way as strings in single quotes.
By mixing the two types, it is easy to include a quotation mark itself in the text.
```
text = "What' s your name?"
print(text)
```
Alternatively, you can use:
```
text = '"What is the problem?", he asked.'
print(text)
```
You can specify multi-line strings using triple quotes (**"""** or **'''**). In this way, single
quotes and double quotes can be used freely in the text.
Here is one example:
```
multiline = '''This is a test for multiline. This is the first line.
This is the second line.
I asked, "What's your name?"'''
print(multiline)
```
Notice the difference when the variable is displayed without **print()** function in this case.
```
multiline = '''This is a test for multiline. This is the first line.
This is the second line.
I asked, "What's your name?"'''
multiline
```
Another way of including special characters, such as single quotes, is with the help of
escape sequences **\\**. For example, you can specify the single quote using **\\'** as follows.
```
string = 'What\'s your name?'
print(string)
```
There are many more other escape sequences (See Section 2.4.1 in [Python3.0 official document](https://docs.python.org/3.1/reference/lexical_analysis.html)). But I am going to mention the most useful two examples here.
First, use escape sequences to indicate the backslash itself e.g. **\\\\**
```
path = 'c:\\windows\\temp'
print(path)
```
Second, use escape sequences to specify a two-line string. Apart from using a triple-quoted
string as shown previously, you can use **\n** to indicate the start of a new line.
```
multiline = 'This is a test for multiline. This is the first line.\nThis is the second line.'
print(multiline)
```
To manipulate strings, the following two operators are most useful:
* **+** is used to concatenate
two strings or string variables;
* ***** is used for concatenating several copies of the same
string.
```
print('Hello, ' + 'World' * 3)
```
Below is another example of string concatenation based on variables that store strings.
```
name = 'World'
greeting = 'Hello'
print(greeting + ', ' + name + '!')
```
Using variables, changing part of the string text is very easy.
```
name
greeting
# Change part of the text is easy
greeting = 'Good morning'
print(greeting + ', ' + name + '!')
```
<a id = "cell_number"></a>
## 2.2 Number
There are two types of numbers that are used most frequently: integers and floats. As we
would expect, the standard mathematical operations can be applied to these two types. Please
try the following expressions. Note that **\*\*** is the exponent operator, which performs
exponentiation (power) calculation.
```
2 + 3
3 * 5
#3 to the power of 4
3 ** 4
```
Among the numeric operations, we need to look at division closely. In Python 3, true (float) division is performed using **/**.
```
15 / 5
14 / 5
```
**//** is used to perform floor division. It discards the fractional part and rounds down to the next smallest whole number, i.e. toward the left on the number line.
```
14 // 5
# Negatives move left on number line. The result is -3 instead of -2
-14 // 5
```
The modulus operator **%** can be used to obtain the remainder. Pay attention when negative numbers are involved.
```
14 % 5
# Hint: -14 // 5 equals -3
# (-3) * 5 + ? = -14
-14 % 5
```
*Operator precedence* is a rule that affects how an expression is evaluated. As we learned in high school, multiplication is done before addition, e.g. in **2 + 3 * 4**. This means the multiplication operator has higher precedence than the addition operator.
For your reference, a precedence table from the Python reference manual indicates the evaluation order in Python. For a complete precedence table, check the heading "Python Operators Precedence" in this [Python tutorial](http://www.tutorialspoint.com/python/python_basic_operators.htm).
However, when things get confusing, it is far better to use parentheses **()** to explicitly
specify the precedence. This makes the program more readable.
Here are some examples on operator precedence:
```
2 + 3 * 4
(2 + 3) * 4
2 + 3 ** 2
(2 + 3) ** 2
-(4+3)+2
```
Similarly to strings, variables can be used to store numbers so that it is easy to manipulate them.
```
x = 3
y = 2
x + 2
sum = x + y
sum
x * y
```
One common pattern is to run a math operation on a variable and then assign the result of the operation back to the variable. Therefore, there is a shortcut for such an expression.
```
x = 2
x = x * 3
x
```
This is equivalent to:
```
x = 2
# Note there is no space between '*' and '='
x *= 3
x
```
<a id = "cell_conversion"></a>
## 2.3 Data conversion and comparison
So far, we have seen three types of data: integer, float, and string. The data type determines which operations are possible on a value and how it is stored. In later pracs, we will introduce more data types, such as tuple, list and dictionary.
To obtain the data type of a variable or a value, we can use the built-in function **type()**,
whereas functions such as **str()**, **int()** and **float()** are used to convert data from one type to another. Check the following examples on the usage of these functions:
```
type('Hello, world!')
input_Value = '45.6'
type(input_Value)
weight = float(input_Value)
weight
type(weight)
```
Note that the system will report an error message when the conversion function is not compatible with the data.
```
input_Value = 'David'
weight = float(input_Value)
```
Comparing two values can help make decisions in a program. The result of a comparison is either **True** or **False**; these are the two values of the *Boolean* type.
```
5 > 10
type(5 > 10)
# Double equal sign is also used for comparison
10.0 == 10
```
Check the following examples on comparing two strings.
```
'cat' < 'dog'
# All uppercase letters come before lowercase letters.
'cat' < 'Dog'
'apple' < 'apricot'
```
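The ordering of strings follows the code points of their characters, and all uppercase letters have smaller code points than lowercase letters. You can check this with the built-in **ord()** function.
```
# Compare the code points of the first differing characters
print(ord('c'), ord('d'))   # 99 100, so 'cat' < 'dog' is True
print(ord('c'), ord('D'))   # 99 68, so 'cat' < 'Dog' is False
```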
There are three logical operators, *not*, *and* and *or*, which can be applied to the boolean values.
```
# Are both condition #1 and condition #2 True?
3 < 4 and 7 < 8
# Is at least one of condition #1 and condition #2 True?
3 < 4 or 7 > 8
# Are both condition #1 and condition #2 False?
not ((3 > 4) or (7 > 8))
```
<a id = "cell_input"></a>
## 2.4 Input and output
All programming languages provide features to interact with the user. Python provides the *input()* function to get input. It waits for the user to type some input and press return. We can add some information for the user by putting a message inside the function's brackets; the message must be a string or a string variable. The text that was typed can be saved in a variable. Here is one example:
```
nInput = input('Enter your number here:\n')
```
However, be aware that the input received from the user is treated as a string, even
if the user entered a number. The following **print()** call therefore raises an error.
```
print(nInput + 3)
```
The input needs to be converted to an integer before the math operation can be performed, as follows:
```
print(int(nInput) + 3)
```
After the user's input is accepted, messages need to be displayed to the user accordingly. String concatenation is one way to display messages that incorporate variable values.
```
name = 'David'
print('Hello, ' + name)
```
Another way of achieving this is using the **print()** function with *string formatting*. We need to use the *string formatting operator*, the percent (**%**) sign.
```
name = 'David'
print('Hello, %s' % name)
```
Here is another example with two variables:
```
name = 'David'
age = 23
print('%s is %d years old.' % (name, age))
```
Notice that the two variables, **name** and **age**, that supply the values are included at the end of the statement and enclosed in parentheses.
Within the quotation marks, **%s** and **%d** are used to specify the formatting for a string and an integer respectively.
The following table shows a selected set of symbols which can be used along with %.
<table width="304" border="1">
<tr>
<th width="112" scope="col">Format symbol</th>
<th width="176" scope="col">Conversion</th>
</tr>
<tr>
<td>%s</td>
<td>String</td>
</tr>
<tr>
<td>%d</td>
<td>Signed decimal integer</td>
</tr>
<tr>
<td>%f</td>
<td>Floating point real number</td>
</tr>
</table>
There are extra characters that can be used together with the above symbols:
<table width="400" border="1">
<tr>
<th width="100" scope="col">Symbol</th>
<th width="3000" scope="col">Functionality</th>
</tr>
<tr>
<td>-</td>
<td>Left justification</td>
</tr>
<tr>
<td>+</td>
<td>Display the sign</td>
</tr>
<tr>
<td>m.n</td>
<td>m is the minimum total width; n is the number of digits to display after the decimal point</td>
</tr>
</table>
Here are more examples that use the above specifiers:
```
# With %f, the format is right justification by default.
# As a result, white spaces are added to the left of the number
# 10.4 means minimum width 10 with 4 digits after the decimal point
print('Output a float number: %10.4f' % (3.5))
# The plus sign after % means to show the sign of the number
# The zero after the plus sign means using leading zeros to fill a width of 5
print('Output an integer: %+05d' % (23))
```
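The **-** symbol from the table is not shown above; here is a small additional example of left justification, where **|** simply marks the edges of the field.
```
# '-' left-justifies the value within the minimum width of 10
print('|%-10.4f|' % (3.5))
print('|%10.4f|' % (3.5))
```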
### 2.5 Notes on *Python 2*
You need to pay attention if you test the examples in this prac under *Python 2*.
1. In *Python 3*, **/** is float division and **//** is integer division, while in Python 2 both **/** and **//** perform *integer division* on integers. However, if you stick to using **float(3)/2** for *float division* and **3//2** for *integer division*, you will have no problem in either version (see the short example after this list).
2. Instead of the function **input()**, **raw_input()** is used in Python 2. Both functions have the same functionality, i.e. they take what the user typed and pass it back as a string.
3. Although both versions support the **print()** function with the same format, Python 2 also allows the print statement (e.g. **print "Hello, World!"**), which is not valid in Python 3. However, if you stick to our examples and use the **print()** function with parentheses, your programs should work fine in both versions.
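As a quick check, the following expressions produce the same results under both versions:
```
# Portable division: works the same in Python 2 and Python 3
print(float(3)/2)  # 1.5 (float division)
print(3//2)        # 1 (integer division)
```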
| github_jupyter |
# General Equilibrium
This notebook illustrates **how to solve general equilibrium (GE) models**. The example is a simple one-asset model without nominal rigidities.
The notebook shows how to:
1. Solve for the **stationary equilibrium**.
2. Solve for (non-linear) **transition paths** using a relaxation algorithm.
3. Solve for **transition paths** (linear vs. non-linear) and **impulse-responses** using the **sequence-space method** of **Auclert et al. (2020)**.
```
LOAD = False # load stationary equilibrium
DO_VARY_SIGMA_E = True # effect of uncertainty on stationary equilibrium
DO_TP_RELAX = True # do transition path with relaxation
```
# Setup
```
%load_ext autoreload
%autoreload 2
import time
import numpy as np
import numba as nb
from scipy import optimize
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
from consav.misc import elapsed
from GEModel import GEModelClass
from GEModel import solve_backwards, simulate_forwards, simulate_forwards_transpose
```
## Choose number of threads in numba
```
import numba as nb
nb.set_num_threads(8)
```
# Model
```
model = GEModelClass('baseline',load=LOAD)
print(model)
```
For easy access
```
par = model.par
sim = model.sim
sol = model.sol
```
**Productivity states:**
```
for e,pr_e in zip(par.e_grid,par.e_ergodic):
print(f'Pr[e = {e:7.4f}] = {pr_e:.4f}')
assert np.isclose(np.sum(par.e_grid*par.e_ergodic),1.0)
```
# Find Stationary Equilibrium
**Step 1:** Find demand and supply of capital for a grid of interest rates.
```
if not LOAD:
t0 = time.time()
par = model.par
# a. interest rate trial values
Nr = 20
    r_vec = np.linspace(0.005,1.0/par.beta-1-0.002,Nr) # r >= 1/beta-1 is not possible
# b. allocate
Ks = np.zeros(Nr)
Kd = np.zeros(Nr)
# c. loop
r_min = r_vec[0]
r_max = r_vec[Nr-1]
for i_r in range(Nr):
# i. firm side
k = model.firm_demand(r_vec[i_r],par.Z)
Kd[i_r] = k*1 # aggregate labor = 1.0
# ii. household side
success = model.solve_household_ss(r=r_vec[i_r])
if success:
success = model.simulate_household_ss()
if success:
                # total supply of capital from households
Ks[i_r] = np.sum(model.sim.D*model.sol.a)
# bounds on r
diff = Ks[i_r]-Kd[i_r]
if diff < 0: r_min = np.fmax(r_min,r_vec[i_r])
if diff > 0: r_max = np.fmin(r_max,r_vec[i_r])
else:
Ks[i_r] = np.nan
# d. save
model.save()
print(f'grid search done in {elapsed(t0)}')
```
**Step 2:** Plot supply and demand.
```
if not LOAD:
par = model.par
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(1,1,1)
ax.plot(r_vec,Ks,label='supply of capital')
ax.plot(r_vec,Kd,label='demand for capital')
ax.axvline(r_min,lw=0.5,ls='--',color='black')
ax.axvline(r_max,lw=0.5,ls='--',color='black')
ax.legend(frameon=True)
ax.set_xlabel('interest rate, $r$')
ax.set_ylabel('capital, $K_t$')
fig.tight_layout()
fig.savefig('figs/stationary_equilibrium.pdf')
```
**Step 3:** Solve root-finding problem.
```
def obj(r,model):
model.solve_household_ss(r=r)
model.simulate_household_ss()
return np.sum(model.sim.D*model.sol.a)-model.firm_demand(r,model.par.Z)
if not LOAD:
t0 = time.time()
opt = optimize.root_scalar(obj,bracket=[r_min,r_max],method='bisect',args=(model,))
model.par.r_ss = opt.root
assert opt.converged
print(f'search done in {elapsed(t0)}')
```
**Step 4:** Check market clearing conditions.
```
model.steady_state()
```
## Timings
```
%timeit model.solve_household_ss(r=par.r_ss)
%timeit model.simulate_household_ss()
```
## Income uncertainty and the equilibrium interest rate
The equilibrium interest rate decreases when income uncertainty is increased.
```
if DO_VARY_SIGMA_E:
par = model.par
    # a. settings
sigma_e_vec = [0.20]
# b. find equilibrium rates
model_ = model.copy()
for sigma_e in sigma_e_vec:
# i. set new parameter
model_.par.sigma_e = sigma_e
model_.create_grids()
# ii. solve
print(f'sigma_e = {sigma_e:.4f}',end='')
opt = optimize.root_scalar(
obj,
bracket=[0.00,model.par.r_ss],
method='bisect',
args=(model_,)
)
print(f' -> r_ss = {opt.root:.4f}')
model_.par.r_ss = opt.root
model_.steady_state()
print('\n')
```
## Test matrix formulation
**Step 1:** Construct $\boldsymbol{Q}_{ss}$
```
# a. allocate Q
Q = np.zeros((par.Ne*par.Na,par.Ne*par.Na))
# b. fill
for i_e in range(par.Ne):
# get view of current block
q = Q[i_e*par.Na:(i_e+1)*par.Na,i_e*par.Na:(i_e+1)*par.Na]
for i_a in range(par.Na):
# i. optimal choice
a_opt = sol.a[i_e,i_a]
# ii. above -> all weight on last node
if a_opt >= par.a_grid[-1]:
q[i_a,-1] = 1.0
# iii. below -> all weight on first node
elif a_opt <= par.a_grid[0]:
q[i_a,0] = 1.0
# iv. standard -> distribute weights on neighboring nodes
else:
i_a_low = np.searchsorted(par.a_grid,a_opt,side='right')-1
assert a_opt >= par.a_grid[i_a_low], f'{a_opt} < {par.a_grid[i_a_low]}'
assert a_opt < par.a_grid[i_a_low+1], f'{a_opt} < {par.a_grid[i_a_low]}'
q[i_a,i_a_low] = (par.a_grid[i_a_low+1]-a_opt)/(par.a_grid[i_a_low+1]-par.a_grid[i_a_low])
q[i_a,i_a_low+1] = 1-q[i_a,i_a_low]
```
**Step 2:** Construct $\tilde{\Pi}^e=\Pi^e \otimes \boldsymbol{I}_{\#_{a}\times\#_{a}}$
```
Pit = np.kron(par.e_trans,np.identity(par.Na))
```
**Step 3:** Test $\overrightarrow{D}_{t+1}=\tilde{\Pi}^{e\prime}\boldsymbol{Q}_{ss}^{\prime}\overrightarrow{D}_{t}$
```
D = np.zeros(sim.D.shape)
D[:,0] = par.e_ergodic
# a. standard
D_plus = np.zeros(D.shape)
simulate_forwards(D,sol.i,sol.w,par.e_trans.T.copy(),D_plus)
# b. matrix product
D_plus_alt = (([email protected])@D.ravel()).reshape((par.Ne,par.Na))
# c. test equality
assert np.allclose(D_plus,D_plus_alt)
```
# Find transition path
**MIT-shock:** Transition path for an arbitrary exogenous path of $Z_t$ starting from the stationary equilibrium, i.e. $D_{-1} = D_{ss}$ and in particular $K_{-1} = K_{ss}$.
**Step 1:** Construct $\{Z_t\}_{t=0}^{T-1}$ where $Z_t = (1-\rho_Z)Z_{ss} + \rho_Z Z_{t-1}$ and $Z_0 = (1+\sigma_Z) Z_{ss}$
```
path_Z = model.get_path_Z()
```
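A minimal sketch of what such a path could look like, assuming purely illustrative values for $\rho_Z$, $\sigma_Z$, $Z_{ss}$ and the horizon (the actual path used here comes from `model.get_path_Z()`):
```
import numpy as np

# illustrative parameters only (not the model's actual calibration)
rho_Z, sigma_Z, Z_ss, T = 0.90, 0.01, 1.0, 500

path_Z_sketch = np.empty(T)
path_Z_sketch[0] = (1 + sigma_Z)*Z_ss  # initial shock
for t in range(1, T):
    # mean reversion back towards the steady state
    path_Z_sketch[t] = (1 - rho_Z)*Z_ss + rho_Z*path_Z_sketch[t-1]
```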
**Step 2:** Apply relaxation algorithm.
```
if DO_TP_RELAX:
t0 = time.time()
# a. allocate
path_r = np.repeat(model.par.r_ss,par.path_T) # use steady state as initial guess
path_r_ = np.zeros(par.path_T)
path_w = np.zeros(par.path_T)
# b. setting
nu = 0.90 # relaxation parameter
max_iter = 5000 # maximum number of iterations
# c. iterate
it = 0
while True:
# i. find wage
for t in range(par.path_T):
path_w[t] = model.implied_w(path_r[t],path_Z[t])
# ii. solve and simulate
model.solve_household_path(path_r,path_w)
model.simulate_household_path(model.sim.D)
# iii. implied prices
for t in range(par.path_T):
path_r_[t] = model.implied_r(sim.path_Klag[t],path_Z[t])
# iv. difference
max_abs_diff = np.max(np.abs(path_r-path_r_))
if it%10 == 0: print(f'{it:4d}: {max_abs_diff:.8f}')
if max_abs_diff < 1e-8: break
# v. update
path_r = nu*path_r + (1-nu)*path_r_
# vi. increment
it += 1
if it > max_iter: raise Exception('too many iterations')
    print(f'\n transition path found in {elapsed(t0)}')
```
**Plot transition-paths:**
```
if DO_TP_RELAX:
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(2,2,1)
ax.plot(np.arange(par.path_T),path_Z,'-o',ms=2)
ax.set_title('technology, $Z_t$');
ax = fig.add_subplot(2,2,2)
ax.plot(np.arange(par.path_T),sim.path_K,'-o',ms=2)
ax.set_title('capital, $k_t$');
ax = fig.add_subplot(2,2,3)
ax.plot(np.arange(par.path_T),path_r,'-o',ms=2)
ax.set_title('interest rate, $r_t$');
ax = fig.add_subplot(2,2,4)
ax.plot(np.arange(par.path_T),path_w,'-o',ms=2)
ax.set_title('wage, $w_t$')
fig.tight_layout()
fig.savefig('figs/transition_path.pdf')
```
**Remember:** store the relaxation results for later comparison.
```
if DO_TP_RELAX:
path_Z_relax = path_Z
path_K_relax = sim.path_K
path_r_relax = path_r
path_w_relax = path_w
```
# Find impulse-responses using sequence-space method
**Paper:** Auclert, A., Bardóczy, B., Rognlie, M., and Straub, L. (2020). *Using the Sequence-Space Jacobian to Solve and Estimate Heterogeneous-Agent Models*.
**Original code:** [shade-econ](https://github.com/shade-econ/sequence-jacobian/#sequence-space-jacobian)
**This code:** Illustrates the sequence-space method. The original paper shows how to do it computationally efficiently and for a general class of models.
**Step 1:** Compute the Jacobian for the household block around the stationary equilibrium
```
def jac(model,price,dprice=1e-4,do_print=True):
t0_all = time.time()
if do_print: print(f'price is {price}')
par = model.par
sol = model.sol
sim = model.sim
# a. step 1: solve backwards
t0 = time.time()
path_r = np.repeat(par.r_ss,par.path_T)
path_w = np.repeat(par.w_ss,par.path_T)
if price == 'r': path_r[-1] += dprice
elif price == 'w': path_w[-1] += dprice
model.solve_household_path(path_r,path_w,do_print=False)
if do_print: print(f'solved backwards in {elapsed(t0)}')
# b. step 2: derivatives
t0 = time.time()
diff_Ds = np.zeros((par.path_T,*sim.D.shape))
diff_as = np.zeros(par.path_T)
diff_cs = np.zeros(par.path_T)
for s in range(par.path_T):
t_ =(par.path_T-1)-s
simulate_forwards(sim.D,sol.path_i[t_],sol.path_w[t_],par.e_trans.T,diff_Ds[s])
diff_Ds[s] = (diff_Ds[s]-sim.D)/dprice
diff_as[s] = (np.sum(sol.path_a[t_]*sim.D)-np.sum(sol.a*sim.D))/dprice
diff_cs[s] = (np.sum(sol.path_c[t_]*sim.D)-np.sum(sol.c*sim.D))/dprice
if do_print: print(f'derivatives calculated in {elapsed(t0)}')
# c. step 3: expectation factors
t0 = time.time()
# demeaning improves numerical stability
def demean(x):
return x - x.sum()/x.size
exp_as = np.zeros((par.path_T-1,*sol.a.shape))
exp_as[0] = demean(sol.a)
exp_cs = np.zeros((par.path_T-1,*sol.c.shape))
exp_cs[0] = demean(sol.c)
for t in range(1,par.path_T-1):
simulate_forwards_transpose(exp_as[t-1],sol.i,sol.w,par.e_trans,exp_as[t])
exp_as[t] = demean(exp_as[t])
simulate_forwards_transpose(exp_cs[t-1],sol.i,sol.w,par.e_trans,exp_cs[t])
exp_cs[t] = demean(exp_cs[t])
    if do_print: print(f'expectation factors calculated in {elapsed(t0)}')
# d. step 4: F
t0 = time.time()
Fa = np.zeros((par.path_T,par.path_T))
Fa[0,:] = diff_as
Fc = np.zeros((par.path_T,par.path_T))
Fc[0,:] = diff_cs
Fa[1:, :] = exp_as.reshape((par.path_T-1, -1)) @ diff_Ds.reshape((par.path_T, -1)).T
Fc[1:, :] = exp_cs.reshape((par.path_T-1, -1)) @ diff_Ds.reshape((par.path_T, -1)).T
if do_print: print(f'f calculated in {elapsed(t0)}')
t0 = time.time()
# e. step 5: J
Ja = Fa.copy()
for t in range(1, Ja.shape[1]): Ja[1:, t] += Ja[:-1, t - 1]
Jc = Fc.copy()
for t in range(1, Jc.shape[1]): Jc[1:, t] += Jc[:-1, t - 1]
if do_print: print(f'J calculated in {elapsed(t0)}')
# f. save
setattr(model.sol,f'jac_curlyK_{price}',Ja)
setattr(model.sol,f'jac_C_{price}',Jc)
if do_print: print(f'full Jacobian calculated in {elapsed(t0_all)}\n')
jac(model,'r')
jac(model,'w')
```
**Inspect Jacobians:**
```
fig = plt.figure(figsize=(12,8))
T_fig = 200
# curlyK_r
ax = fig.add_subplot(2,2,1)
for s in [0,25,50,75,100]:
ax.plot(np.arange(T_fig),sol.jac_curlyK_r[s,:T_fig],'-o',ms=2,label=f'$s={s}$')
ax.legend(frameon=True)
ax.set_title(r'$\mathcal{J}^{\mathcal{K},r}$')
ax.set_xlim([0,T_fig])
# curlyK_w
ax = fig.add_subplot(2,2,2)
for s in [0,25,50,75,100]:
ax.plot(np.arange(T_fig),sol.jac_curlyK_w[s,:T_fig],'-o',ms=2)
ax.set_title(r'$\mathcal{J}^{\mathcal{K},w}$')
ax.set_xlim([0,T_fig])
# C_r
ax = fig.add_subplot(2,2,3)
for s in [0,25,50,75,100]:
ax.plot(np.arange(T_fig),sol.jac_C_r[s,:T_fig],'-o',ms=2,label=f'$s={s}$')
ax.legend(frameon=True)
ax.set_title(r'$\mathcal{J}^{C,r}$')
ax.set_xlim([0,T_fig])
# C_w
ax = fig.add_subplot(2,2,4)
for s in [0,25,50,75,100]:
ax.plot(np.arange(T_fig),sol.jac_C_w[s,:T_fig],'-o',ms=2)
ax.set_title(r'$\mathcal{J}^{C,w}$')
ax.set_xlim([0,T_fig])
fig.tight_layout()
fig.savefig('figs/jacobians.pdf')
```
**Step 2:** Compute the Jacobians for the firm block around the stationary equilibrium (analytical).
```
sol.jac_r_K[:] = 0
sol.jac_w_K[:] = 0
sol.jac_r_Z[:] = 0
sol.jac_w_Z[:] = 0
for s in range(par.path_T):
for t in range(par.path_T):
if t == s+1:
sol.jac_r_K[t,s] = par.alpha*(par.alpha-1)*par.Z*par.K_ss**(par.alpha-2)
sol.jac_w_K[t,s] = (1-par.alpha)*par.alpha*par.Z*par.K_ss**(par.alpha-1)
if t == s:
sol.jac_r_Z[t,s] = par.alpha*par.Z*par.K_ss**(par.alpha-1)
sol.jac_w_Z[t,s] = (1-par.alpha)*par.Z*par.K_ss**par.alpha
```
**Step 3:** Use the chain rule and solve for $G$.
```
H_K = sol.jac_curlyK_r @ sol.jac_r_K + sol.jac_curlyK_w @ sol.jac_w_K - np.eye(par.path_T)
H_Z = sol.jac_curlyK_r @ sol.jac_r_Z + sol.jac_curlyK_w @ sol.jac_w_Z
G_K_Z = -np.linalg.solve(H_K, H_Z) # H_K^(-1)H_Z
```
**Step 4:** Find the effect on prices and on outcomes other than $K$.
```
G_r_Z = sol.jac_r_Z + sol.jac_r_K@G_K_Z
G_w_Z = sol.jac_w_Z + sol.jac_w_K@G_K_Z
G_C_Z = sol.jac_C_r@G_r_Z + sol.jac_C_w@G_w_Z
```
**Step 5:** Plot impulse-responses.
**Example I:** News shock (i.e. in a single period) vs. persistent shock where $ dZ_t = \rho dZ_{t-1} $ and $dZ_0$ is the initial shock.
```
fig = plt.figure(figsize=(12,4))
T_fig = 50
# left: news shock
ax = fig.add_subplot(1,2,1)
for s in [5,10,15,20,25]:
dZ = (1+par.Z_sigma)*par.Z*(np.arange(par.path_T) == s)
dK = G_K_Z@dZ
ax.plot(np.arange(T_fig),dK[:T_fig],'-o',ms=2,label=f'$s={s}$')
ax.legend(frameon=True)
ax.set_title(r'1% TFP news shock in period $s$')
ax.set_ylabel('$K_t-K_{ss}$')
ax.set_xlim([0,T_fig])
# right: persistent shock
ax = fig.add_subplot(1,2,2)
dZ = model.get_path_Z()-par.Z
dK = G_K_Z@dZ
ax.plot(np.arange(T_fig),dK[:T_fig],'-o',ms=2)
ax.set_title(r'1% TFP shock with persistence $\rho=0.90$')
ax.set_ylabel('$K_t-K_{ss}$')
ax.set_xlim([0,T_fig])
fig.tight_layout()
fig.savefig('figs/news_vs_persistent_shock.pdf')
```
**Example II:** Further effects of persistent shock.
```
fig = plt.figure(figsize=(12,8))
T_fig = 50
ax_K = fig.add_subplot(2,2,1)
ax_r = fig.add_subplot(2,2,2)
ax_w = fig.add_subplot(2,2,3)
ax_C = fig.add_subplot(2,2,4)
ax_K.set_title('$K_t-K_{ss}$ after 1% TFP shock')
ax_K.set_xlim([0,T_fig])
ax_r.set_title('$r_t-r_{ss}$ after 1% TFP shock')
ax_r.set_xlim([0,T_fig])
ax_w.set_title('$w_t-w_{ss}$ after 1% TFP shock')
ax_w.set_xlim([0,T_fig])
ax_C.set_title('$C_t-C_{ss}$ after 1% TFP shock')
ax_C.set_xlim([0,T_fig])
dZ = model.get_path_Z()-par.Z
dK = G_K_Z@dZ
ax_K.plot(np.arange(T_fig),dK[:T_fig],'-o',ms=2)
dr = G_r_Z@dZ
ax_r.plot(np.arange(T_fig),dr[:T_fig],'-o',ms=2)
dw = G_w_Z@dZ
ax_w.plot(np.arange(T_fig),dw[:T_fig],'-o',ms=2)
dC = G_C_Z@dZ
ax_C.plot(np.arange(T_fig),dC[:T_fig],'-o',ms=2)
fig.tight_layout()
fig.savefig('figs/irfs.pdf')
```
## Non-linear transition path
Use the Jacobian to speed up solving for the non-linear transition path using a quasi-Newton method.
**1. Solver**
```
def broyden_solver(f,x0,jac,tol=1e-8,max_iter=100,backtrack_fac=0.5,max_backtrack=30,do_print=False):
""" numerical solver using the broyden method """
# a. initial
x = x0.ravel()
y = f(x)
# b. iterate
for it in range(max_iter):
# i. current difference
abs_diff = np.max(np.abs(y))
if do_print: print(f' it = {it:3d} -> max. abs. error = {abs_diff:12.8f}')
if abs_diff < tol: return x
# ii. new x
dx = np.linalg.solve(jac,-y)
# iii. evalute with backtrack
for _ in range(max_backtrack):
try: # evaluate
ynew = f(x+dx)
except ValueError: # backtrack
dx *= backtrack_fac
else: # update jac and break from backtracking
dy = ynew-y
jac = jac + np.outer(((dy - jac @ dx) / np.linalg.norm(dx) ** 2), dx)
y = ynew
x += dx
break
else:
raise ValueError('too many backtracks, maybe bad initial guess?')
else:
raise ValueError(f'no convergence after {max_iter} iterations')
```
**2. Target function**
$$\boldsymbol{H}(\boldsymbol{K},\boldsymbol{Z},D_{ss}) = \mathcal{K}_{t}(\{r(Z_{s},K_{s-1}),w(Z_{s},K_{s-1})\}_{s\geq0},D_{ss})-K_{t}=0$$
```
def target(path_K,path_Z,model,D0,full_output=False):
par = model.par
sim = model.sim
path_r = np.zeros(path_K.size)
path_w = np.zeros(path_K.size)
# a. implied prices
K0lag = np.sum(par.a_grid[np.newaxis,:]*D0)
path_Klag = np.insert(path_K,0,K0lag)
for t in range(par.path_T):
path_r[t] = model.implied_r(path_Klag[t],path_Z[t])
path_w[t] = model.implied_w(path_r[t],path_Z[t])
# b. solve and simulate
model.solve_household_path(path_r,path_w)
model.simulate_household_path(D0)
# c. market clearing
if full_output:
return path_r,path_w
else:
return sim.path_K-path_K
```
**3. Solve**
```
path_Z = model.get_path_Z()
f = lambda x: target(x,path_Z,model,sim.D)
t0 = time.time()
path_K = broyden_solver(f,x0=np.repeat(par.K_ss,par.path_T),jac=H_K,do_print=True)
path_r,path_w = target(path_K,path_Z,model,sim.D,full_output=True)
print(f'\nIRF found in {elapsed(t0)}')
```
**4. Plot**
```
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
ax.set_title('capital, $K_t$')
dK = G_K_Z@(path_Z-par.Z)
ax.plot(np.arange(T_fig),dK[:T_fig] + par.K_ss,'-o',ms=2,label=f'linear')
ax.plot(np.arange(T_fig),path_K[:T_fig],'-o',ms=2,label=f'non-linear')
if DO_TP_RELAX:
    ax.plot(np.arange(T_fig),path_K_relax[:T_fig],'--o',ms=2,label=f'non-linear (relaxation)')
ax.legend(frameon=True)
ax = fig.add_subplot(1,2,2)
ax.set_title('interest rate, $r_t$')
dr = G_r_Z@(path_Z-par.Z)
ax.plot(np.arange(T_fig),dr[:T_fig] + par.r_ss,'-o',ms=2,label=f'linear')
ax.plot(np.arange(T_fig),path_r[:T_fig],'-o',ms=2,label=f'non-linear')
if DO_TP_RELAX:
    ax.plot(np.arange(T_fig),path_r_relax[:T_fig],'--o',ms=2,label=f'non-linear (relaxation)')
fig.tight_layout()
fig.savefig('figs/non_linear.pdf')
```
## Covariances
Assume that $Z_t$ is stochastic and follows
$$ d\tilde{Z}_t = \rho d\tilde{Z}_{t-1} + \sigma\epsilon_t,\,\,\, \epsilon_t \sim \mathcal{N}(0,1) $$
The covariances between all outcomes can be calculated as follows.
```
# a. choose parameter
rho = 0.90
sigma = 0.10
# b. find change in outputs
dZ = rho**(np.arange(par.path_T))
dC = G_C_Z@dZ
dK = G_K_Z@dZ
# c. covariance of consumption
print('auto-covariance of consumption:\n')
for k in range(5):
if k == 0:
autocov_C = sigma**2*np.sum(dC*dC)
else:
autocov_C = sigma**2*np.sum(dC[:-k]*dC[k:])
print(f' k = {k}: {autocov_C:.4f}')
# d. covariance of consumption and capital
cov_C_K = sigma**2*np.sum(dC*dK)
print(f'\ncovariance of consumption and capital: {cov_C_K:.4f}')
```
# Extra: No idiosyncratic uncertainty
This section solves for the transition path in the case without idiosyncratic uncertainty.
**Analytical solution for steady state:**
```
r_ss_pf = (1/par.beta-1) # from euler-equation
w_ss_pf = model.implied_w(r_ss_pf,par.Z)
K_ss_pf = model.firm_demand(r_ss_pf,par.Z)
Y_ss_pf = model.firm_production(K_ss_pf,par.Z)
C_ss_pf = Y_ss_pf-par.delta*K_ss_pf
print(f'r: {r_ss_pf:.6f}')
print(f'w: {w_ss_pf:.6f}')
print(f'Y: {Y_ss_pf:.6f}')
print(f'C: {C_ss_pf:.6f}')
print(f'K/Y: {K_ss_pf/Y_ss_pf:.6f}')
```
**Function for finding consumption and capital paths given paths of interest rates and wages:**
It can be shown that
$$ C_{0}=\frac{(1+r_{0})a_{-1}+\sum_{t=0}^{\infty}\frac{1}{\mathcal{R}_{t}}w_{t}}{\sum_{t=0}^{\infty}\beta^{t/\sigma}\mathcal{R}_{t}^{\frac{1-\sigma}{\sigma}}} $$
where
$$ \mathcal{R}_{t} =\begin{cases} 1 & \text{if }t=0\\ (1+r_{t})\mathcal{R}_{t-1} & \text{else} \end{cases} $$
Otherwise the **Euler-equation** holds
$$ C_t = (\beta (1+r_{t}))^{\frac{1}{\sigma}}C_{t-1} $$
```
def path_CK_func(K0,path_r,path_w,r_ss,w_ss,model):
par = model.par
# a. initialize
wealth = (1+path_r[0])*K0
inv_MPC = 0
# b. solve
RT = 1
max_iter = 5000
t = 0
while True and t < max_iter:
# i. prices padded with steady state
r = path_r[t] if t < par.path_T else r_ss
w = path_w[t] if t < par.path_T else w_ss
# ii. interest rate factor
if t == 0:
fac = 1
else:
fac *= (1+r)
# iii. accumulate
add_wealth = w/fac
add_inv_MPC = par.beta**(t/par.sigma)*fac**((1-par.sigma)/par.sigma)
if np.fmax(add_wealth,add_inv_MPC) < 1e-12:
break
else:
wealth += add_wealth
inv_MPC += add_inv_MPC
# iv. increment
t += 1
    # c. simulate
path_C = np.empty(par.path_T)
path_K = np.empty(par.path_T)
for t in range(par.path_T):
if t == 0:
path_C[t] = wealth/inv_MPC
K_lag = K0
else:
path_C[t] = (par.beta*(1+path_r[t]))**(1/par.sigma)*path_C[t-1]
K_lag = path_K[t-1]
path_K[t] = (1+path_r[t])*K_lag + path_w[t] - path_C[t]
return path_K,path_C
```
**Test with steady state prices:**
```
path_r_pf = np.repeat(r_ss_pf,par.path_T)
path_w_pf = np.repeat(w_ss_pf,par.path_T)
path_K_pf,path_C_pf = path_CK_func(K_ss_pf,path_r_pf,path_w_pf,r_ss_pf,w_ss_pf,model)
print(f'C_ss: {C_ss_pf:.6f}')
print(f'C[0]: {path_C_pf[0]:.6f}')
print(f'C[-1]: {path_C_pf[-1]:.6f}')
assert np.isclose(C_ss_pf,path_C_pf[0])
```
**Shock paths** where the interest rate deviates in one period:
```
dr = 1e-4
ts = np.array([0,20,40])
path_C_pf_shock = np.empty((ts.size,par.path_T))
path_K_pf_shock = np.empty((ts.size,par.path_T))
for i,t in enumerate(ts):
path_r_pf_shock = path_r_pf.copy()
path_r_pf_shock[t] += dr
K,C = path_CK_func(K_ss_pf,path_r_pf_shock,path_w_pf,r_ss_pf,w_ss_pf,model)
path_K_pf_shock[i,:] = K
path_C_pf_shock[i,:] = C
```
**Plot paths:**
```
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
ax.plot(np.arange(par.path_T),path_C_pf,'-o',ms=2,label=f'$r_t = r^{{\\ast}}$')
for i,t in enumerate(ts):
ax.plot(np.arange(par.path_T),path_C_pf_shock[i],'-o',ms=2,label=f'shock to $r_{{{t}}}$')
ax.set_xlim([0,50])
ax.set_xlabel('periods')
ax.set_ylabel('consumption, $C_t$');
ax = fig.add_subplot(1,2,2)
ax.plot(np.arange(par.path_T),path_K_pf,'-o',ms=2,label=f'$r_t = r^{{\\ast}}$')
for i,t in enumerate(ts):
ax.plot(np.arange(par.path_T),path_K_pf_shock[i],'-o',ms=2,label=f'shock to $r_{{{t}}}$')
ax.legend(frameon=True)
ax.set_xlim([0,50])
ax.set_xlabel('$t$')
ax.set_ylabel('capital, $K_t$');
fig.tight_layout()
```
**Find transition path with shooting algorithm:**
```
# a. allocate
dT = 200
path_C_pf = np.empty(par.path_T)
path_K_pf = np.empty(par.path_T)
path_r_pf = np.empty(par.path_T)
path_w_pf = np.empty(par.path_T)
# b. settings
C_min = C_ss_pf
C_max = C_ss_pf + K_ss_pf
K_min = 1.5 # guess on lower consumption if below this
K_max = 3 # guess on higher consumption if above this
tol_pf = 1e-6
max_iter_pf = 5000
path_K_pf[0] = K_ss_pf # capital is pre-determined
# c. iterate
t = 0
it = 0
while True:
# i. update prices
path_r_pf[t] = model.implied_r(path_K_pf[t],path_Z[t])
path_w_pf[t] = model.implied_w(path_r_pf[t],path_Z[t])
# ii. consumption
if t == 0:
C0 = (C_min+C_max)/2
path_C_pf[t] = C0
else:
path_C_pf[t] = (1+path_r_pf[t])*par.beta*path_C_pf[t-1]
# iii. check for steady state
if path_K_pf[t] < K_min:
t = 0
C_max = C0
continue
elif path_K_pf[t] > K_max:
t = 0
C_min = C0
continue
elif t > 10 and np.sqrt((path_C_pf[t]-C_ss_pf)**2+(path_K_pf[t]-K_ss_pf)**2) < tol_pf:
path_C_pf[t:] = path_C_pf[t]
path_K_pf[t:] = path_K_pf[t]
for k in range(par.path_T):
path_r_pf[k] = model.implied_r(path_K_pf[k],path_Z[k])
path_w_pf[k] = model.implied_w(path_r_pf[k],path_Z[k])
break
# iv. update capital
path_K_pf[t+1] = (1+path_r_pf[t])*path_K_pf[t] + path_w_pf[t] - path_C_pf[t]
# v. increment
t += 1
it += 1
if it > max_iter_pf: break
```
**Plot deviations from steady state:**
```
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(np.arange(par.path_T),path_Z,'-o',ms=2)
ax.set_xlim([0,200])
ax.set_title('technology, $Z_t$')
ax = fig.add_subplot(2,2,2)
ax.plot(np.arange(par.path_T),path_K-model.par.kd_ss,'-o',ms=2,label='$\sigma_e = 0.5$')
ax.plot(np.arange(par.path_T),path_K_pf-K_ss_pf,'-o',ms=2,label='$\sigma_e = 0$')
ax.legend(frameon=True)
ax.set_title('capital, $k_t$')
ax.set_xlim([0,200])
ax = fig.add_subplot(2,2,3)
ax.plot(np.arange(par.path_T),path_r-model.par.r_ss,'-o',ms=2,label='$\sigma_e = 0.5$')
ax.plot(np.arange(par.path_T),path_r_pf-r_ss_pf,'-o',ms=2,label='$\sigma_e = 0$')
ax.legend(frameon=True)
ax.set_title('interest rate, $r_t$')
ax.set_xlim([0,200])
ax = fig.add_subplot(2,2,4)
ax.plot(np.arange(par.path_T),path_w-model.par.w_ss,'-o',ms=2,label='$\sigma_e = 0.5$')
ax.plot(np.arange(par.path_T),path_w_pf-w_ss_pf,'-o',ms=2,label='$\sigma_e = 0$')
ax.legend(frameon=True)
ax.set_title('wage, $w_t$')
ax.set_xlim([0,200])
fig.tight_layout()
```
| github_jupyter |
# LeNet Lab

Source: Yan LeCun
## Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
```
The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
```
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
```
## Visualize Data
View a sample from the dataset.
You do not need to modify this section.
```
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
```
## Preprocess Data
Shuffle the training data.
You do not need to modify this section.
```
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
```
## Setup TensorFlow
The `EPOCHS` and `BATCH_SIZE` values affect the training speed and model accuracy.
You do not need to modify this section.
```
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
```
## TODO: Implement LeNet-5
Implement the [LeNet-5](http://yann.lecun.com/exdb/lenet/) neural network architecture.
This is the only cell you need to edit.
### Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
### Architecture
**Layer 1: Convolutional.** The output shape should be 28x28x6.
**Activation.** Your choice of activation function.
**Pooling.** The output shape should be 14x14x6.
**Layer 2: Convolutional.** The output shape should be 10x10x16.
**Activation.** Your choice of activation function.
**Pooling.** The output shape should be 5x5x16.
**Flatten.** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using `tf.contrib.layers.flatten`, which is already imported for you.
**Layer 3: Fully Connected.** This should have 120 outputs.
**Activation.** Your choice of activation function.
**Layer 4: Fully Connected.** This should have 84 outputs.
**Activation.** Your choice of activation function.
**Layer 5: Fully Connected (Logits).** This should have 10 outputs.
### Output
Return the result of the final fully connected layer (the logits).
```
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
```
## Features and Labels
Train LeNet to classify [MNIST](http://yann.lecun.com/exdb/mnist/) data.
`x` is a placeholder for a batch of input images.
`y` is a placeholder for a batch of output labels.
You do not need to modify this section.
```
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
```
## Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
```
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
```
## Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
```
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
```
## Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
```
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
```
## Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
```
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
```
| github_jupyter |
```
import numpy as np
import scipy as sp
import scipy.interpolate
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats
import scipy.optimize
from scipy.optimize import curve_fit
import minkowskitools as mt
import importlib
importlib.reload(mt)
n=4000
rand_points = np.random.uniform(size=(2, n-2))
edge_points = np.array([[0.0, 1.0],[0.0, 1.0]])
points = np.concatenate((rand_points, edge_points), axis=1)
connections = mt.get_connections(points, pval=2, radius=0.05)
quick_data = []
for i in range(1000):
n=1000
rand_points = np.random.uniform(size=(2, n-2))
edge_points = np.array([[0.0, 1.0],[0.0, 1.0]])
points = np.concatenate((rand_points, edge_points), axis=1)
connections = mt.get_connections(points, pval=2, radius=0.1)
no_points = mt.perc_thresh_n(connections)
quick_data.append(no_points)
plt.hist(quick_data, cumulative=True, bins=100)
plt.gca().set(xlim=(0, 1000), xlabel='Number of Points', ylabel='Cumulative Density', title='Connection Threshold')
# plt.savefig('img/pval2r05.pdf')
plt.gca().set(xlim=(0, np.max(quick_data)))
plt.hist(quick_data, bins=100);
n=1000
rand_points = np.random.uniform(size=(2, n-2))
edge_points = np.array([[0.0, 1.0],[0.0, 1.0]])
points = np.concatenate((rand_points, edge_points), axis=1)
mt.smallest_r(points, pval=2)
n=1000
trials = 100
all_results = {}
results = []
for i in range(trials):
rand_points = np.random.uniform(size=(2, n-2))
edge_points = np.array([[0.0, 1.0],[0.0, 1.0]])
points = np.concatenate((rand_points, edge_points), axis=1)
results.append(mt.smallest_r(points, pval=2)[1])
plt.hist(results, cumulative=True, bins=100);
mt.r1_area2D(2)*(.05**2)*n
ns = [1000]
ps = [2]
mt.separate_perc_r(ns, ps, 'outputs/test_perc.txt', repeats=10)
import importlib
importlib.reload(mt)
data_dict = {}
for pval in [0.8, 1, 1.2]:
data_dict[pval] = []
n = 1000
r = 0.1
for i in range(1000):
rand_points = np.random.uniform(size=(2, n-2))
edge_points = np.array([[0.0, 1.0],[0.0, 1.0]])
points = np.concatenate((rand_points, edge_points), axis=1)
connections = mt.get_connections(points, pval=pval, radius=r)
no_points = mt.perc_thresh_n(connections)
data_dict[pval].append(no_points)
for pval in [0.8, 1, 1.2]:
plt.hist(data_dict[pval], cumulative=True, bins=100, label=pval, alpha=.3);
plt.legend()
plt.gca().set(title='Number of Points for Connectedness', xlabel='Points', ylabel='Cumulative Frequency');
# plt.savefig('img/PointsCumul.pdf')
data_dict_r = {}
for pval in [0.8, 1, 1.2]:
data_dict_r[pval] = []
n = 1000
r = 0.1
for i in range(1000):
print(i, end=',')
rand_points = np.random.uniform(size=(2, n-2))
edge_points = np.array([[0.0, 1.0],[0.0, 1.0]])
points = np.concatenate((rand_points, edge_points), axis=1)
        r_min = mt.smallest_r(points, pval)
data_dict_r[pval].append(r_min[1])
fig, [ax1, ax2] = plt.subplots(ncols=2, figsize=(14, 5))
for pval in [0.8, 1, 1.2]:
ax1.hist(data_dict_r[pval], cumulative=True, bins=100, label=pval, alpha=.3);
ax1.legend()
ax1.set(xlabel='r', ylabel='Cumulative Frequency')
# plt.savefig('img/RadCumul.pdf')
# suptitle='Minimum r for Connectedness'
apprx_thresh = [0.065, 0.068, 0.08]
ps = [1.2, 1, 0.8]
for p, thresh, col in zip(ps, apprx_thresh, ['k', 'g', 'b']):
rs = np.arange(0.05, 0.14, 0.01)
ys = 1000*(mt.r1_area2D(p)*rs*rs)
plt.scatter(thresh, 1000*(mt.r1_area2D(p)*thresh*thresh), c=col)
plt.plot(rs, ys, c=col, alpha=0.6)
plt.axvline(x=thresh, c=col, ls='--', label=p, alpha=0.6)
n=10
rand_points = np.random.uniform(size=(2, n-2))
edge_points = np.array([[0.0, 1.0],[0.0, 1.0]])
points = np.concatenate((rand_points, edge_points), axis=1)
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 5))
for pval, col in zip([0.8, 1, 1.2], ['k', 'g', 'b']):
ax1.hist(data_dict_r[pval], bins=np.arange(0.05, 0.14, 0.0005), label=pval, alpha=.3, color=col, cumulative=1, histtype='step', lw=5)
hist_out = ax2.hist(data_dict_r[pval], bins=50, color=col, alpha=0.3, label=pval)
ys = hist_out[0]
xs = (hist_out[1][1:]+hist_out[1][:-1])/2
pt = thresh_calc(xs, ys, sig_fract=.8, n_av=5)[0]
ax1.axvline(x=pt, ls='--', alpha=0.6, c=col)
ax2.axvline(x=pt, ls='--', alpha=0.6, c=col)
ax1.axhline(y=500, alpha=0.2, c='r')
# popt, pcov = curve_fit(skewed, xs, ys)
# plt.plot(xs, skewed(xs, *popt))
ax1.set(xlim=(0.05, 0.12), xlabel='r', ylabel='Cumulative Frequency')
ax2.set(xlim=(0.05, 0.12), xlabel='r', ylabel='Frequency')
ax1.legend(loc='lower right')
ax2.legend()
plt.savefig('img/r_perc.pdf')
# plt.gca().set(title='Minimum r for Connectedness', xlabel='r', ylabel='Cumulative Frequency', xlim=(0.05, .1))
for pval in [0.8, 1, 1.2]:
hist_out = np.histogram(data_dict_r[pval], bins=50);
ys = hist_out[0]
xs = (hist_out[1][1:]+hist_out[1][:-1])/2
# popt, pcov = curve_fit(skewed, xs, ys)
plt.scatter(np.log(xs), np.log(ys))
ys = hist_out[0]
xs = (hist_out[1][1:]+hist_out[1][:-1])/2
popt, pcov = curve_fit(skewed, xs, ys)
plt.plot(xs, skewed(xs, *popt))
def skewed(x, a, b, c, d):
# (100*(xs-.06), 4, 50)
return d*sp.stats.skewnorm.pdf(a*x-b, c)
popt, pcov = curve_fit(skewed, xs, ys)
hist_out = plt.hist(data_dict_r[pval], bins=50, label=pval, alpha=.3)
plt.plot(xs, skewed(xs, *popt))
# plt.plot(xs, skewed(xs, 100, 6, 4, 50))
# plt.plot(xs, ys, label='Fit')
plt.legend()
popt
def moving_average(a, n=3):
    # moving average with window n; the result has len(a) - n + 1 entries
    ret = np.cumsum(np.array(a))
    ret[n:] = ret[n:] - ret[:-n]
    return ret[n - 1:] / n
def thresh_calc(data, sig_fract=.8, n_av=5, bins=50):
    # locate the histogram peak by fitting a quadratic to the smoothed bins
    # whose counts exceed sig_fract of the maximum count
    hist_data = np.histogram(data, bins=bins)
    xs, ys = (hist_data[1][1:]+hist_data[1][:-1])/2, hist_data[0]  # bin centres and counts
    smoothxs = moving_average(xs, n=n_av)
    smoothys = moving_average(ys, n=n_av)
    inds = np.where(smoothys > max(smoothys)*sig_fract)
    vals, err = np.polyfit(smoothxs[inds], smoothys[inds], 2, cov=True)
    stat_point = -.5*vals[1]/vals[0]  # vertex of the fitted parabola
    fract_err = np.sqrt(err[0, 0]/(vals[0]**2) + err[1, 1]/(vals[1]**2))
    return stat_point, fract_err*stat_point
apprx_thresh = [450, 500, 600]
ps = [1.2, 1, 0.8]
for p, thresh, col in zip(ps, apprx_thresh, ['k', 'g', 'b']):
xs = np.arange(1000)
ys = xs*(mt.r1_area2D(p)*.1*.1)
plt.scatter(thresh, thresh*(mt.r1_area2D(p)*.1*.1), c=col)
plt.plot(xs, ys, c=col, alpha=0.6)
plt.axvline(x=thresh, c=col, ls='--', label=p, alpha=0.6)
def separate_perc_n(p, r, n_max=None):
if n_max==None:
n_max=int(4/(mt.r1_area2D(p)*r*r))
print(n_max)
rand_points = np.random.uniform(size=(2, n_max-2))
edge_points = np.array([[0.0, 1.0],[0.0, 1.0]])
points = np.concatenate((rand_points, edge_points), axis=1)
connections = mt.get_connections(points, radius=r, pval=p)
return mt.perc_thresh_n(connections)
def ensemble_perc_n(fileName, ps, rs, repeats=1, verbose=True):
for p, r in zip(ps, rs):
if verbose:
print(f'p:{p}, r:{r}')
for i in range(repeats):
if verbose:
print(i, end=' ')
thresh = separate_perc_n(p, r)
file1 = open("{}".format(fileName),"a")
file1.writelines(f'{p} - {r} - {thresh}\n')
file1.close()
if verbose:
print()
return fileName
ensemble_perc_n('new_test.txt', [.8, 1.2, 2], [0.2, 0.1, 0.05], repeats=10)
pd.read_csv('new_test.txt', header=None, delimiter=" - ")
p=.8
r=0.05
4/(mt.r1_area2D(p)*r*r)
pn1 = pd.read_csv('outputs/perc_n.txt', names=['p', 'r', 'n'], delimiter=" - ")
pn1.tail()
pn1['edges'] = pn1['n']*pn1['r']*pn1['r']*mt.kernel_area2D(pn1['p'])
plt.hist(pn1[pn1['edges'] < 2.95]['edges'], bins=50, cumulative=1);
# plt.hist(pn1['edges'], bins=50, cumulative=1);
plt.gca().set(xlabel='Average Number Edges from Node', ylabel='Cumulative Frequency', );
plt.hist(pn1[pn1['edges'] < 2.95]['edges'], bins=50, cumulative=0)
plt.gca().set(xlabel='Average Number Edges from Node', ylabel='Frequency', )
for bins in [50, 75, 100]:
plt.plot(np.arange(0.5, 0.95, 0.01), [thresh_calc(pn1[pn1['edges'] < 2.95]['edges'], sig_fract=elem, bins=bins)[0] for elem in np.arange(0.5, 0.95, 0.01)], label=f'{bins} bins')
plt.legend()
plt.gca().set(xlabel='Fraction for bars to be considered', ylabel='Percolation Threshold', );
# #input file
# fin = open('outputs/perc_r5000clean.txt', "rt")
# #output file to write the result to
# fout = open("outputs/perc_r5000clean2.txt", "wt")
# #for each line in the input file
# for line in fin:
# #read replace the string and write to output file
# fout.write(line.replace('-[[', '- [['))
# #close input and output files
# fin.close()
# fout.close()
pr1 = pd.read_csv('outputs/perc_r5000clean2.txt', names=['p', 'n', 'r', 'path'], delimiter=" - ")
pr1['edges'] = pr1['n']*pr1['r']*pr1['r']*mt.kernel_area2D(pr1['p'])
fig, ax = plt.subplots(figsize=(7, 7))
# axins = ax.inset_axes([5, 8, 150, 250])
axins = ax.inset_axes([0.5, 0.57, 0.5, 0.43])
hist_data = axins.hist(pr1['edges'], bins=100, label='Raw Data')
axins.legend(loc='upper right')
n_av = 5
sig_fract = .7
plot_fract = 0.1
xs, ys = (hist_data[1][1:]+hist_data[1][:-1])/2, hist_data[0]  # bin centres and counts
smoothxs = (moving_average(xs, n=n_av))
smoothys = (moving_average(ys, n=n_av))
inds = np.where(smoothys > max(smoothys)*sig_fract)
notinds = np.where(smoothys <= max(smoothys)*sig_fract)
[a, b, c], err = np.polyfit(smoothxs[inds], smoothys[inds], 2, cov=True)
# plt.plot(xs, vals[0]*xs*xs + vals[1]*xs + vals[2])
# plotx = xs[inds]
ax.scatter(smoothxs[inds], smoothys[inds], c='b', alpha=0.5, label='Points in Fit')
ax.scatter(smoothxs[notinds], smoothys[notinds], c='k', alpha=0.2, label='Smoothed Points')
plotx = smoothxs[inds]
lowerlim = max(smoothys)*plot_fract
quadx = np.arange((-b+np.sqrt(b*b - 4*a*(c-lowerlim)))/(2*a), (-b-np.sqrt(b*b - 4*a*(c-lowerlim)))/(2*a), 0.001)
quady = a*quadx*quadx + b*quadx + c
plotinds = np.where(quady > 0)
ax.axhline(max(smoothys)*sig_fract, color='r', alpha=0.5, ls='--', label=f'Fraction={sig_fract}')
ax.axvline(thresh_calc(pr1['edges'])[0], color='g', alpha=0.5, ls='--', label=f'Threshold')
ax.plot(quadx, quady, c='b', alpha=0.6, label='Quadratic Fit')
ax.legend(loc='best', bbox_to_anchor=(0.5, 0., 0.5, 0.45))
ax.set(xlabel='Average number of edges per node', ylabel='Frequency', title='Determining the Percolation Threshold');
plt.savefig('img/percthreshn5000.pdf')
ss = np.arange(0.2, 0.85, 0.01)
plt.plot(ss, [thresh_calc(pr1['edges'], sig_fract=s)[0] for s in ss])
r5000 = pd.read_csv('outputs/perc_r5000clean2.txt', names=['p', 'n', 'r', 'path'], delimiter=" - ")
r5000['e'] = mt.kernel_area2D(r5000['p'])*r5000['r']*r5000['r']*r5000['n']
ps = [0.4, 0.6, 0.8, 1.0]
threshs = [thresh_calc(r5000[np.abs(r5000['p']-p) < 0.01]['e'], sig_fract=0.6)[0] for p in ps]
plt.plot(ps, threshs)
r5000[np.abs(r5000['p']-1) < .01]
thresh_calc(r5000[np.abs(r5000['p']-p) < 0.01]['e'])[0]
thresh_calc(r5000[np.abs(r5000['p']-0.6) < .1]['e'], sig_fract=.6)
```
| github_jupyter |
# Homework 3
## Loading the basic packages
```
import pandas as pd
import numpy as np
import sklearn
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold # used in crossvalidation
from sklearn.model_selection import KFold
import IPython
from time import time
```
## Short introduction
The goal of this task is to train and evaluate 3 different models based on Australian weather data. An equally important goal is to inspect and change the so-called hyperparameters of each of them.
### Loading the data
```
data = pd.read_csv("../../australia.csv")
```
### A first look at the data
```
data.info()
```
There are no missing values in the data, and it is perfectly prepared for machine learning. Let us still take a look at what the data frame looks like.
```
data.head()
```
## Random Forest
**Loading the required libraries**
```
from sklearn.ensemble import RandomForestClassifier
```
**Initializing the model**
```
rf_default = RandomForestClassifier()
```
**Hyperparameters**
```
params = rf_default.get_params()
params
```
**Changing a few hyperparameters**
```
params['n_estimators']=150
params['max_depth']=6
params['min_samples_leaf']=4
params['n_jobs']=4
params['random_state']=0
rf_modified = RandomForestClassifier()
rf_modified.set_params(**params)
```
## Extreme Gradient Boosting
**Loading the required libraries**
```
from xgboost import XGBClassifier
```
**Initializing the model**
```
xgb_default = XGBClassifier()
```
**Hyperparameters**
```
params = xgb_default.get_params()
params
```
**Changing a few hyperparameters**
```
params['n_estimators']=150
params['max_depth']=6
params['n_jobs']=4
params['random_state']=0
xgb_modified = XGBClassifier()
xgb_modified.set_params(**params)
```
## Support Vector Machines
**Loading the required libraries**
```
from sklearn.svm import SVC
```
**Initializing the model**
```
svc_default = SVC()
```
**Hyperparameters**
```
params = svc_default.get_params()
params
```
**Changing a few hyperparameters**
```
params['degree']=3
params['tol']=0.001
params['random_state']=0
svc_modified = SVC()
svc_modified.set_params(**params)
```
## Comment
At this point we have 3 models with modified hyperparameters, as well as their default counterparts. Let us now see how the results achieved by these models changed and, although this was not the goal of the task, check whether we perhaps managed to improve any of the models.
## Comparison
**Loading the required libraries**
```
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import roc_auc_score
```
### Helper functions
```
def cv_classifier(classifier,kfolds = 10, X = data.drop("RainTomorrow", axis = 1), y = data.RainTomorrow):
start_time = time()
scores ={}
scores["f1"]=[]
scores["accuracy"]=[]
scores["balanced_accuracy"]=[]
scores["precision"]=[]
scores["average_precision"]=[]
scores["roc_auc"]=[]
# Hardcoded crossvalidation metod, could be
cv= StratifiedKFold(n_splits=kfolds,shuffle=True,random_state=0)
for i, (train, test) in enumerate(cv.split(X, y)):
IPython.display.clear_output()
print(f"Model {i+1}/{kfolds}")
# Training model
classifier.fit(X.iloc[train, ], y.iloc[train], )
# Testing model
prediction = classifier.predict(X.iloc[test,])
# calculating and savings scores
scores["f1"].append( f1_score(y.iloc[test],prediction))
scores["accuracy"].append( accuracy_score(y.iloc[test],prediction))
scores["balanced_accuracy"].append( balanced_accuracy_score(y.iloc[test],prediction))
scores["precision"].append( precision_score(y.iloc[test],prediction))
scores["average_precision"].append( average_precision_score(y.iloc[test],prediction))
scores["roc_auc"].append( roc_auc_score(y.iloc[test],prediction))
IPython.display.clear_output()
print(f"Crossvalidation on {kfolds} folds done in {round((time()-start_time),2)}s")
return scores
def get_mean_scores(scores_dict):
means={}
for score_name in scores_dict:
means[score_name] = np.mean(scores_dict[score_name])
return means
def print_mean_scores(mean_scores_dict,precision=4):
for score_name in mean_scores_dict:
print(f"Mean {score_name} score is {round(mean_scores_dict[score_name]*100,precision)}%")
```
### Results
Below I present the prediction results of the models shown earlier. For contrast, I trained both the modified versions of the classifiers and the default ones. I must sadly admit that I am not the best at guessing, because the parameters I picked noticeably worsen the performance of every model. Nevertheless, to state this I had to rely on certain metrics. These are:
* F1
* Accuracy
* Balanced Accuracy
* Precision
* Average Precision
* ROC AUC
All models were subjected to 10-fold cross-validation, so the presented results are averages. Cross-validation allows a more accurate assessment of a model's performance and provides information such as the standard deviation of the scores, which lets us discuss how the model behaves in extreme cases.
### Random Forest
### Cross-validation of the models
```
scores_rf_default = cv_classifier(rf_default)
scores_rf_modified = cv_classifier(rf_modified)
mean_scores_rf_default = get_mean_scores(scores_rf_default)
mean_scores_rf_modified = get_mean_scores(scores_rf_modified)
```
**Random forest default**
```
print_mean_scores(mean_scores_rf_default,precision=2)
```
**Random forest modified**
```
print_mean_scores(mean_scores_rf_modified,precision=2)
```
## Extreme Gradient Boosting
### Cross-validation of the models
```
scores_xgb_default = cv_classifier(xgb_default)
scores_xgb_modified = cv_classifier(xgb_modified)
mean_scores_xgb_default = get_mean_scores(scores_xgb_default)
mean_scores_xgb_modified = get_mean_scores(scores_xgb_modified)
```
**XGBoost default**
```
print_mean_scores(mean_scores_xgb_default,precision=2)
```
**XGBoost modified**
```
print_mean_scores(mean_scores_xgb_modified,precision=2)
```
## Support Vector Machines
### Cross-validation of the models
**warning this takes a while**
```
scores_svc_default = cv_classifier(svc_default)
scores_svc_modified = cv_classifier(svc_modified)
mean_scores_svc_default = get_mean_scores(scores_svc_default)
mean_scores_svc_modified = get_mean_scores(scores_svc_modified)
```
**SVM default**
```
print_mean_scores(mean_scores_svc_default,precision=2)
```
**SVM modified**
```
print_mean_scores(mean_scores_svc_modified,precision=2)
```
## Summary
The random forest and xgboost results were quite similar and, frankly, rather weak. SVM did even worse, which probably won't surprise many people: it has a terribly long training time, over a minute per model, which compares poorly with the other algorithms, where 10 xgboost models were trained in 41 s. The random forest and xgboost results, on the other hand, are quite close. If I had to pick one of these three models for further tuning, I would definitely choose xgboost, among other reasons because training and testing would be much faster than with random forest, and with suitable parameters xgboost would probably outperform random forest.
Choosing the best metric is not as simple, and I would even claim that none of them deserves that title. Most metrics capture some property of the model in a non-trivial way. If I had to limit myself to one, I would probably pick ROC AUC: by using the True Positive Rate and the False Positive Rate it is quite intuitive (unlike many others) and at the same time it explains model performance well.
# Bonus part - Regression
### Data preparation
```
data2 = pd.read_csv('allegro-api-transactions.csv')
data2 = data2.drop(['lp','date'], axis = 1)
data2.head()
```
The data are almost ready for training; we only need to clean up `it_location`, where duplicates such as *Warszawa* and *warszawa* can appear, and then encode the categorical variables.
```
data2.it_location = data2.it_location.str.lower()
data2.head()
encoding_columns = ['categories','seller','it_location','main_category']
```
## Encoding categorical variables
```
import category_encoders
from sklearn.preprocessing import OneHotEncoder
```
### Data split
I will not perform the standard train/test split, because later in the document I will use cross-validation to assess the effectiveness of the encodings. I want to point out that probably the best approach here would be to expand the `categories` column into 26 binary (0/1) columns, but that would significantly increase the size of the data. For exactly the same reason I will not use one-hot encoding, and will instead rely on encodings that do not increase the data size.
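To back up the claim about one-hot encoding blowing up the data size, a quick added check of the cardinality of the categorical columns (using `data2` and `encoding_columns` defined above) can be run:
```
# number of distinct values per categorical column; one-hot encoding would
# add roughly this many binary columns for each of them
data2[encoding_columns].nunique()
```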
```
X = data2.drop('price', axis = 1)
y = data2.price
```
## Target encoding
```
te = category_encoders.target_encoder.TargetEncoder(cols=encoding_columns)
target_encoded = te.fit_transform(X,y)
target_encoded
```
## James-Stein Encoding
```
js = category_encoders.james_stein.JamesSteinEncoder(cols = encoding_columns)
encoded_js = js.fit_transform(X,y)
encoded_js
```
## Cat Boost Encoding
```
cb = category_encoders.cat_boost.CatBoostEncoder(cols = encoding_columns)
encoded_cb = cb.fit_transform(X,y)
encoded_cb
```
## Testing
```
from sklearn.metrics import r2_score, mean_squared_error
from sklearn import linear_model
def cv_encoding(model, kfolds=10, X=None, y=None):
    # X and y must be provided explicitly by the caller
    start_time = time()
    scores = {}
    scores["r2_score"] = []
    scores['RMSE'] = []
    # Standard k-fold split (no shuffling)
    cv = KFold(n_splits=kfolds, shuffle=False)
    for i, (train, test) in enumerate(cv.split(X, y)):
        IPython.display.clear_output()
        print(f"Model {i+1}/{kfolds}")
        # Training model
        model.fit(X.iloc[train, ], y.iloc[train])
        # Testing model
        prediction = model.predict(X.iloc[test, ])
        # Calculating and saving scores (square root of MSE so the metric really is RMSE)
        scores['r2_score'].append(r2_score(y.iloc[test], prediction))
        scores['RMSE'].append(np.sqrt(mean_squared_error(y.iloc[test], prediction)))
    IPython.display.clear_output()
    print(f"Crossvalidation on {kfolds} folds done in {round((time()-start_time),2)}s")
    return scores
```
## Measuring encoding effectiveness
I decided to use the `Lasso` linear regression model. Initially I wanted to use `Elastic Net`, but it turned out that the variables are not strongly correlated with one another, and that was supposed to be the main reason for using it.
```
corr=data2.corr()
fig, ax=plt.subplots(figsize=(9,6))
ax=sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns, annot=True, cmap="PiYG", center=0, vmin=-1, vmax=1)
ax.set_title('Variable correlations')
plt.show();
```
### Choosing the linear model
I define it here because I will use it repeatedly for cross-validation in the later parts of the document.
```
lasso = linear_model.Lasso()
```
## Target encoding results
```
target_encoding_scores = cv_encoding(model = lasso,kfolds=20, X = target_encoded, y = y)
target_encoding_scores_mean = get_mean_scores(target_encoding_scores)
target_encoding_scores_mean
```
## James-Stein encoding results
```
js_encoding_scores = cv_encoding(lasso, 20, encoded_js, y)
js_encoding_scores_mean = get_mean_scores(js_encoding_scores)
js_encoding_scores_mean
```
## Cat Boost encoding results
```
cb_encoding_scores = cv_encoding(lasso, 20 ,encoded_cb, y)
cb_encoding_scores_mean = get_mean_scores(cb_encoding_scores)
cb_encoding_scores_mean
```
## Comparison
### r2 metric results
```
r2_data = [target_encoding_scores["r2_score"], js_encoding_scores["r2_score"], cb_encoding_scores["r2_score"]]
labels = ["Target", " James-Stein", "Cat Boost"]
fig, ax = plt.subplots(figsize = (12,9))
ax.set_title('r2 scores')
ax.boxplot(r2_data, labels = labels)
plt.show()
```
**Comment**
It is clear that the James-Stein encoding allowed the model to fit the data much better, but this creates a potential problem of overfitting. It would be worth checking whether this kind of encoding does not lead to much stronger overfitting.
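One simple way to check this, added here as a sketch rather than part of the original analysis, is to fit the encoder on a training split only and compare train and test scores, so the target information used by the James-Stein encoding cannot leak into the evaluation:
```
from sklearn.model_selection import train_test_split

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# fit the encoder on the training part only to avoid target leakage
js_check = category_encoders.james_stein.JamesSteinEncoder(cols=encoding_columns)
X_tr_enc = js_check.fit_transform(X_tr, y_tr)
X_te_enc = js_check.transform(X_te)

lasso_check = linear_model.Lasso()
lasso_check.fit(X_tr_enc, y_tr)
print("train r2:", r2_score(y_tr, lasso_check.predict(X_tr_enc)))
print("test r2: ", r2_score(y_te, lasso_check.predict(X_te_enc)))
```
A large gap between the two scores would confirm the overfitting suspicion.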
### RMSE metric results
```
rmse_data = [target_encoding_scores["RMSE"], js_encoding_scores["RMSE"], cb_encoding_scores["RMSE"]]
labels = ["Target", " James-Stein", "Cat Boost"]
fig, ax = plt.subplots(figsize = (12,9))
ax.set_title('RMSE scores (log scale)')
ax.set_yscale('log')
ax.boxplot(rmse_data, labels = labels)
plt.show()
```
**Comment**
James-Stein encoding performed best, which is no surprise since r2 already indicated a better model fit.
## Summary
James-Stein encoding achieves much better results than the other two encodings I chose. As long as there is no overfitting, it is certainly the encoding I would pick from this group. It is still worth considering one-hot encoding, which seems very natural here, but it comes at the cost of increasing the data size several times over.
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Challenge Notebook
## Problem: Implement depth-first traversals (in-order, pre-order, post-order) on a binary tree.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Can we assume we already have a Node class with an insert method?
* Yes
* What should we do with each node when we process it?
* Call an input method `visit_func` on the node
* Can we assume this fits in memory?
* Yes
## Test Cases
### In-Order Traversal
* 5, 2, 8, 1, 3 -> 1, 2, 3, 5, 8
* 1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5
### Pre-Order Traversal
* 5, 2, 8, 1, 3 -> 5, 2, 1, 3, 8
* 1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5
### Post-Order Traversal
* 5, 2, 8, 1, 3 -> 1, 3, 2, 8, 5
* 1, 2, 3, 4, 5 -> 5, 4, 3, 2, 1
## Algorithm
Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_dfs/dfs_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
## Code
```
# %load ../bst/bst.py
class Node(object):
def __init__(self, data):
self.data = data
self.left = None
self.right = None
self.parent = None
def __repr__(self):
return str(self.data)
class Bst(object):
def __init__(self, root=None):
self.root = root
def insert(self, data):
if data is None:
raise TypeError('data cannot be None')
if self.root is None:
self.root = Node(data)
return self.root
else:
return self._insert(self.root, data)
def _insert(self, node, data):
if node is None:
return Node(data)
if data <= node.data:
if node.left is None:
node.left = self._insert(node.left, data)
node.left.parent = node
return node.left
else:
return self._insert(node.left, data)
else:
if node.right is None:
node.right = self._insert(node.right, data)
node.right.parent = node
return node.right
else:
return self._insert(node.right, data)
class BstDfs(Bst):
def in_order_traversal(self, node, visit_func):
if node is None:
return
self.in_order_traversal(node.left, visit_func)
visit_func(node)
self.in_order_traversal(node.right, visit_func)
def pre_order_traversal(self, node, visit_func):
if node is None:
return
visit_func(node)
self.pre_order_traversal(node.left, visit_func)
self.pre_order_traversal(node.right, visit_func)
def post_order_traversal(self,node, visit_func):
if node is None:
return
self.post_order_traversal(node.left, visit_func)
self.post_order_traversal(node.right, visit_func)
visit_func(node)
```
## Unit Test
```
%run ../utils/results.py
# %load test_dfs.py
from nose.tools import assert_equal
class TestDfs(object):
def __init__(self):
self.results = Results()
def test_dfs(self):
bst = BstDfs(Node(5))
bst.insert(2)
bst.insert(8)
bst.insert(1)
bst.insert(3)
bst.in_order_traversal(bst.root, self.results.add_result)
assert_equal(str(self.results), "[1, 2, 3, 5, 8]")
self.results.clear_results()
bst.pre_order_traversal(bst.root, self.results.add_result)
assert_equal(str(self.results), "[5, 2, 1, 3, 8]")
self.results.clear_results()
bst.post_order_traversal(bst.root, self.results.add_result)
assert_equal(str(self.results), "[1, 3, 2, 8, 5]")
self.results.clear_results()
bst = BstDfs(Node(1))
bst.insert(2)
bst.insert(3)
bst.insert(4)
bst.insert(5)
bst.in_order_traversal(bst.root, self.results.add_result)
assert_equal(str(self.results), "[1, 2, 3, 4, 5]")
self.results.clear_results()
bst.pre_order_traversal(bst.root, self.results.add_result)
assert_equal(str(self.results), "[1, 2, 3, 4, 5]")
self.results.clear_results()
bst.post_order_traversal(bst.root, self.results.add_result)
assert_equal(str(self.results), "[5, 4, 3, 2, 1]")
print('Success: test_dfs')
def main():
test = TestDfs()
test.test_dfs()
if __name__ == '__main__':
main()
```
## Solution Notebook
Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_dfs/dfs_solution.ipynb) for a discussion on algorithms and code solutions.
# Generative Adversarial Networks
Throughout most of this book, we've talked about how to make predictions.
In some form or another, we used deep neural networks to learn mappings from data points to labels.
This kind of learning is called discriminative learning,
as in, we'd like to be able to discriminate between photos of cats and photos of dogs.
Classifiers and regressors are both examples of discriminative learning.
And neural networks trained by backpropagation
have upended everything we thought we knew about discriminative learning
on large complicated datasets.
Classification accuracies on high-res images have gone from useless
to human-level (with some caveats) in just 5-6 years.
We'll spare you another spiel about all the other discriminative tasks
where deep neural networks do astoundingly well.
But there's more to machine learning than just solving discriminative tasks.
For example, given a large dataset, without any labels,
we might want to learn a model that concisely captures the characteristics of this data.
Given such a model, we could sample synthetic data points that resemble the distribution of the training data.
For example, given a large corpus of photographs of faces,
we might want to be able to generate a *new* photorealistic image
that looks like it might plausibly have come from the same dataset.
This kind of learning is called *generative modeling*.
Until recently, we had no method that could synthesize novel photorealistic images.
But the success of deep neural networks for discriminative learning opened up new possibilities.
One big trend over the last three years has been the application of discriminative deep nets
to overcome challenges in problems that we don't generally think of as supervised learning problems.
The recurrent neural network language models are one example of using a discriminative network (trained to predict the next character)
that once trained can act as a generative model.
In 2014, a young researcher named Ian Goodfellow introduced [Generative Adversarial Networks (GANs)](https://arxiv.org/abs/1406.2661), a clever new way to leverage the power of discriminative models to get good generative models.
GANs made quite a splash so it's quite likely you've seen the images before.
For instance, using a GAN you can create fake images of bedrooms, as done by [Radford et al. in 2015](https://arxiv.org/pdf/1511.06434.pdf) and depicted below.

At their heart, GANs rely on the idea that a data generator is good
if we cannot tell fake data apart from real data.
In statistics, this is called a two-sample test - a test to answer the question whether datasets $X = \{x_1, \ldots x_n\}$ and $X' = \{x_1', \ldots x_n'\}$ were drawn from the same distribution.
The main difference between most statistics papers and GANs is that the latter use this idea in a constructive way.
In other words, rather than just training a model to say 'hey, these two datasets don't look like they came from the same distribution', they use the two-sample test to provide training signal to a generative model.
This allows us to improve the data generator until it generates something that resembles the real data.
At the very least, it needs to fool the classifier, even when that classifier is a state-of-the-art deep neural network.
As you can see, there are two pieces to GANs - first off, we need a device (say, a deep network but it really could be anything, such as a game rendering engine) that might potentially be able to generate data that looks just like the real thing.
If we are dealing with images, this needs to generate images.
If we're dealing with speech, it needs to generate audio sequences, and so on.
We call this the *generator network*. The second component is the *discriminator network*.
It attempts to distinguish fake and real data from each other.
Both networks are in competition with each other.
The generator network attempts to fool the discriminator network. At that point, the discriminator network adapts to the new fake data. This information, in turn is used to improve the generator network, and so on.
**Generator**
* Draw some latent variable $z$ from a source of randomness, e.g. a normal distribution $z \sim \mathcal{N}(0,1)$.
* Apply a function $G$ with parameters $w$ such that we get $x' = G(z,w)$
* Compute the gradient with respect to $w$ to minimize $\log p(y = \mathrm{fake}|x')$
**Discriminator**
* Improve the accuracy of a binary classifier $f$, i.e. maximize $\log p(y=\mathrm{fake}|x')$ and $\log p(y=\mathrm{true}|x)$ for fake and real data respectively.
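Putting the two objectives together, this is the standard minimax formulation from the original GAN paper (added here for reference):

$$ \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim \mathcal{N}(0,1)}\left[\log\left(1 - D(G(z))\right)\right] $$

where $D(x)$ denotes the discriminator's estimated probability that $x$ is real.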

In short, there are two optimization problems running simultaneously, and the optimization terminates if a stalemate has been reached. There are lots of further tricks and details on how to modify this basic setting. For instance, we could try solving this problem in the presence of side information. This leads to cGAN, i.e. conditional Generative Adversarial Networks. We can change how we detect whether real and fake data look the same. This leads to wGAN (Wasserstein GAN), kernel-inspired GANs and lots of other settings, or we could change how closely we look at the objects. E.g. fake images might look real at the texture level but not so at the larger level, or vice versa.
Many of the applications are in the context of images. Since this takes too much time to solve in a Jupyter notebook on a laptop, we're going to content ourselves with fitting a much simpler distribution. We will illustrate what happens if we use GANs to build the world's most inefficient estimator of parameters for a Gaussian. Let's get started.
```
from __future__ import print_function
import matplotlib as mpl
from matplotlib import pyplot as plt
import mxnet as mx
from mxnet import gluon, autograd, nd
from mxnet.gluon import nn
import numpy as np
ctx = mx.cpu()
```
## Generate some 'real' data
Since this is going to be the world's lamest example, we simply generate data drawn from a Gaussian. And let's also set a context where we'll do most of the computation.
```
X = nd.random_normal(shape=(1000, 2))
A = nd.array([[1, 2], [-0.1, 0.5]])
b = nd.array([1, 2])
X = nd.dot(X, A) + b
Y = nd.ones(shape=(1000, 1))
# and stick them into an iterator
batch_size = 4
train_data = mx.io.NDArrayIter(X, Y, batch_size, shuffle=True)
```
Let's see what we got. This should be a Gaussian shifted in some rather arbitrary way with mean $b$ and covariance matrix $A^\top A$.
```
plt.scatter(X[:,0].asnumpy(), X[:,1].asnumpy())
plt.show()
print("The covariance matrix is")
print(nd.dot(A.T, A))
```
## Defining the networks
Next we need to define how to fake data. Our generator network will be the simplest network possible - a single layer linear model. This is since we'll be driving that linear network with a Gaussian data generator. Hence, it literally only needs to learn the parameters to fake things perfectly. For the discriminator we will be a bit more discriminating: we will use an MLP with 3 layers to make things a bit more interesting.
The cool thing here is that we have *two* different networks, each of them with their own gradients, optimizers, losses, etc. that we can optimize as we please.
```
# build the generator
netG = nn.Sequential()
with netG.name_scope():
netG.add(nn.Dense(2))
# build the discriminator (with 5 and 3 hidden units respectively)
netD = nn.Sequential()
with netD.name_scope():
netD.add(nn.Dense(5, activation='tanh'))
netD.add(nn.Dense(3, activation='tanh'))
netD.add(nn.Dense(2))
# loss
loss = gluon.loss.SoftmaxCrossEntropyLoss()
# initialize the generator and the discriminator
netG.initialize(mx.init.Normal(0.02), ctx=ctx)
netD.initialize(mx.init.Normal(0.02), ctx=ctx)
# trainer for the generator and the discriminator
trainerG = gluon.Trainer(netG.collect_params(), 'adam', {'learning_rate': 0.01})
trainerD = gluon.Trainer(netD.collect_params(), 'adam', {'learning_rate': 0.05})
```
## Setting up the training loop
We are going to iterate over the data a few times. To make life simpler we need a few variables
```
real_label = mx.nd.ones((batch_size,), ctx=ctx)
fake_label = mx.nd.zeros((batch_size,), ctx=ctx)
metric = mx.metric.Accuracy()
# set up logging
from datetime import datetime
import os
import time
```
## Training loop
```
stamp = datetime.now().strftime('%Y_%m_%d-%H_%M')
for epoch in range(10):
tic = time.time()
train_data.reset()
for i, batch in enumerate(train_data):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
# train with real_t
data = batch.data[0].as_in_context(ctx)
noise = nd.random_normal(shape=(batch_size, 2), ctx=ctx)
with autograd.record():
real_output = netD(data)
errD_real = loss(real_output, real_label)
fake = netG(noise)
fake_output = netD(fake.detach())
errD_fake = loss(fake_output, fake_label)
errD = errD_real + errD_fake
errD.backward()
trainerD.step(batch_size)
metric.update([real_label,], [real_output,])
metric.update([fake_label,], [fake_output,])
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
with autograd.record():
output = netD(fake)
errG = loss(output, real_label)
errG.backward()
trainerG.step(batch_size)
name, acc = metric.get()
metric.reset()
print('\nbinary training acc at epoch %d: %s=%f' % (epoch, name, acc))
print('time: %f' % (time.time() - tic))
noise = nd.random_normal(shape=(100, 2), ctx=ctx)
fake = netG(noise)
plt.scatter(X[:,0].asnumpy(), X[:,1].asnumpy())
plt.scatter(fake[:,0].asnumpy(), fake[:,1].asnumpy())
plt.show()
```
## Checking the outcome
Let's now generate some fake data and check whether it looks real.
```
noise = mx.nd.random_normal(shape=(100, 2), ctx=ctx)
fake = netG(noise)
plt.scatter(X[:,0].asnumpy(), X[:,1].asnumpy())
plt.scatter(fake[:,0].asnumpy(), fake[:,1].asnumpy())
plt.show()
```
## Conclusion
A word of caution here - to get this to converge properly, we needed to adjust the learning rates *very carefully*. And for Gaussians, the result is rather mediocre - a simple mean and covariance estimator would have worked *much better*. However, whenever we don't have a really good idea of what the distribution should be, this is a very good way of faking it to the best of our abilities. Note that a lot depends on the power of the discriminating network. If it is weak, the fake can be very different from the truth. E.g. in our case it had trouble picking up anything along the axis of reduced variance.
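For comparison, the "much better" closed-form estimator mentioned above is just the sample mean and covariance; a short added sketch, reusing the NDArray `X` from earlier:
```
# maximum likelihood estimates of the Gaussian parameters, for comparison with the GAN
X_np = X.asnumpy()
print("estimated mean:", X_np.mean(axis=0))                    # should be close to b
print("estimated covariance:\n", np.cov(X_np, rowvar=False))   # should be close to A^T A
```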
In summary, this isn't exactly easy to set and forget. One nice resource for dirty practitioner's knowledge is [Soumith Chintala's handy list of tricks](https://github.com/soumith/ganhacks) for how to babysit GANs.
For whinges or inquiries, [open an issue on GitHub.](https://github.com/zackchase/mxnet-the-straight-dope)
*This notebook is part of course materials for CS 345: Machine Learning Foundations and Practice at Colorado State University.
Original versions were created by Asa Ben-Hur.
The content is available [on GitHub](https://github.com/asabenhur/CS345).*
*The text is released under the [CC BY-SA license](https://creativecommons.org/licenses/by-sa/4.0/), and code is released under the [MIT license](https://opensource.org/licenses/MIT).*
<img style="padding: 10px; float:right;" alt="CC-BY-SA icon.svg in public domain" src="https://upload.wikimedia.org/wikipedia/commons/d/d0/CC-BY-SA_icon.svg" width="125">
<a href="https://colab.research.google.com/github//asabenhur/CS345/blob/master/notebooks/module05_01_cross_validation.ipynb">
<img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%autosave 0
```
# Evaluating classifiers: cross validation
### Learning curves
Intuitively, the more data we have available, the more accurate our classifiers become. To demonstrate this, let's read in some data and evaluate a k-nearest neighbor classifier on a fixed test set with an increasing number of training examples. The resulting curve of accuracy as a function of the number of training examples is called a **learning curve**.
```
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
X, y = load_digits(return_X_y=True)
training_sizes = [20, 40, 100, 200, 400, 600, 800, 1000, 1200]
# note the use of the stratify keyword: it makes it so that each
# class is equally represented in both train and test set
X_full_train, X_test, y_full_train, y_test = train_test_split(
X, y, test_size = len(y)-max(training_sizes),
stratify=y, random_state=1)
accuracy = []
for training_size in training_sizes :
X_train,_ , y_train,_ = train_test_split(
X_full_train, y_full_train, test_size =
len(y_full_train)-training_size+10, stratify=y_full_train)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
accuracy.append(np.sum((y_pred==y_test))/len(y_test))
plt.figure(figsize=(6,4))
plt.plot(training_sizes, accuracy, 'ob')
plt.xlabel('training set size')
plt.ylabel('accuracy')
plt.ylim((0.5,1));
```
It's also instructive to look at the numbers themselves:
```
print ("# training examples\t accuracy")
for i in range(len(accuracy)) :
print ("\t{:d}\t\t {:f}".format(training_sizes[i], accuracy[i]))
```
### Exercise
* What can you conclude from this plot?
* Why would you want to compute a learning curve on your data?
### Making better use of our data with cross validation
The discussion above demonstrates that it is best to have as large a training set as possible. We also need a large enough test set, so that the accuracy estimates are reliable. How do we balance these two contradictory requirements? Cross-validation provides us with a more effective way to make use of our data. Here it is:
**Cross validation**
* Randomly partition the data into $k$ subsets ("folds").
* Set one fold aside for evaluation, train a model on the remaining $k-1$ folds, and evaluate it on the held-out fold.
* Repeat until each fold has been used for evaluation
* Compute accuracy by averaging over the accuracy estimates generated for each fold.
Here is an illustration of 8-fold cross validation:
<img style="padding: 10px; float:left;" alt="cross-validation by MBanuelos22 CC BY-SA 4.0" src="https://upload.wikimedia.org/wikipedia/commons/c/c7/LOOCV.gif" width="600">
As you can see, this procedure is more expensive than dividing your data into train and test set. When dealing with relatively small datasets, which is when you want to use this procedure, this won't be an issue.
Typically cross-validation is used with the number of folds being in the range of 5-10. An extreme case is when the number of folds equals the number of training examples. This special case is called *leave-one-out cross-validation*.
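Before leaning on scikit-learn's helpers below, here is a minimal added sketch of the procedure described above, written out by hand (it re-loads the digits data so it is self-contained):
```
from sklearn.datasets import load_digits
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

X_d, y_d = load_digits(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

fold_accuracies = []
for train_idx, test_idx in cv.split(X_d, y_d):
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(X_d[train_idx], y_d[train_idx])        # train on the k-1 remaining folds
    y_pred = clf.predict(X_d[test_idx])            # evaluate on the held-out fold
    fold_accuracies.append(np.mean(y_pred == y_d[test_idx]))
print(np.mean(fold_accuracies))                    # average over the folds
```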
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.model_selection import cross_val_score
from sklearn import metrics
```
Let's use the scikit-learn breast cancer dataset to demonstrate the use of cross-validation.
```
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
```
A scikit-learn data object is a container object whose interesting attributes are:
* ‘data’, the data to learn,
* ‘target’, the classification labels,
* ‘target_names’, the meaning of the labels,
* ‘feature_names’, the meaning of the features, and
* ‘DESCR’, the full description of the dataset.
```
X = data.data
y = data.target
print('number of examples ', len(y))
print('number of features ', len(X[0]))
print(data.target_names)
print(data.feature_names)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
random_state=0)
classifier = KNeighborsClassifier(n_neighbors=3)
#classifier = LogisticRegression()
_ = classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
```
Let's compute the accuracy of our predictions:
```
np.mean(y_pred==y_test)
```
We can do the same using scikit-learn:
```
metrics.accuracy_score(y_test, y_pred)
```
Now let's compute accuracy using [cross_val_score](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) instead:
```
accuracy = cross_val_score(classifier, X, y, cv=5,
scoring='accuracy')
print(accuracy)
```
This yields an array containing the accuracy values for each fold.
When reporting your results, you will typically show the mean:
```
np.mean(accuracy)
```
The arguments of `cross_val_score`:
* A classifier (anything that satisfies the scikit-learn classifier API)
* data (features/labels)
* `cv` : an integer that specifies the number of folds (can be used in more sophisticated ways as we will see below).
* `scoring`: this determines which accuracy measure is evaluated for each fold. Here's a link to the [list of available measures](https://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter) in scikit-learn.
You can also compute other accuracy measures. *Balanced accuracy*, for example, is appropriate when the data is unbalanced (e.g. when one class contains a much larger number of examples than the other classes in the data).
```
accuracy = cross_val_score(classifier, X, y, cv=5,
scoring='balanced_accuracy')
np.mean(accuracy)
```
`cross_val_score` is somewhat limited, in that it simply returns a list of accuracy scores. In practice, we often want to have more information about what happened during training, and also to compute multiple accuracy measures.
`cross_validate` will provide you with that information:
```
results = cross_validate(classifier, X, y, cv=5,
scoring='accuracy', return_estimator=True)
print(results)
```
The object returned by `cross_validate` is a Python dictionary as the output suggests. To extract a specific piece of data from this object, simply access the dictionary with the appropriate key:
```
results['test_score']
```
If you would like to know the predictions made for each training example during cross-validation use [cross_val_predict](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_predict.html) instead:
```
from sklearn.model_selection import cross_val_predict
y_pred = cross_val_predict(classifier, X, y, cv=5)
metrics.accuracy_score(y, y_pred)
```
The above way of performing cross-validation doesn't always give us enough control over the process: we usually want our machine learning experiments to be reproducible, and to be able to use the same cross-validation splits with multiple algorithms. The scikit-learn `KFold` and `StratifiedKFold` cross-validation generators are the way to achieve that.
`KFold` simply chooses a random subset of examples for each fold. This strategy can lead to cross-validation folds in which the classes are not well-represented as the following toy example demonstrates:
```
from sklearn.model_selection import StratifiedKFold, KFold
X_toy = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9,10], [11, 12]])
y_toy = np.array([0, 0, 1, 1, 1, 1])
cv = KFold(n_splits=2, random_state=3, shuffle=True)
for train_idx, test_idx in cv.split(X_toy, y_toy):
print("train:", train_idx, "test:", test_idx)
X_train, X_test = X_toy[train_idx], X_toy[test_idx]
y_train, y_test = y_toy[train_idx], y_toy[test_idx]
print(y_train)
```
`StratifiedKFold` addresses this issue by making sure that each class is represented in each fold in proportion to its overall fraction in the data. This is particularly important when one or more of the classes have few examples.
`StratifiedKFold` and `KFold` generate folds that can be used in conjunction with the cross-validation methods we saw above.
As an example, we will demonstrate the use of `StratifiedKFold` with `cross_val_score` on the breast cancer dataset:
```
cv = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)
accuracy = cross_val_score(classifier, X, y, cv=cv,
scoring='accuracy')
np.mean(accuracy)
```
For classification problems, `StratifiedKFold` is the preferred strategy. However, for regression problems `KFold` is the way to go.
#### Question
Why is `KFold` used in regression problems rather than `StratifiedKFold`?
To clarify the distinction between the different methods of generating cross-validation folds and their different parameters let's look at the following figures:
```
# the code for the figure is adapted from
# https://scikit-learn.org/stable/auto_examples/model_selection/plot_cv_indices.html
np.random.seed(42)
cmap_data = plt.cm.Paired
cmap_cv = plt.cm.coolwarm
n_folds = 4
# Generate the data
X = np.random.randn(100, 10)
# generate labels - classes 0,1,2 and 10,30,60 examples, respectively
y = np.array([0] * 10 + [1] * 30 + [2] * 60)
def plot_cv_indices(cv, X, y, ax, n_folds):
"""plot the indices of a cross-validation object."""
# Generate the training/testing visualizations for each CV split
for ii, (tr, tt) in enumerate(cv.split(X=X, y=y)):
# Fill in indices with the training/test groups
indices = np.zeros(len(X))
indices[tt] = 1
# Visualize the results
ax.scatter(range(len(indices)), [ii + .5] * len(indices),
c=indices, marker='_', lw=15, cmap=cmap_cv,
vmin=-.2, vmax=1.2)
    # Plot the data classes at the end
ax.scatter(range(len(X)), [ii + 1.5] * len(X), c=y, marker='_', lw=15, cmap=cmap_data)
# Formatting
yticklabels = list(range(n_folds)) + ['class']
    ax.set(yticks=np.arange(n_folds+1) + .5, yticklabels=yticklabels,
xlabel='index', ylabel="CV fold",
ylim=[n_folds+1.2, -.2], xlim=[0, 100])
ax.set_title('{}'.format(type(cv).__name__), fontsize=15)
return ax
```
Let's visualize the results of using `KFold` for fold generation:
```
fig, ax = plt.subplots()
cv = KFold(n_folds)
plot_cv_indices(cv, X, y, ax, n_folds);
```
As you can see, this naive way of using `KFold` can lead to highly undesirable splits into cross-validation folds.
Using `StratifiedKFold` addresses this to some extent:
```
fig, ax = plt.subplots()
cv = StratifiedKFold(n_folds)
plot_cv_indices(cv, X, y, ax, n_folds);
```
Using `StratifiedKFold` with shuffling of the examples is the preferred way of splitting the data into folds:
```
fig, ax = plt.subplots()
cv = StratifiedKFold(n_folds, shuffle=True)
plot_cv_indices(cv, X, y, ax, n_folds);
```
### Question
Consider the task of digitizing handwritten text (aka optical character recognition, or OCR). For each letter in the alphabet you have multiple labeled examples generated by the same writer. How would this setup affect the way you divide your examples into training and test sets, or when performing cross-validation?
### Summary and Discussion
In this notebook we discussed cross-validation as a more effective way to make use of limited amounts of data compared to the strategy of splitting data into train and test sets. For very large datasets where training is time consuming you might still opt for evaluation on a single test set.
# Lecture 3.3: Anomaly Detection
[**Lecture Slides**](https://docs.google.com/presentation/d/1_0Z5Pc5yHA8MyEBE8Fedq44a-DcNPoQM1WhJN93p-TI/edit?usp=sharing)
This lecture, we are going to use gaussian distributions to detect anomalies in our emoji faces dataset
**Learning goals:**
- Introduce an anomaly detection problem
- Implement Gaussian distribution anomaly detection for images
- Debug the optimisation of a learning algorithm
- Discuss the imperfection of learning algorithms
- Acknowledge other outlier detection methods
## 1. Introduction
We have an `emoji_faces` dataset of all our favourite emojis. However, Skynet hates their friendly expressiveness, and wants to destroy emojis forever! 🙀 It sent _terminator robots_ from the future to invade our dataset. We must act fast, and detect them amongst the emojis to prevent the catastrophe.
Our challenge here, is that we don't watch many movies, so we don't have a clear idea of what those _terminators_ look like. 🤖 All we know, is that they look very different compared to emojis, and that only a handful managed to infiltrate our dataset.
This is a typical scenario of _anomaly detection_. We would like to identify rare examples that differ from our "normal" data points. We choose to use a Gaussian Distribution to model this "normality" and detect the killer robots.
## 2. Data Munging
First let's load the images using [pillow](https://pillow.readthedocs.io/en/stable/), like in lecture 2.5:
```
from PIL import Image
import glob
paths = glob.glob('emoji_faces/*.png')
images = [Image.open(path) for path in paths]
len(images)
```
We have 134 emoji faces, including a few terminator robots. We'll again be using the [sklearn](https://scikit-learn.org/) library to create our model. The interface is usually the same, and for gaussian anomaly detection, sklearn again expects a NumPy matrix where the rows are our images and the columns are the pixels. So we can apply the same transformations as notebook 3.2:
```
import numpy as np
arrays = [np.asarray(im) for im in images]
# 64 * 64 = 4096
vectors = [arr.reshape((4096,)) for arr in arrays]
data = np.stack(vectors)
```
## 3. Training
Next, we will create an [`EllipticEnvelope`](https://scikit-learn.org/stable/modules/generated/sklearn.covariance.EllipticEnvelope.html) object. This will fit a multi-variate gaussian distribution to our data. It then allows us to pick a threshold to define an _ellipsoid_ decision boundary, and detect outliers.
Remember that we are using a _learning_ algorithm, which must therefore be _trained_ before it can be used. This is why we'll use the `.fit()` method first, before calling `.predict()`:
```
from sklearn.covariance import EllipticEnvelope
cov = EllipticEnvelope(random_state=0).fit(data)
```
😰 What's happening? Why is it stuck? Have the killer robots already taken over?
No need to panic, this kind of hiccup is very common when dealing with machine learning algorithms. We can kill the process (before it fries our laptop fan) by clicking the `stop` button ⬛️ in the notebook toolbar.
Most learning algorithms are based around an _optimisation_ procedure. This step is often iterative and stochastic, i.e it tries its statistical best to maximise the learning in incremental steps.
This process isn't fail proof:
* it can dramatically stop because of out of memory errors, or overflow errors 💥
* it can get stuck, e.g when the optimisation is too slow 🐌
* it can fail silently, and return wrong results 💩
ℹ️ We will encounter many of these failures throughout our ML experiments, so knowing how to overcome them is a part of the data scientist skillset.
Let's go back to our killer robot detection: the model fitting got _stuck_ , which suggests that something about our data was too much to handle. We find the following "notes" in the [official documentation](https://scikit-learn.org/stable/modules/generated/sklearn.covariance.EllipticEnvelope.html#sklearn.covariance.EllipticEnvelope):
> Outlier detection from covariance estimation may break or not perform well in high-dimensional settings.
We recall that our images are $64 \times 64$ pixels, so $4096$ dimensions.... that's a lot. It seems a good candidate to explain why our multivariate gaussian distribution failed to fit our dataset. If only there was a way to reduce the dimensions of our data... 😏
Let's apply PCA to reduce the number of dimensions of our dataset. Our emoji faces dataset is smaller than the full emoji dataset, so 40 dimensions should suffice to explain its variance:
```
from sklearn.decomposition import PCA
pca = PCA(n_components=40)
pca.fit(data)
components = pca.transform(data)
components.shape
```
💪 Visualise the eigenvector images of our PCA model. You can use the code from lecture 3.2!
🧠 Can you explain what those eigenvector images represent? Why are they different from those of the full emoji dataset?
Fantastic, we've managed to reduce the number of dimensions by 99%! Hopefully that should be enough to make our gaussian distribution fitting happy. Let's try again with the _principal components_ instead of the original data:
```
cov = EllipticEnvelope(random_state=0).fit(components)
```
😅 that was fast!
## 4. Prediction
We can now use our fitted gaussian distribution to detect the outliers in our `data`. For this, we use the `.predict()` method:
```
y = cov.predict(components)
y
```
`y` is our vector of predictions, where $1$ is a normal data point, and $-1$ is an anomaly. We can therefore iterate through our original `arrays` to find outliers:
```
outliers = []
for i in range(0, len(arrays)):
if y[i] == -1:
outliers.append(arrays[i])
len(outliers)
import matplotlib.pyplot as plt
fig, axs = plt.subplots(dpi=150, nrows=2, ncols=7)
for outlier, ax in zip(outliers, axs.flatten()):
ax.imshow(outlier, cmap='gray', vmin=0, vmax=255)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
```
THERE'S OUR TERMINATORS! 🤖 We can count 5 of them in total. Notice how some real emoji faces were also detected as outliers. This is perhaps a sign that we should change our _threshold_ , to make the ellipsoid decision boundary smaller.
In fact, we didn't even specify a threshold before, we just used the default value of `contamination=0.1` in the [`EllipticEnvelope`](https://scikit-learn.org/stable/modules/generated/sklearn.covariance.EllipticEnvelope.html) class. This represents our estimation of the proportion of data points which are outliers. Since it looks like we detected double the amount of actual anomalies, let's try again with `contamination=0.05`:
```
cov = EllipticEnvelope(random_state=0, contamination=0.05).fit(components)
y = cov.predict(components)
outliers = []
for i in range(0, len(arrays)):
if y[i] == -1:
outliers.append(arrays[i])
fig, axs = plt.subplots(dpi=150, nrows=1, ncols=7)
for outlier, ax in zip(outliers, axs.flatten()):
ax.imshow(outlier, cmap='gray', vmin=0, vmax=255)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
```
Better! `contamination=0.05` was a better choice of threshold, and we assessed this through _manual inspection_. This means we went through the results and used our human jugement to change the value of this _hyperparameter_.
ℹ️ Notice how our outlier detection is not _perfect_. Some emojis were also erroneously detected as anomalous killer robots. This can seem like a problem, or a sign that our model was malfunctioning. But, quite the contrary, _imperfection_ is a core aspect of all _learning_ algorithms. Instead of seeing the glass half-empty and looking at the outlier detector's mistakes, we should reflect on the task itself. It would have been almost impossible to detect those killer robot images using rule-based algorithms, and our model _accuracy_ was good _enough_ to save the emojis from Skynet. As data scientists, our goal is to make models which are accurate _enough_ to be useful, not to aim for perfect scores. We will revisit these topics later in the course when discussing Machine Learning Engineering 🛠
## 5. Analysis
We have detected the robot intruders and saved the emojis from a jealous AI from the future, all is good! We still want to better understand how anomaly detection defeated Skynet. For this, we would like to leverage our shiny new data visualization skills. Representing our dataset in space would allow us to identify its structures and hopefully understand how our gaussian distribution model identified terminators as "abnormal".
Our data is high dimensional, so we can use our trusted PCA once again to project it down to 2 dimensions. We understand that this will lose a lot of the variance of our data, but the results were still somewhat interpretable with the full emoji dataset, so let's go!
```
# Dimesionality reduction to 2
pca_model = PCA(n_components=2)
pca_model.fit(data) # fit the model
T = pca_model.transform(data) # transform the 'normalized model'
plt.scatter(T[:, 0], T[:, 1],
# use the predictions as color
c=y,
marker='o',
alpha=0.4
)
plt.title('Anomaly detection of the emoji faces dataset with PCA dimensionality reduction');
```
We can notice that most of the outliers are clearly _separable_ from the bulk of the dataset, even with only 2 principal components. One outlier is very much within the main cluster however. This could be explained by the dimensionality reduction, i.e that this point is separated from the cluster in other dimensions, or by the fact our threshold might be too permissive.
We can check this by displaying the images directly on the scatter plot:
```
from matplotlib import offsetbox
def plot_components(data, model, images=None, ax=None,
thumb_frac=0.05, cmap='gray'):
ax = ax or plt.gca()
proj = model.fit_transform(data)
ax.plot(proj[:, 0], proj[:, 1], '.k')
if images is not None:
min_dist_2 = (thumb_frac * max(proj.max(0) - proj.min(0))) ** 2
shown_images = np.array([2 * proj.max(0)])
for i in range(data.shape[0]):
dist = np.sum((proj[i] - shown_images) ** 2, 1)
if np.min(dist) < min_dist_2:
# don't show points that are too close
continue
shown_images = np.vstack([shown_images, proj[i]])
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(images[i], cmap=cmap),
proj[i])
ax.add_artist(imagebox)
small_images = [im[::2, ::2] for im in arrays]
fig, ax = plt.subplots(figsize=(10, 10))
plot_components(data,
model=PCA(n_components=2),
images=small_images, thumb_frac=0.02)
plt.title('Anomaly detection of the emoji faces dataset with PCA dimensionality reduction');
```
We could probably have reduced the value of `contamination` further, since we can see how the killer robots are clearly "abnormal" with this visualisation. We also have a "feel" of how our gaussian distribution model could successfully detect them as outliers. Although remember that all of the modeling magic happens in 40-dimensional space!
🧠🧠 Can you explain why it is not very useful to display the ellipsoid decision boundary of our anomaly detection model on this graph?
## 6. More Anomaly Detection
Anomaly detection is an active field in ML research, which combines supervised, unsupervised, non-linear, Bayesian, ... a whole bunch of methods! Each solution will have its pros and cons, and developing a production level outlier detection system will require empirically evaluating and comparing them. For a breakdown of the methods available in sklearn, check out this excellent [blogpost](https://sdsawtelle.github.io/blog/output/week9-anomaly-andrew-ng-machine-learning-with-python.html), or the [official documentation](https://scikit-learn.org/stable/modules/outlier_detection.html). For an in-depth view of modern anomaly detection, watch this [video](https://youtu.be/LRqX5uO5StA). And for everything else, feel free to experiment with this dataset or any other. Good luck on finding all the killer robots!
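As a small taste of one alternative, here is an added sketch that swaps in `IsolationForest` for the elliptic envelope on the same PCA components (the `contamination` value is the same assumption as before):
```
from sklearn.ensemble import IsolationForest

iso = IsolationForest(contamination=0.05, random_state=0).fit(components)
y_iso = iso.predict(components)   # same convention: 1 = normal, -1 = anomaly
print("number of detected outliers:", (y_iso == -1).sum())
```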
## 7. Summary
Today, we defined **anomaly detection**, and listed some of its common applications including fraud detection and data cleaning. We then described how to use **fitted Gaussian distributions** to identify outliers. This lead us to a discussion about the choice of **thresholds** and **hyperparameters**, where we went over a few different realistic scenarios. We then used a Gaussian distribution to remove terminator images from an emoji faces dataset. We learned how learning algorithms **fail** and that data scientists must know how to **debug** them. Finally, we used **PCA** to visualize our killer robot detection.
# Resources
## Core Resources
- [Anomaly detection algorithm](https://www.coursera.org/lecture/machine-learning/algorithm-C8IJp)
Andrew Ng's limpid breakdown of anomaly detection
## Additional Resources
- [A review of ML techniques for anomaly detection](https://youtu.be/LRqX5uO5StA)
More in depth review of modern techniques for anomaly detection
- [Anomaly Detection in sklearn](https://sdsawtelle.github.io/blog/output/week9-anomaly-andrew-ng-machine-learning-with-python.html)
Visual blogpost experimenting with the various outlier detection algorithms available in sklearn
- [sklearn official documentation - outlier detection](https://scikit-learn.org/stable/modules/outlier_detection.html)
# Import Necessary Libraries
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.metrics import precision_score, recall_score
# display images
from IPython.display import Image
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
import seaborn as sns
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import style
# Algorithms
from sklearn import linear_model
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.naive_bayes import GaussianNB
```
# Titanic
Titanic was a British passenger liner that sank in the North Atlantic Ocean in the early morning hours of 15 April 1912, after it collided with an iceberg during its maiden voyage from Southampton to New York City. There were an estimated 2,224 passengers and crew aboard the ship, and more than 1,500 died, making it one of the deadliest commercial peacetime maritime disasters in modern history. The RMS Titanic was the largest ship afloat at the time it entered service and was the second of three Olympic-class ocean liners operated by the White Star Line. The Titanic was built by the Harland and Wolff shipyard in Belfast. Thomas Andrews, her architect, died in the disaster.
```
# Image of Titanic ship
Image(filename='C:/Users/Nemgeree Armanonah/Documents/GitHub/Titanic/images/ship.jpeg')
```
# Getting the Data
```
#reading train.csv
data = pd.read_csv('./titanic datasets/train.csv')
data
```
## Exploring Data
```
data.info()
```
### Describe Statistics
The describe method is used to view some basic statistical details of the numerical columns, such as PassengerId, Survived, Age, etc.
```
data.describe()
```
### View All Features
```
data.columns.values
```
### What features could contribute to a high survival rate ?
To us it would make sense that everything except 'PassengerId', 'Ticket' and 'Name' could be correlated with a high survival rate.
```
# defining variables
survived = 'survived'
not_survived = 'not survived'
# data to be plotted
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10, 4))
women = data[data['Sex']=='female']
men = data[data['Sex']=='male']
# plot the data
ax = sns.distplot(women[women['Survived']==1].Age.dropna(), bins=18, label = survived, ax = axes[0], kde =False)
ax = sns.distplot(women[women['Survived']==0].Age.dropna(), bins=40, label = not_survived, ax = axes[0], kde =False)
ax.legend()
ax.set_title('Female')
ax = sns.distplot(men[men['Survived']==1].Age.dropna(), bins=18, label = survived, ax = axes[1], kde = False)
ax = sns.distplot(men[men['Survived']==0].Age.dropna(), bins=40, label = not_survived, ax = axes[1], kde = False)
ax.legend()
_ = ax.set_title('Male')
# count the null values
null_values = data.isnull().sum()
null_values
plt.plot(null_values)
plt.grid()
plt.show()
```
## Data Processing
```
def handle_non_numerical_data(df):
columns = df.columns.values
for column in columns:
text_digit_vals = {}
        def convert_to_int(val):
            # look up the integer code assigned to this categorical value
            return text_digit_vals[val]
#print(column,df[column].dtype)
if df[column].dtype != np.int64 and df[column].dtype != np.float64:
column_contents = df[column].values.tolist()
#finding just the uniques
unique_elements = set(column_contents)
# great, found them.
x = 0
for unique in unique_elements:
if unique not in text_digit_vals:
text_digit_vals[unique] = x
x+=1
df[column] = list(map(convert_to_int,df[column]))
return df
y_target = data['Survived']
# Y_target.reshape(len(Y_target),1)
x_train = data[['Pclass', 'Age', 'Sex', 'SibSp', 'Parch', 'Fare','Embarked', 'Ticket']]
x_train = handle_non_numerical_data(x_train)
x_train.head()
fare = pd.DataFrame(x_train['Fare'])
# Normalizing
min_max_scaler = preprocessing.MinMaxScaler()
newfare = min_max_scaler.fit_transform(fare)
x_train['Fare'] = newfare
x_train
null_values = x_train.isnull().sum()
null_values
plt.plot(null_values)
plt.show()
# Fill the NAN values with the median values in the datasets
x_train['Age'] = x_train['Age'].fillna(x_train['Age'].median())
print("Number of NULL values" , x_train['Age'].isnull().sum())
x_train.head()
x_train['Sex'] = x_train['Sex'].replace('male', 0)
x_train['Sex'] = x_train['Sex'].replace('female', 1)
# print(type(x_train))
corr = x_train.corr()
corr.style.background_gradient()
def plot_corr(df,size=10):
corr = df.corr()
fig, ax = plt.subplots(figsize=(size, size))
ax.matshow(corr)
plt.xticks(range(len(corr.columns)), corr.columns);
plt.yticks(range(len(corr.columns)), corr.columns);
# plot_corr(x_train)
x_train.corr()
corr.style.background_gradient()
# Dividing the data into train and test data set
X_train, X_test, Y_train, Y_test = train_test_split(x_train, y_target, test_size = 0.4, random_state = 40)
clf = RandomForestClassifier()
clf.fit(X_train, Y_train)
print(clf.predict(X_test))
print("Accuracy: ",clf.score(X_test, Y_test))
```
## Testing the model
```
test_data = pd.read_csv('./titanic datasets/test.csv')
test_data.head(3)
# test_data.isnull().sum()
```
### Preprocessing on the test data
```
test_data = test_data[['Pclass', 'Age', 'Sex', 'SibSp', 'Parch', 'Fare', 'Embarked', 'Ticket']]  # same column order as the training features
test_data = handle_non_numerical_data(test_data)
fare = pd.DataFrame(test_data['Fare'])
min_max_scaler = preprocessing.MinMaxScaler()
newfare = min_max_scaler.fit_transform(fare)
test_data['Fare'] = newfare
test_data['Fare'] = test_data['Fare'].fillna(test_data['Fare'].median())
test_data['Age'] = test_data['Age'].fillna(test_data['Age'].median())
test_data['Sex'] = test_data['Sex'].replace('male', 0)
test_data['Sex'] = test_data['Sex'].replace('female', 1)
print(test_data.head())
print(clf.predict(test_data))
from sklearn.model_selection import cross_val_predict
predictions = cross_val_predict(clf, X_train, Y_train, cv=3)
print("Precision:", precision_score(Y_train, predictions))
print("Recall:",recall_score(Y_train, predictions))
from sklearn.metrics import precision_recall_curve
# getting the probabilities of our predictions
y_scores = clf.predict_proba(X_train)
y_scores = y_scores[:,1]
precision, recall, threshold = precision_recall_curve(Y_train, y_scores)
def plot_precision_and_recall(precision, recall, threshold):
plt.plot(threshold, precision[:-1], "r-", label="precision", linewidth=5)
plt.plot(threshold, recall[:-1], "b", label="recall", linewidth=5)
plt.xlabel("threshold", fontsize=19)
plt.legend(loc="upper right", fontsize=19)
plt.ylim([0, 1])
plt.figure(figsize=(14, 7))
plot_precision_and_recall(precision, recall, threshold)
plt.axis([0.3,0.8,0.8,1])
plt.show()
def plot_precision_vs_recall(precision, recall):
plt.plot(recall, precision, "g--", linewidth=2.5)
plt.ylabel("recall", fontsize=19)
plt.xlabel("precision", fontsize=19)
plt.axis([0, 1.5, 0, 1.5])
plt.figure(figsize=(14, 7))
plot_precision_vs_recall(precision, recall)
plt.show()
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
predictions = cross_val_predict(clf, X_train, Y_train, cv=3)
confusion_matrix(Y_train, predictions)
```
True positive: 143 (we predicted a positive result and it was positive)
True negative: 293 (we predicted a negative result and it was negative)
False positive: 34 (we predicted a positive result and it was negative)
False negative: 64 (we predicted a negative result and it was positive)
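Scikit-learn's `confusion_matrix` lays the counts out with true labels as rows and predicted labels as columns, i.e. `[[TN, FP], [FN, TP]]` for labels 0/1, so it is easy to mix the cells up. A small added sketch that unpacks them explicitly and recomputes precision and recall from the counts:
```
tn, fp, fn, tp = confusion_matrix(Y_train, predictions).ravel()
print("TN:", tn, "FP:", fp, "FN:", fn, "TP:", tp)
print("precision:", tp / (tp + fp))   # of all predicted survivors, how many really survived
print("recall:   ", tp / (tp + fn))   # of all real survivors, how many we found
```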
## Gaussian Transformation with Scikit-learn
Scikit-learn has recently released transformers to do Gaussian mappings, as they call the variable transformations. The PowerTransformer allows us to do the Box-Cox and Yeo-Johnson transformations. With the FunctionTransformer, we can specify any function we want.
The transformers per se do not allow us to select columns, but we can do so using a third transformer, the ColumnTransformer.
Another thing to keep in mind is that Scikit-learn transformers return NumPy arrays, not dataframes, so we need to be mindful of the column order so as not to mix up our features.
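As a minimal added sketch of the ColumnTransformer idea (not used in the demo below), a PowerTransformer can be restricted to a subset of columns while the remaining ones pass through untouched; the column names here are just examples from this dataset:
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import PowerTransformer

skewed_cols = ['GrLivArea', 'LotArea']  # example columns to transform

ct = ColumnTransformer(
    transformers=[('yeo', PowerTransformer(method='yeo-johnson'), skewed_cols)],
    remainder='passthrough')

# ct.fit_transform(data) would return a NumPy array whose columns follow
# the order of the transformer list, with the passthrough columns at the end
```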
## Important
Box-Cox and Yeo-Johnson transformations need to learn their parameters from the data. Therefore, as always, before attempting any transformation it is important to divide the dataset into train and test set.
In this demo, I will not do so for simplicity, but when using this transformation in your pipelines, please make sure you do so.
## In this demo
We will see how to implement variable transformations using Scikit-learn and the House Prices dataset.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
from sklearn.preprocessing import FunctionTransformer, PowerTransformer
# load the data
data = pd.read_csv('../houseprice.csv')
data.head()
```
Let's select the numerical and positive variables in the dataset for this demonstration, as most of the transformations require the variables to be positive.
```
cols = []
for col in data.columns:
if data[col].dtypes != 'O' and col != 'Id': # if the variable is numerical
if np.sum(np.where(data[col] <= 0, 1, 0)) == 0: # if the variable is positive
cols.append(col) # append variable to the list
cols
# let's explore the distribution of the numerical variables
data[cols].hist(figsize=(20,20))
plt.show()
```
## Plots to assess normality
To visualise the distribution of the variables, we plot a histogram and a Q-Q plot. In the Q-Q plots, if the variable is normally distributed, the values of the variable should fall along a 45 degree line when plotted against the theoretical quantiles. We discussed this extensively in Section 3 of this course.
```
# plot the histograms to have a quick look at the variable distribution
# histogram and Q-Q plots
def diagnostic_plots(df, variable):
# function to plot a histogram and a Q-Q plot
# side by side, for a certain variable
plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
df[variable].hist(bins=30)
plt.subplot(1, 2, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.show()
```
### Logarithmic transformation
```
# create a log transformer
transformer = FunctionTransformer(np.log, validate=True)
# transform all the numerical and positive variables
data_t = transformer.transform(data[cols].fillna(1))
# Scikit-learn returns NumPy arrays, so capture in dataframe
# note that Scikit-learn will return an array with
# only the columns indicated in cols
data_t = pd.DataFrame(data_t, columns = cols)
# original distribution
diagnostic_plots(data, 'GrLivArea')
# transformed distribution
diagnostic_plots(data_t, 'GrLivArea')
# original distribution
diagnostic_plots(data, 'MSSubClass')
# transformed distribution
diagnostic_plots(data_t, 'MSSubClass')
```
### Reciprocal transformation
```
# create the transformer
transformer = FunctionTransformer(lambda x: 1/x, validate=True)
# also
# transformer = FunctionTransformer(np.reciprocal, validate=True)
# transform the positive variables
data_t = transformer.transform(data[cols].fillna(1))
# re-capture in a dataframe
data_t = pd.DataFrame(data_t, columns = cols)
# transformed variable
diagnostic_plots(data_t, 'GrLivArea')
# transformed variable
diagnostic_plots(data_t, 'MSSubClass')
```
### Square root transformation
```
transformer = FunctionTransformer(lambda x: x**(1/2), validate=True)
# also
# transformer = FunctionTransformer(np.sqrt, validate=True)
data_t = transformer.transform(data[cols].fillna(1))
data_t = pd.DataFrame(data_t, columns = cols)
diagnostic_plots(data_t, 'GrLivArea')
diagnostic_plots(data_t, 'MSSubClass')
```
### Exponential
```
transformer = FunctionTransformer(lambda x: x**(1/1.2), validate=True)
data_t = transformer.transform(data[cols].fillna(1))
data_t = pd.DataFrame(data_t, columns = cols)
diagnostic_plots(data_t, 'GrLivArea')
diagnostic_plots(data_t, 'MSSubClass')
```
### Box-Cox transformation
```
# create the transformer
transformer = PowerTransformer(method='box-cox', standardize=False)
# find the optimal lambda using the train set
transformer.fit(data[cols].fillna(1))
# transform the data
data_t = transformer.transform(data[cols].fillna(1))
# capture data in a dataframe
data_t = pd.DataFrame(data_t, columns = cols)
diagnostic_plots(data_t, 'GrLivArea')
diagnostic_plots(data_t, 'MSSubClass')
```
### Yeo-Johnson
Yeo-Johnson is an adaptation of Box-Cox that can also be used on variables that take negative values. So let's expand the list of variables for the demo to include those that contain zero and negative values as well.
```
cols = [
'MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual',
'OverallCond', 'MasVnrArea', 'BsmtFinSF1',
'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF',
'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath',
'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd',
'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea', 'WoodDeckSF',
'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea',
'MiscVal', 'SalePrice'
]
# call the transformer
transformer = PowerTransformer(method='yeo-johnson', standardize=False)
# learn the lambda from the train set
transformer.fit(data[cols].fillna(1))
# transform the data
data_t = transformer.transform(data[cols].fillna(1))
# capture data in a dataframe
data_t = pd.DataFrame(data_t, columns = cols)
diagnostic_plots(data_t, 'GrLivArea')
diagnostic_plots(data_t, 'MSSubClass')
```
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
from jupyterthemes import jtplot
jtplot.style()
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
import os
import json
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook
from datetime import datetime
from matplotlib import pyplot as plt
%matplotlib inline
target_stocks = ['BANPU','IRPC','PTT','BBL','KBANK','SCB','AOT','THAI','CPF','MINT',
'TU','SCC','CPN','CK','CPALL','HMPRO','BDMS','BH','ADVANC','JAS','TRUE']
with open('../data/kaohoon.json') as json_data:
data = json.load(json_data)
len(data)
data[20]
entry = []
for i, row in tqdm_notebook(enumerate(data)):
if len(row['Stock Include']) > 1: continue
for stock in row['Stock Include']:
if stock in target_stocks:
entry.append([
row['Date'],
stock,
# row['Stock Include'],
row['Content']
])
df = pd.DataFrame.from_records(entry)
df[0] = pd.to_datetime(df[0], format='%Y-%m-%d')
df.columns = ['Date','Ticker','Text']
df.to_csv('../data/kaohoon.csv', index=False)
df.head()
```
## Money Channel
```
with open('../data/moneychanel.json') as json_data:
data = json.load(json_data)
len(data)
data[13]
entry = []
for i, row in tqdm_notebook(enumerate(data)):
if len(row['Stock Include']) > 1: continue
for stock in row['Stock Include']:
if stock in target_stocks:
entry.append([
row['Date'],
stock,
row['Content']
])
df = pd.DataFrame.from_records(entry)
df[0] = pd.to_datetime(df[0], format='%Y-%m-%d')
df.columns = ['Date','Ticker','Text']
df.head()
df.to_csv('../data/moneychanel.csv', index=False)
```
## Pantip
```
with open('../data/pantip.json') as json_data:
data = json.load(json_data)
len(data)
data[3]
data[3]['date']
data[3]['stock']
text = data[3]['head']+' '+data[3]['content']
text
for x in data[3]['comments']:
text += x['message']
text
entry = []
for i, row in tqdm_notebook(enumerate(data)):
if len(row['stock']) > 1: continue
for stock in row['stock']:
if stock in target_stocks:
text = row['head']+' '+row['content']
for comment in row['comments']:
text += comment['message']
entry.append([
row['date'],
stock,
text
])
df = pd.DataFrame.from_records(entry)
df[0] = pd.to_datetime(df[0], format='%Y-%m-%d')
df.columns = ['Date','Ticker','Text']
df.head()
df.to_csv('../data/pantip.csv', index=False)
```
## Twitter
```
with open('../data/twitter.json') as json_data:
data = json.load(json_data)
len(data)
data[0]
entry = []
for i, row in tqdm_notebook(enumerate(data)):
if len(row['Stock Include']) > 1: continue
for stock in row['Stock Include']:
if stock in target_stocks:
entry.append([
row['date'],
stock,
row['text']
])
df = pd.DataFrame.from_records(entry)
df[0] = pd.to_datetime(df[0], format='%Y-%m-%d')
df.columns = ['Date','Ticker','Text']
df.head()
df.to_csv('../data/twitter.csv', index=False)
```
# In-Class Coding Lab: Iterations
The goals of this lab are to help you to understand:
- How loops work.
- The difference between definite and indefinite loops, and when to use each.
- How to build an indefinite loop with complex exit conditions.
- How to create a program from a complex idea.
# Understanding Iterations
Iterations permit us to repeat code until a Boolean expression is `False`. Iterations, or **loops**, allow us to write succinct, compact code. Here's an example, which counts to 3 before [Blitzing the Quarterback in backyard American Football](https://www.quora.com/What-is-the-significance-of-counting-one-Mississippi-two-Mississippi-and-so-on):
```
i = 1
while i <= 3:
print(i,"Mississippi...")
i=i+1
print("Blitz!")
```
## Breaking it down...
The `while` statement on line 2 starts the loop. The code indented beneath it (lines 3-4) will repeat, in a linear fashion until the Boolean expression on line 2 `i <= 3` is `False`, at which time the program continues with line 5.
### Some Terminology
We call `i <= 3` the loop's **exit condition**. The variable `i` inside the exit condition is the only thing that we can change to make the exit condition `False`, so it is the **loop control variable**. On line 4 we change the loop control variable by adding one to it; this is called an **increment**.
Furthermore, we know how many times this loop will execute before it actually runs: 3. Even if we allowed the user to enter a number, and looped that many times, we would still know. We call this a **definite loop**. Whenever we iterate over a fixed number of values, regardless of whether those values are determined at run-time or not, we're using a definite loop.
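For example, here's a small sketch of a definite loop whose bound comes from the user; even though the bound is only known at run-time, we still know how many times the loop will run before it starts (the prompt text is just illustrative):
```
times = int(input("How many Mississippis? "))
i = 1
while i <= times:
    print(i,"Mississippi...")
    i = i + 1
print("Blitz!")
```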
If the loop control variable never forces the exit condition to be `False`, we have an **infinite loop**. As the name implies, an infinite loop never ends and typically causes our computer to crash or lock up.
```
## WARNING!!! INFINITE LOOP AHEAD
## IF YOU RUN THIS CODE YOU WILL NEED TO KILL YOUR BROWSER AND SHUT DOWN JUPYTER NOTEBOOK
i = 1
while i <= 3:
print(i,"Mississippi...")
# i=i+1
print("Blitz!")
```
### For loops
To prevent an infinite loop when the loop is definite, we use the `for` statement. Here's the same program using `for`:
```
for i in range(1,4):
print(i,"Mississippi...")
print("Blitz!")
```
One confusing aspect of this loop is `range(1,4)`: why does this loop from 1 to 3? Why not 1 to 4? It's because the second argument of `range` (the stop value) is exclusive, so the loop stops just before reaching it. An easy way to think about it is that subtracting the two numbers gives the number of times it will loop. So, for example, 4-1 == 3.
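As a quick check, converting the range to a list shows exactly which values the loop will see:
```
list(range(1,4))  # [1, 2, 3] -- the stop value 4 is not included
```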
### Now Try It
In the space below, re-write the above program to count from 10 to 15. Note: how many times will that loop?
```
# TODO Write code here
```
## Indefinite loops
With **indefinite loops** we do not know how many times the program will execute. This is typically based on user action, and therefore our loop is subject to the whims of whoever interacts with it. Most applications like spreadsheets, photo editors, and games use indefinite loops. They'll run on your computer, seemingly forever, until you choose to quit the application.
The classic indefinite loop pattern involves getting input from the user inside the loop. We then inspect the input and based on that input we might exit the loop. Here's an example:
```
name = ""
while name != 'mike':
name = input("Say my name! : ")
print("Nope, my name is not %s! " %(name))
```
The classic problem with indefinite loops is that it's really difficult to get the application's logic to line up with the exit condition. For example we need to set `name = ""` in line 1 so that line 2 starts out as `True`. Also we have this wonky logic where, even when we say `'mike'`, it still prints `Nope, my name is not mike!` before exiting.
### Break statement
The solution to this problem is to use the break statement. **break** tells Python to exit the loop immediately. We then re-structure all of our indefinite loops to look like this:
```
while True:
if exit-condition:
break
```
Here's our program re-written with the break statement. This is the recommended way to write indefinite loops in this course.
```
while True:
name = input("Say my name!: ")
if name == 'mike':
break
print("Nope, my name is not %s!" %(name))
```
### Multiple exit conditions
This indefinite loop pattern makes it easy to add additional exit conditions. For example, here's the program again, but it now stops when you say my name or type in 3 wrong names. Make sure to run this program a couple of times. First enter mike to exit the program, next enter the wrong name 3 times.
```
times = 0
while True:
name = input("Say my name!: ")
times = times + 1
if name == 'mike':
print("You got it!")
break
if times == 3:
print("Game over. Too many tries!")
break
print("Nope, my name is not %s!" %(name))
```
# Number sums
Let's conclude the lab with you writing your own program which
uses an indefinite loop. We'll provide the to-do list; you write the code. This program should ask for floating point numbers as input and stop looping when **the total of the numbers entered is over 100**, or **more than 5 numbers have been entered**. Those are your two exit conditions. After the loop stops, print out the total of the numbers entered and the count of numbers entered.
```
## TO-DO List
#1 count = 0
#2 total = 0
#3 loop Indefinitely
#4. input a number
#5 increment count
#6 add number to total
#7 if count equals 5 stop looping
#8 if total greater than 100 stop looping
#9 print total and count
# Write Code here:
count = 0
total = 0
while True:
    number = float(input("enter a number:"))
    count = count + 1
    total = total + number
    if count == 5:
        break
    if total > 100:
        break
print("total:", total,"count:", count)
```
# Chapter 8: Neural Networks
Using the news article category classification task from Chapter 6, implement a category classification model with a neural network. In this chapter, make use of a machine learning platform such as PyTorch, TensorFlow, or Chainer.
## 70. Features from sums of word vectors
***
We want to convert the training, validation, and test data built in Problem 50 into matrices and vectors. For example, for the training data, we want to create a matrix $X$ whose rows are the feature vectors $\boldsymbol{x}_i$ of all examples $x_i$, and a matrix (vector) $Y$ of the corresponding gold labels.
$$
X = \begin{pmatrix}
\boldsymbol{x}_1 \\
\boldsymbol{x}_2 \\
\dots \\
\boldsymbol{x}_n \\
\end{pmatrix} \in \mathbb{R}^{n \times d},
Y = \begin{pmatrix}
y_1 \\
y_2 \\
\dots \\
y_n \\
\end{pmatrix} \in \mathbb{N}^{n}
$$
Here, $n$ is the number of training examples, and $\boldsymbol x_i \in \mathbb{R}^d$ and $y_i \in \mathbb N$ denote the feature vector and the gold label of the $i$-th example ($i \in \{1, \dots, n\}$), respectively.
Note that this is a four-category classification task over "business", "science and technology", "entertainment", and "health". If we let $\mathbb N_{<4}$ denote the natural numbers less than $4$ (including $0$), the gold label of any example can be written as $y_i \in \mathbb N_{<4}$.
In what follows, $L$ denotes the number of label types ($L=4$ for this classification task).
The feature vector $\boldsymbol x_i$ of the $i$-th example is computed by the following formula.
$$\boldsymbol x_i = \frac{1}{T_i} \sum_{t=1}^{T_i} \mathrm{emb}(w_{i,t})$$
Here, the $i$-th example consists of a sequence of $T_i$ words $(w_{i,1}, w_{i,2}, \dots, w_{i,T_i})$ from the article headline, and $\mathrm{emb}(w) \in \mathbb{R}^d$ is the word vector (of dimension $d$) corresponding to word $w$. In other words, $\boldsymbol x_i$ represents the headline of the $i$-th example as the average of the vectors of the words contained in that headline. For the word vectors, use the ones downloaded in Problem 60; since those vectors are $300$-dimensional, $d=300$.
The label $y_i$ of the $i$-th example is defined as follows.
$$
y_i = \begin{cases}
0 & (\mbox{if article } \boldsymbol x_i \mbox{ is in the business category}) \\
1 & (\mbox{if article } \boldsymbol x_i \mbox{ is in the science and technology category}) \\
2 & (\mbox{if article } \boldsymbol x_i \mbox{ is in the entertainment category}) \\
3 & (\mbox{if article } \boldsymbol x_i \mbox{ is in the health category}) \\
\end{cases}
$$
Note that the mapping does not have to be exactly as in the formula above, as long as category names and label numbers correspond one-to-one.
Based on the above specification, create the following matrices and vectors and save them to files.
+ Training data feature matrix: $X_{\rm train} \in \mathbb{R}^{N_t \times d}$
+ Training data label vector: $Y_{\rm train} \in \mathbb{N}^{N_t}$
+ Validation data feature matrix: $X_{\rm valid} \in \mathbb{R}^{N_v \times d}$
+ Validation data label vector: $Y_{\rm valid} \in \mathbb{N}^{N_v}$
+ Test data feature matrix: $X_{\rm test} \in \mathbb{R}^{N_e \times d}$
+ Test data label vector: $Y_{\rm test} \in \mathbb{N}^{N_e}$
Here, $N_t, N_v, N_e$ are the numbers of examples in the training, validation, and test data, respectively.
```
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00359/NewsAggregatorDataset.zip
!unzip NewsAggregatorDataset.zip
!wc -l ./newsCorpora.csv
!head -10 ./newsCorpora.csv
# 読込時のエラー回避のためダブルクォーテーションをシングルクォーテーションに置換
!sed -e 's/"/'\''/g' ./newsCorpora.csv > ./newsCorpora_re.csv
import pandas as pd
from sklearn.model_selection import train_test_split
# データの読込
df = pd.read_csv('./newsCorpora_re.csv', header=None, sep='\t', names=['ID', 'TITLE', 'URL', 'PUBLISHER', 'CATEGORY', 'STORY', 'HOSTNAME', 'TIMESTAMP'])
# データの抽出
df = df.loc[df['PUBLISHER'].isin(['Reuters', 'Huffington Post', 'Businessweek', 'Contactmusic.com', 'Daily Mail']), ['TITLE', 'CATEGORY']]
# データの分割
train, valid_test = train_test_split(df, test_size=0.2, shuffle=True, random_state=123, stratify=df['CATEGORY'])
valid, test = train_test_split(valid_test, test_size=0.5, shuffle=True, random_state=123, stratify=valid_test['CATEGORY'])
# 事例数の確認
print('【学習データ】')
print(train['CATEGORY'].value_counts())
print('【検証データ】')
print(valid['CATEGORY'].value_counts())
print('【評価データ】')
print(test['CATEGORY'].value_counts())
train.to_csv('drive/My Drive/nlp100/data/train.tsv', index=False, sep='\t', header=False)
valid.to_csv('drive/My Drive/nlp100/data/valid.tsv', index=False, sep='\t', header=False)
test.to_csv('drive/My Drive/nlp100/data/test.tsv', index=False, sep='\t', header=False)
import gdown
from gensim.models import KeyedVectors
# 学習済み単語ベクトルのダウンロード
url = "https://drive.google.com/uc?id=0B7XkCwpI5KDYNlNUTTlSS21pQmM"
output = 'GoogleNews-vectors-negative300.bin.gz'
gdown.download(url, output, quiet=True)
# ダウンロードファイルのロード
model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
import string
import torch
def transform_w2v(text):
table = str.maketrans(string.punctuation, ' '*len(string.punctuation))
words = text.translate(table).split() # 記号をスペースに置換後、スペースで分割してリスト化
vec = [model[word] for word in words if word in model] # 1語ずつベクトル化
return torch.tensor(sum(vec) / len(vec)) # 平均ベクトルをTensor型に変換して出力
# 特徴ベクトルの作成
X_train = torch.stack([transform_w2v(text) for text in train['TITLE']])
X_valid = torch.stack([transform_w2v(text) for text in valid['TITLE']])
X_test = torch.stack([transform_w2v(text) for text in test['TITLE']])
print(X_train.size())
print(X_train)
# ラベルベクトルの作成
category_dict = {'b': 0, 't': 1, 'e':2, 'm':3}
y_train = torch.LongTensor(train['CATEGORY'].map(lambda x: category_dict[x]).values)
y_valid = torch.LongTensor(valid['CATEGORY'].map(lambda x: category_dict[x]).values)
y_test = torch.LongTensor(test['CATEGORY'].map(lambda x: category_dict[x]).values)
print(y_train.size())
print(y_train)
# 保存
torch.save(X_train, 'X_train.pt')
torch.save(X_valid, 'X_valid.pt')
torch.save(X_test, 'X_test.pt')
torch.save(y_train, 'y_train.pt')
torch.save(y_valid, 'y_valid.pt')
torch.save(y_test, 'y_test.pt')
```
## 71. Prediction with a single-layer neural network
***
Load the matrices saved in Problem 70 and perform the following computations on the training data.
$$
\hat{y}_1=softmax(x_1W),\\\hat{Y}=softmax(X_{[1:4]}W)
$$
Here, $softmax$ is the softmax function and $X_{[1:4]} \in \mathbb{R}^{4 \times d}$ is the matrix obtained by stacking the feature vectors $x_1$, $x_2$, $x_3$, $x_4$ vertically.
$$
X_{[1:4]}=\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}
$$
The matrix $W \in \mathbb{R}^{d \times L}$ is the weight matrix of the single-layer neural network; here it may be initialized with random values (it will be learned from Problem 73 onward). Note that $\hat{\boldsymbol y_1} \in \mathbb{R}^L$ is the vector of probabilities of belonging to each category when example $x_1$ is classified with the untrained matrix $W$.
Similarly, $\hat{Y} \in \mathbb{R}^{n \times L}$ expresses, as a matrix, the probabilities of belonging to each category for the training examples $x_1, x_2, x_3, x_4$.
```
from torch import nn
torch.manual_seed(0)
class SLPNet(nn.Module):
def __init__(self, input_size, output_size):
super().__init__()
self.fc = nn.Linear(input_size, output_size, bias=False) # Linear(入力次元数, 出力次元数)
nn.init.normal_(self.fc.weight, 0.0, 1.0) # 正規乱数で重みを初期化
def forward(self, x):
x = self.fc(x)
return x
model = SLPNet(300, 4)
y_hat_1 = torch.softmax(model.forward(X_train[:1]), dim=-1)
print(y_hat_1)
Y_hat = torch.softmax(model.forward(X_train[:4]), dim=-1)
print(Y_hat)
```
## 72. Computing the loss and gradients
***
For the training example $x_1$ and the example set $x_1$, $x_2$, $x_3$, $x_4$, compute the cross-entropy loss and the gradient with respect to the matrix $W$. For an example $x_i$, the loss is computed by the following formula.
$$l_i = -\log[\text{probability that example } x_i \text{ is classified as label } y_i]$$
Note that the cross-entropy loss for a set of examples is the average of the losses of the individual examples in that set.
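As a sanity check, here is a minimal sketch (assuming `model`, `X_train`, and `y_train` from the earlier cells) of computing $l_1$ directly from the definition; it should match the value returned by `nn.CrossEntropyLoss` in the cell below:
```
probs = torch.softmax(model(X_train[:1]), dim=-1)  # predicted category probabilities for x_1
l_1_manual = -torch.log(probs[0, y_train[0]])      # -log of the probability assigned to the gold label
print(l_1_manual)
```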
```
criterion = nn.CrossEntropyLoss()
l_1 = criterion(model.forward(X_train[:1]), y_train[:1]) # 入力ベクトルはsoftmax前の値
model.zero_grad() # 勾配をゼロで初期化
l_1.backward() # 勾配を計算
print(f'損失: {l_1:.4f}')
print(f'勾配:\n{model.fc.weight.grad}')
l = criterion(model.forward(X_train[:4]), y_train[:4])
model.zero_grad()
l.backward()
print(f'損失: {l:.4f}')
print(f'勾配:\n{model.fc.weight.grad}')
```
## 73. Learning with stochastic gradient descent
***
Learn the matrix $W$ using stochastic gradient descent (SGD). Training may be terminated by any reasonable criterion (for example, "stop after 100 epochs").
```
from torch.utils.data import Dataset
class CreateDataset(Dataset):
def __init__(self, X, y): # datasetの構成要素を指定
self.X = X
self.y = y
def __len__(self): # len(dataset)で返す値を指定
return len(self.y)
def __getitem__(self, idx): # dataset[idx]で返す値を指定
if isinstance(idx, torch.Tensor):
idx = idx.tolist()
return [self.X[idx], self.y[idx]]
from torch.utils.data import DataLoader
dataset_train = CreateDataset(X_train, y_train)
dataset_valid = CreateDataset(X_valid, y_valid)
dataset_test = CreateDataset(X_test, y_test)
dataloader_train = DataLoader(dataset_train, batch_size=1, shuffle=True)
dataloader_valid = DataLoader(dataset_valid, batch_size=len(dataset_valid), shuffle=False)
dataloader_test = DataLoader(dataset_test, batch_size=len(dataset_test), shuffle=False)
print(len(dataset_train))
print(next(iter(dataloader_train)))
# モデルの定義
model = SLPNet(300, 4)
# 損失関数の定義
criterion = nn.CrossEntropyLoss()
# オプティマイザの定義
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
# 学習
num_epochs = 10
for epoch in range(num_epochs):
# 訓練モードに設定
model.train()
loss_train = 0.0
for i, (inputs, labels) in enumerate(dataloader_train):
# 勾配をゼロで初期化
optimizer.zero_grad()
# 順伝播 + 誤差逆伝播 + 重み更新
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# 損失を記録
loss_train += loss.item()
# バッチ単位の平均損失計算
    loss_train = loss_train / len(dataloader_train)
# 検証データの損失計算
model.eval()
with torch.no_grad():
inputs, labels = next(iter(dataloader_valid))
outputs = model.forward(inputs)
loss_valid = criterion(outputs, labels)
# ログを出力
print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, loss_valid: {loss_valid:.4f}')
```
## 74. Measuring accuracy
***
Using the matrix obtained in Problem 73, classify the examples in the training data and the test data, and compute the accuracy on each.
```
def calculate_accuracy(model, X, y):
model.eval()
with torch.no_grad():
outputs = model(X)
pred = torch.argmax(outputs, dim=-1)
return (pred == y).sum().item() / len(y)
# 正解率の確認
acc_train = calculate_accuracy(model, X_train, y_train)
acc_test = calculate_accuracy(model, X_test, y_test)
print(f'正解率(学習データ):{acc_train:.3f}')
print(f'正解率(評価データ):{acc_test:.3f}')
```
## 75. Plotting the loss and accuracy
***
Modify the code from Problem 73 so that, each time the parameter updates for an epoch finish, the training loss, training accuracy, validation loss, and validation accuracy are plotted on graphs, allowing us to monitor the progress of training.
```
def calculate_loss_and_accuracy(model, criterion, loader):
model.eval()
loss = 0.0
total = 0
correct = 0
with torch.no_grad():
for inputs, labels in loader:
outputs = model(inputs)
loss += criterion(outputs, labels).item()
pred = torch.argmax(outputs, dim=-1)
total += len(inputs)
correct += (pred == labels).sum().item()
return loss / len(loader), correct / total
# モデルの定義
model = SLPNet(300, 4)
# 損失関数の定義
criterion = nn.CrossEntropyLoss()
# オプティマイザの定義
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
# 学習
num_epochs = 30
log_train = []
log_valid = []
for epoch in range(num_epochs):
# 訓練モードに設定
model.train()
for i, (inputs, labels) in enumerate(dataloader_train):
# 勾配をゼロで初期化
optimizer.zero_grad()
# 順伝播 + 誤差逆伝播 + 重み更新
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# 損失と正解率の算出
loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train)
loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid)
log_train.append([loss_train, acc_train])
log_valid.append([loss_valid, acc_valid])
# ログを出力
print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}')
import numpy as np
from matplotlib import pyplot as plt
# 可視化
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].plot(np.array(log_train).T[0], label='train')
ax[0].plot(np.array(log_valid).T[0], label='valid')
ax[0].set_xlabel('epoch')
ax[0].set_ylabel('loss')
ax[0].legend()
ax[1].plot(np.array(log_train).T[1], label='train')
ax[1].plot(np.array(log_valid).T[1], label='valid')
ax[1].set_xlabel('epoch')
ax[1].set_ylabel('accuracy')
ax[1].legend()
plt.show()
```
## 76. Checkpoints
***
Modify the code from Problem 75 so that, each time the parameter updates for an epoch finish, a checkpoint (the values of the parameters being learned, such as the weight matrix, and the internal state of the optimization algorithm) is written to a file.
```
# モデルの定義
model = SLPNet(300, 4)
# 損失関数の定義
criterion = nn.CrossEntropyLoss()
# オプティマイザの定義
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
# 学習
num_epochs = 10
log_train = []
log_valid = []
for epoch in range(num_epochs):
# 訓練モードに設定
model.train()
for inputs, labels in dataloader_train:
# 勾配をゼロで初期化
optimizer.zero_grad()
# 順伝播 + 誤差逆伝播 + 重み更新
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# 損失と正解率の算出
loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train)
loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid)
log_train.append([loss_train, acc_train])
log_valid.append([loss_valid, acc_valid])
# チェックポイントの保存
torch.save({'epoch': epoch, 'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict()}, f'checkpoint{epoch + 1}.pt')
# ログを出力
print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}')
```
## 77. Mini-batching
***
Modify the code from Problem 76 to compute the loss and gradients for every $B$ examples and update the matrix $W$ accordingly (mini-batching). Compare the time required for one epoch of training while changing $B$ to $1, 2, 4, 8, \dots$.
```
import time
def train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, num_epochs):
# dataloaderの作成
dataloader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
dataloader_valid = DataLoader(dataset_valid, batch_size=len(dataset_valid), shuffle=False)
# 学習
log_train = []
log_valid = []
for epoch in range(num_epochs):
# 開始時刻の記録
s_time = time.time()
# 訓練モードに設定
model.train()
for inputs, labels in dataloader_train:
# 勾配をゼロで初期化
optimizer.zero_grad()
# 順伝播 + 誤差逆伝播 + 重み更新
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# 損失と正解率の算出
loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train)
loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid)
log_train.append([loss_train, acc_train])
log_valid.append([loss_valid, acc_valid])
# チェックポイントの保存
torch.save({'epoch': epoch, 'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict()}, f'checkpoint{epoch + 1}.pt')
# 終了時刻の記録
e_time = time.time()
# ログを出力
print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}, {(e_time - s_time):.4f}sec')
return {'train': log_train, 'valid': log_valid}
# datasetの作成
dataset_train = CreateDataset(X_train, y_train)
dataset_valid = CreateDataset(X_valid, y_valid)
# モデルの定義
model = SLPNet(300, 4)
# 損失関数の定義
criterion = nn.CrossEntropyLoss()
# オプティマイザの定義
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
# モデルの学習
for batch_size in [2 ** i for i in range(11)]:
print(f'バッチサイズ: {batch_size}')
log = train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, 1)
```
## 78. Training on a GPU
***
Modify the code from Problem 77 to run the training on a GPU.
```
def calculate_loss_and_accuracy(model, criterion, loader, device):
model.eval()
loss = 0.0
total = 0
correct = 0
with torch.no_grad():
for inputs, labels in loader:
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
loss += criterion(outputs, labels).item()
pred = torch.argmax(outputs, dim=-1)
total += len(inputs)
correct += (pred == labels).sum().item()
return loss / len(loader), correct / total
def train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, num_epochs, device=None):
# GPUに送る
model.to(device)
# dataloaderの作成
dataloader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
dataloader_valid = DataLoader(dataset_valid, batch_size=len(dataset_valid), shuffle=False)
# 学習
log_train = []
log_valid = []
for epoch in range(num_epochs):
# 開始時刻の記録
s_time = time.time()
# 訓練モードに設定
model.train()
for inputs, labels in dataloader_train:
# 勾配をゼロで初期化
optimizer.zero_grad()
# 順伝播 + 誤差逆伝播 + 重み更新
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# 損失と正解率の算出
loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train, device)
loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid, device)
log_train.append([loss_train, acc_train])
log_valid.append([loss_valid, acc_valid])
# チェックポイントの保存
torch.save({'epoch': epoch, 'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict()}, f'checkpoint{epoch + 1}.pt')
# 終了時刻の記録
e_time = time.time()
# ログを出力
print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}, {(e_time - s_time):.4f}sec')
return {'train': log_train, 'valid': log_valid}
# datasetの作成
dataset_train = CreateDataset(X_train, y_train)
dataset_valid = CreateDataset(X_valid, y_valid)
# モデルの定義
model = SLPNet(300, 4)
# 損失関数の定義
criterion = nn.CrossEntropyLoss()
# オプティマイザの定義
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
# デバイスの指定
device = torch.device('cuda')
for batch_size in [2 ** i for i in range(11)]:
print(f'バッチサイズ: {batch_size}')
log = train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, 1, device=device)
```
## 79. Multi-layer neural network
***
Modify the code from Problem 78 and build a high-performing category classifier while changing the shape of the neural network, for example by introducing bias terms or adding more layers.
```
from torch.nn import functional as F
class MLPNet(nn.Module):
def __init__(self, input_size, mid_size, output_size, mid_layers):
super().__init__()
self.mid_layers = mid_layers
self.fc = nn.Linear(input_size, mid_size)
self.fc_mid = nn.Linear(mid_size, mid_size)
self.fc_out = nn.Linear(mid_size, output_size)
self.bn = nn.BatchNorm1d(mid_size)
def forward(self, x):
x = F.relu(self.fc(x))
for _ in range(self.mid_layers):
x = F.relu(self.bn(self.fc_mid(x)))
x = F.relu(self.fc_out(x))
return x
from torch import optim
def calculate_loss_and_accuracy(model, criterion, loader, device):
model.eval()
loss = 0.0
total = 0
correct = 0
with torch.no_grad():
for inputs, labels in loader:
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
loss += criterion(outputs, labels).item()
pred = torch.argmax(outputs, dim=-1)
total += len(inputs)
correct += (pred == labels).sum().item()
return loss / len(loader), correct / total
def train_model(dataset_train, dataset_valid, batch_size, model, criterion, optimizer, num_epochs, device=None):
# GPUに送る
model.to(device)
# dataloaderの作成
dataloader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=True)
dataloader_valid = DataLoader(dataset_valid, batch_size=len(dataset_valid), shuffle=False)
# スケジューラの設定
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, num_epochs, eta_min=1e-5, last_epoch=-1)
# 学習
log_train = []
log_valid = []
for epoch in range(num_epochs):
# 開始時刻の記録
s_time = time.time()
# 訓練モードに設定
model.train()
for inputs, labels in dataloader_train:
# 勾配をゼロで初期化
optimizer.zero_grad()
# 順伝播 + 誤差逆伝播 + 重み更新
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# 損失と正解率の算出
loss_train, acc_train = calculate_loss_and_accuracy(model, criterion, dataloader_train, device)
loss_valid, acc_valid = calculate_loss_and_accuracy(model, criterion, dataloader_valid, device)
log_train.append([loss_train, acc_train])
log_valid.append([loss_valid, acc_valid])
# チェックポイントの保存
torch.save({'epoch': epoch, 'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict()}, f'checkpoint{epoch + 1}.pt')
# 終了時刻の記録
e_time = time.time()
# ログを出力
print(f'epoch: {epoch + 1}, loss_train: {loss_train:.4f}, accuracy_train: {acc_train:.4f}, loss_valid: {loss_valid:.4f}, accuracy_valid: {acc_valid:.4f}, {(e_time - s_time):.4f}sec')
# 検証データの損失が3エポック連続で低下しなかった場合は学習終了
if epoch > 2 and log_valid[epoch - 3][0] <= log_valid[epoch - 2][0] <= log_valid[epoch - 1][0] <= log_valid[epoch][0]:
break
# スケジューラを1ステップ進める
scheduler.step()
return {'train': log_train, 'valid': log_valid}
# datasetの作成
dataset_train = CreateDataset(X_train, y_train)
dataset_valid = CreateDataset(X_valid, y_valid)
# モデルの定義
model = MLPNet(300, 200, 4, 1)
# 損失関数の定義
criterion = nn.CrossEntropyLoss()
# オプティマイザの定義
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# デバイスの指定
device = torch.device('cuda')
log = train_model(dataset_train, dataset_valid, 64, model, criterion, optimizer, 1000, device)
# 可視化
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].plot(np.array(log['train']).T[0], label='train')
ax[0].plot(np.array(log['valid']).T[0], label='valid')
ax[0].set_xlabel('epoch')
ax[0].set_ylabel('loss')
ax[0].legend()
ax[1].plot(np.array(log['train']).T[1], label='train')
ax[1].plot(np.array(log['valid']).T[1], label='valid')
ax[1].set_xlabel('epoch')
ax[1].set_ylabel('accuracy')
ax[1].legend()
plt.show()
def calculate_accuracy(model, X, y, device):
model.eval()
with torch.no_grad():
inputs = X.to(device)
outputs = model(inputs)
pred = torch.argmax(outputs, dim=-1).cpu()
return (pred == y).sum().item() / len(y)
# 正解率の確認
acc_train = calculate_accuracy(model, X_train, y_train, device)
acc_test = calculate_accuracy(model, X_test, y_test, device)
print(f'正解率(学習データ):{acc_train:.3f}')
print(f'正解率(評価データ):{acc_test:.3f}')
```
# Analyse a series
<div class="alert alert-block alert-warning">
<b>Under construction</b>
</div>
```
import os
import pandas as pd
from IPython.display import Image as DImage
from IPython.core.display import display, HTML
import series_details
# Plotly helps us make pretty charts
import plotly.offline as py
import plotly.graph_objs as go
# Make sure data directory exists
os.makedirs('../../data/RecordSearch/images', exist_ok=True)
# This lets Plotly draw charts in cells
py.init_notebook_mode()
```
This notebook is for analysing a series that you've already harvested. If you haven't harvested any data yet, then you need to go back to the ['Harvesting a series' notebook](Harvesting series.ipynb).
```
# What series do you want to analyse?
# Insert the series id between the quotes.
series = 'J2483'
# Load the CSV data for the specified series into a dataframe. Parse the dates as dates!
df = pd.read_csv('../data/RecordSearch/{}.csv'.format(series.replace('/', '-')), parse_dates=['start_date', 'end_date'])
```
Remember that you can download harvested data from the workbench [data directory](../data/RecordSearch).
## Get some summary data
We're going to create a simple summary of some of the main characteristics of the series, as reflected in the harvested files.
```
# We're going to assemble some summary data about the series in a 'summary' dictionary
# Let's create the dictionary and add the series identifier
summary = {'series': series}
# The 'shape' property returns the number of rows and columns. So 'shape[0]' gives us the number of items harvested.
summary['total_items'] = df.shape[0]
print(summary['total_items'])
# Get the frequency of the different access status categories
summary['access_counts'] = df['access_status'].value_counts().to_dict()
print(summary['access_counts'])
# Get the number of files that have been digitised
summary['digitised_files'] = len(df.loc[df['digitised_status'] == True])
print(summary['digitised_files'])
# Get the number of individual pages that have been digitised
summary['digitised_pages'] = df['digitised_pages'].sum()
print(summary['digitised_pages'])
# Get the earliest start date
start = df['start_date'].min()
try:
summary['date_from'] = start.year
except AttributeError:
summary['date_from'] = None
print(summary['date_from'])
# Get the latest end date
end = df['end_date'].max()
try:
summary['date_to'] = end.year
except AttributeError:
summary['date_to'] = None
print(summary['date_to'])
# Let's display all the summary data
print('SERIES: {}'.format(summary['series']))
print('Number of items: {:,}'.format(summary['total_items']))
print('Access status:')
for status, total in summary['access_counts'].items():
print(' {}: {:,}'.format(status, total))
print('Contents dates: {} to {}'.format(summary['date_from'], summary['date_to']))
print('Digitised files: {:,}'.format(summary['digitised_files']))
print('Digitised pages: {:,}'.format(summary['digitised_pages']))
```
Note that a slightly enhanced version of the code above is available in the `series_details` module that you can import into any notebook. So to create a summary of a series you can just:
```
# Import the module
import series_details
# Call display_series() providing the series name and the dataframe
series_details.display_summary(series, df)
```
## Plot the contents dates
Plotting the dates is a bit tricky. Each file can have both a start date and an end date. So if we want to plot the years covered by a file, we need to include all the years between the start and end dates. Also, dates can be recorded at different levels of granularity, from specific days to just years. And sometimes there are no end dates recorded at all – what does this mean?
The code in the cell below does a few things:
* It fills any empty end dates with the start date from the same item. This probably means some content years will be missed, but it's the only date we can be certain of.
* It loops through all the rows in the dataframe, then for each row it extracts the years between the start and end date. Currently this looks to see if the 1 January is covered by the date range, so if there's an exact start date after 1 January I don't think it will be captured. I need to investigate this further.
* It combines all of the years into one big series and then totals up the frequency of each year.
I'm sure this is not perfect, but it seems to produce useful results.
```
# Fill any blank end dates with start dates
df['end_date'] = df[['end_date']].apply(lambda x: x.fillna(value=df['start_date']))
# This is a bit tricky.
# For each item we want to find the years that it has content from -- ie start_year <= year <= end_year.
# Then we want to put all the years from all the items together and look at their frequency
years = []
for row in df.itertuples(index=False):
try:
years_in_range = pd.date_range(start=row.start_date, end=row.end_date, freq='AS').year.to_series()
except ValueError:
# No start date
pass
else:
years.append(years_in_range)
year_counts = pd.concat(years).value_counts()
# Put the resulting series in a dataframe so it looks pretty.
year_totals = pd.DataFrame(year_counts)
# Sort results by year
year_totals.sort_index(inplace=True)
# Display the results
year_totals.style.format({0: '{:,}'})
# Let's graph the frequency of content years
plotly_data = [go.Bar(
x=year_totals.index.values, # The years are the index
y=year_totals[0]
)]
# Add some labels
layout = go.Layout(
title='Content dates',
xaxis=dict(
title='Year'
),
yaxis=dict(
title='Number of items'
)
)
# Create a chart
fig = go.Figure(data=plotly_data, layout=layout)
py.iplot(fig, filename='series-dates-bar')
```
Note that a slightly enhanced version of the code above is available in the series_details module that you can import into any notebook. So to create a summary of a series you can just:
```
# Import the module
import series_details
# Call plot_series() providing the series name and the dataframe
fig = series_details.plot_dates(df)
py.iplot(fig)
```
## Filter by words in file titles
```
# Find titles containing a particular phrase -- in this case 'wife'
# This creates a new dataframe
# Try changing this to filter for other words
search_term = 'wife'
df_filtered = df.loc[df['title'].str.contains(search_term, case=False)].copy()
df_filtered
# We can plot this filtered dataframe just like the series
fig = series_details.plot_dates(df_filtered)
py.iplot(fig)
# Save the new dataframe as a csv
df_filtered.to_csv('../data/RecordSearch/{}-{}.csv'.format(series.replace('/', '-'), search_term))
# Find titles containing one of two words -- ie an OR statement
# Try changing this to filter for other words
df_filtered = df.loc[df['title'].str.contains('chinese', case=False) | df['title'].str.contains(r'\bah\b', case=False)].copy()
df_filtered
```
## Filter by date range
```
start_year = '1920'
end_year = '1930'
df_filtered = df[(df['start_date'] >= start_year) & (df['end_date'] <= end_year)]
df_filtered
```
## N-gram frequencies in file titles
```
# Import TextBlob for text analysis
from textblob import TextBlob
import nltk
stopwords = nltk.corpus.stopwords.words('english')
# Combine all of the file titles into a single string
title_text = df['title'].str.lower().str.cat(sep=' ')
blob = TextBlob(title_text)
words = [[word, count] for word, count in blob.lower().word_counts.items() if word not in stopwords]
word_counts = pd.DataFrame(words).rename({0: 'word', 1: 'count'}, axis=1).sort_values(by='count', ascending=False)
word_counts[:25].style.format({'count': '{:,}'}).bar(subset=['count'], color='#d65f5f').set_properties(subset=['count'], **{'width': '300px'})
def get_ngram_counts(text, size):
blob = TextBlob(text)
# Extract n-grams as WordLists, then convert to a list of strings
ngrams = [' '.join(ngram).lower() for ngram in blob.lower().ngrams(size)]
# Convert to dataframe then count values and rename columns
ngram_counts = pd.DataFrame(ngrams)[0].value_counts().rename_axis('ngram').reset_index(name='count')
return ngram_counts
def display_top_ngrams(text, size):
ngram_counts = get_ngram_counts(text, size)
# Display top 25 results as a bar chart
display(ngram_counts[:25].style.format({'count': '{:,}'}).bar(subset=['count'], color='#d65f5f').set_properties(subset=['count'], **{'width': '300px'}))
display_top_ngrams(title_text, 2)
display_top_ngrams(title_text, 4)
```
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
cities_df = pd.read_csv('../output_data/cities.csv')
cities_df.dropna(inplace = True)
cities_df.head()
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
gmaps.configure(api_key=g_key)
locations = cities_df[["Lat", "Lng"]]
humidity = cities_df["Humidity"]
fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(locations, weights=humidity,
dissipating=False, max_intensity=150,
point_radius=3)
fig.add_layer(heat_layer)
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
```
ideal_df = cities_df[cities_df["Max Temp"].lt(80) &
cities_df["Max Temp"].gt(70) &
cities_df["Wind Speed"].lt(10) &
cities_df["Cloudiness"].eq(0) &
cities_df["Humidity"].lt(80) &
cities_df["Humidity"].gt(30)]
ideal_df
```
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
hotel_df = ideal_df[["City", "Lat", "Lng", "Country"]].reset_index(drop=True)
hotel_df["Hotel Name"] = ""
params = {
"radius": 5000,
"types": "lodging",
"keyword": "hotel",
"key": g_key
}
for index, row in hotel_df.iterrows():
lat = row["Lat"]
lng = row["Lng"]
params["location"] = f"{lat},{lng}"
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
name_address = requests.get(base_url, params=params).json()
try:
hotel_df.loc[index, "Hotel Name"] = name_address["results"][0]["name"]
except (KeyError, IndexError):
hotel_df.loc[index, "Hotel Name"] = "NA"
print("Couldn't find a hotel here at " + row["City"] + ", " + row["Country"])
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
hover_info = [f"{row['City']}, {row['Country']}" for index,row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
markers = gmaps.marker_layer(
locations,
hover_text=hover_info,
info_box_content=hotel_info)
# Add marker layer ontop of heat map
fig.add_layer(markers)
fig.add_layer(heat_layer)
# Display figure
fig
```
# SLU07 - Regression with Linear Regression: Example notebook
# 1 - Writing linear models
In this section you have a few examples on how to implement simple and multiple linear models.
Let's start by implementing the following:
$$y = 1.25 + 5x$$
```
def first_linear_model(x):
"""
Implements y = 1.25 + 5*x
Args:
x : float - input of model
Returns:
y : float - output of linear model
"""
y = 1.25 + 5 * x
return y
first_linear_model(1)
```
You should be thinking that this is too easy. So let's generalize it a bit. We'll write the code for the next equation:
$$ y = a + bx $$
```
def second_linear_model(x, a, b):
"""
Implements y = a + b * x
Args:
x : float - input of model
a : float - intercept of model
b : float - coefficient of model
Returns:
y : float - output of linear model
"""
y = a + b * x
return y
second_linear_model(1, 1.25, 5)
```
Still very simple, right? Now what if we want to have a linear model with multiple variables, such as this one:
$$ y = a + bx_1 + cx_2 + dx_3 $$
You can follow the same logic and just write the following:
```
def first_multiple_linear_model(x_1, x_2, x_3, a, b, c, d):
"""
Implements y = a + b * x_1 + c * x_2 + d * x_3
Args:
x_1 : float - first input of model
x_2 : float - second input of model
x_3 : float - third input of model
a : float - intercept of model
b : float - first coefficient of model
c : float - second coefficient of model
d : float - third coefficient of model
Returns:
y : float - output of linear model
"""
y = a + b * x_1 + c * x_2 + d * x_3
return y
first_multiple_linear_model(1.0, 1.0, 1.0, .5, .2, .1, .4)
```
However, you should already be seeing the problem. The bigger our model gets, the more variables we need to consider, so this is clearly not efficient. Now let's write the generic form for a linear model:
$$ y = w_0 + \sum_{i=1}^{N} w_i x_i$$
And we will implement the inputs and outputs of the model as vectors:
```
def second_multiple_linear_model(x, w):
"""
Implements y = w_0 + sum(x_i*w_i) (where i=1...N)
Args:
x : vector of input features with size N-1
w : vector of model weights with size N
Returns:
y : float - output of linear model
"""
w_0 = w[0]
y = w_0
for i in range(1, len(x)+1):
y += x[i-1]*w[i]
return y
second_multiple_linear_model([1.0, 1.0, 1.0], [.5, .2, .1, .4])
```
You could go even one step further and use numpy to vectorize these computations. You can represent both vectors as numpy arrays and just do the same calculation:
```
import numpy as np
def vectorized_multiple_linear_model(x, w):
"""
Implements y = w_0 + sum(x_i*w_i) (where i=1...N)
Args:
x : numpy array with shape (N-1, ) of inputs
w : numpy array with shape (N, ) of model weights
Returns:
y : float - output of linear model
"""
    y = w[0] + np.sum(x * w[1:])
    return y
vectorized_multiple_linear_model(np.array([1.0, 1.0, 1.0]), np.array([.5, .2, .1, .4]))
```
Read more about numpy arrays and their manipulation at the end of this example notebook. This will be necessary, as you will be asked to implement these types of models in a way that lets them compute several samples with many features at once.
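For instance, one possible sketch of computing the outputs for several samples at once (this is only an illustration of the idea, not the required interface for the exercises):
```
import numpy as np

def batched_linear_model(X, w):
    """
    Computes y = w_0 + sum(x_i*w_i) for every row of X at once
    Args:
        X : numpy array with shape (n_samples, n_features) of inputs
        w : numpy array with shape (n_features + 1, ) of model weights
    Returns:
        y : numpy array with shape (n_samples, ) of model outputs
    """
    return w[0] + X @ w[1:]

batched_linear_model(np.array([[1.0, 1.0, 1.0], [2.0, 0.5, 1.0]]), np.array([.5, .2, .1, .4]))
```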
<br>
<br>
# 2 - Using sklearn's LinearRegression
The following cells show you how to use the LinearRegression estimator of the scikit-learn library. We'll start by creating some fake data to use in these examples:
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
X = np.arange(-10, 10) + np.random.rand(20)
y = 1.12 + .75 * X + 2. * np.random.rand(20)
plt.xlim((-10, 10))
plt.ylim((-20, 20))
plt.plot(X, y, 'b.')
```
## 2.1 Training the model
We will now use the base data created and show you how to fit the scikitlearn LinearRegression model with the data:
```
from sklearn.linear_model import LinearRegression
# Since our numpy array has only 1 dimension, we need reshape
# it to become a column vector - which corresponds to 1 feature
# and N samples
X = X.reshape(-1, 1)
lr = LinearRegression()
lr.fit(X, y)
```
## 2.2 Coefficients and Intercept
You can get both the coefficients and the intercept from this model:
```
print('Coefficients: {}'.format(lr.coef_))
print('Intercept: {}'.format(lr.intercept_))
```
## 2.3 Making predictions
We can then make prediction with our model and see how they compare with the actual samples:
```
y_pred = lr.predict(X)
plt.xlim((-10, 10))
plt.ylim((-20, 20))
plt.plot(X, y, 'b.')
plt.plot(X, y_pred, 'r-')
```
## 2.4 Evaluating the model
We can also extract the $R^2$ score of this model:
```
print('R² score: %f' % lr.score(X, y))
```
<br>
<br>
# Bonus examples: Numpy utilities
With linear models, we normally have data that can be represented by either vectors or matrices. Even though you don't need advanced algebra knowledge to implement and understand the models presented, it is useful to understand its basics, since most of the computational part is typically implemented from these concepts.
Numpy is a powerful library that allows us to represent our data easily in this format, and already implements a lot of functions to then manipulate or do calculations over our data. In this section we present the basic functions that you should know and will use the most to implement the basic models:
```
import numpy as np
import pandas as pd
```
## a) Pandas to numpy and back
Pandas stores our data in dataframes and series, which are very useful for visualization and even for some specific data operations we want to perform. However, for many algorithms that involve combinations of numeric data, the standard way to implement them is with numpy. Start by seeing how to convert from pandas to numpy and back:
```
df = pd.read_csv('data/polynomial.csv')
df.head()
```
### a.1) Pandas to numpy
Let's transform our first column into a numpy vector. There are two ways of doing this, either by using the `.values` attribute:
```
np_array = df['x'].values
print(np_array[:10])
```
Or by calling the method `.to_numpy()` :
```
np_array = df['x'].to_numpy()
print(np_array[:10])
```
You can also apply this to the full table:
```
np_array = df.values
print(np_array[:5, :])
np_array = df.to_numpy()
print(np_array[:5, :])
```
### a.2) Numpy to pandas
Let's start by defining an array and converting it to a pandas series:
```
np_array = np.array([4., .1, 1., .23, 3.])
pd_series = pd.Series(np_array)
print(pd_series)
```
We can also create several series and concatenate them to create a dataframe:
```
np_array = np.array([4., .1, 1., .23, 3.])
pd_series_1 = pd.Series(np_array, name='A')
pd_series_2 = pd.Series(2 * np_array, name='B')
pd_dataframe = pd.concat((pd_series_1, pd_series_2), axis=1)
pd_dataframe.head()
```
We can also directly convert to a dataframe:
```
np_array = np.array([[1, 2, 3], [4, 5, 6]])
pd_dataframe = pd.DataFrame(np_array)
pd_dataframe.head()
```
However, we might want more detailed names and specific indices. Some ways of achieving this follow:
```
data = np.array([['','Col1','Col2'],
['Row1',1,2],
['Row2',3,4]])
pd_dataframe = pd.DataFrame(data=data[1:,1:], index=data[1:,0], columns=data[0,1:])
pd_dataframe.head()
pd_dataframe = pd.DataFrame(np.array([[4,5,6,7], [1,2,3,4]]), index=range(0, 2), columns=['A', 'B', 'C', 'D'])
pd_dataframe.head()
my_dict = {'A': np.array(['1', '3']), 'B': np.array(['1', '2']), 'C': np.array(['2', '4'])}
pd_dataframe = pd.DataFrame(my_dict)
pd_dataframe.head()
```
## b) Vector and Matrix initialization and shaping
When working with vectors and matrices, we need to be aware of the dimensions of these objects, and how they affect the operations we can perform over them. Numpy allows you to access these dimensions through the shape of the object:
```
v1 = np.array([ .1, 1., 2.])
print('1-d Array: {}'.format(v1))
print('Shape: {}'.format(v1.shape))
v2 = np.array([[ .1, 1., 2.]])
print('\n')
print('2-d Row Array: {}'.format(v2))
print('Shape: {}'.format(v2.shape))
v3 = np.array([[ .1], [1.], [2.]])
print('\n')
print('2-d Column Array:\n {}'.format(v3))
print('Shape: {}'.format(v3.shape))
m1 = np.array([[ .1, 3., 4., 1.], [1., .3, .1, .5], [2.,.7, 3.8, .1]])
print('\n')
print('2-d matrix:\n {}'.format(m1))
print('Shape: {}'.format(m1.shape))
```
Another important functionality provided is the possibility of reshaping these objects. For example, we can turn a 1-d array into a row vector:
```
v1 = np.array([ .1, 1., 2.])
v1_reshaped = v1.reshape((1, -1))
print('Old 1-d Array reshaped to row: {}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
```
Or we can reshape it into a column vector:
```
v1 = np.array([ .1, 1., 2.])
v1_reshaped = v1.reshape((-1, 1))
print('Old 1-d Array reshaped to column: \n{}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
```
We can also create specific vectors of 1s, 0s or random numbers with specific shapes from the start. See how to use each in the cells that follow:
```
custom_shape = (3, )
v1_ones = np.ones(custom_shape)
print('1-D Vector of ones: \n{}'.format(v1_ones))
print('Shape: {}'.format(v1_ones.shape))
custom_shape = (5, 1)
v1_zeros = np.zeros(custom_shape)
print('2-D vector of zeros: \n{}'.format(v1_zeros))
print('Shape: {}'.format(v1_zeros.shape))
custom_shape = (5, 3)
v1_rand = np.random.rand(custom_shape[0], custom_shape[1])
print('2-D Matrix of random numbers: \n{}'.format(v1_rand))
print('Shape: {}'.format(v1_rand.shape))
```
## c) Vector and Matrix Concatenation
In this section, you will learn how to concatenate 2 vectors, a matrix and a vector, or 2 matrices.
### c.1) Vector - Vector
Let's start by defining 2 vectors:
```
v1 = np.array([ .1, 1., 2.])
v2 = np.array([5.1, .3, .41, 3. ])
print('1st array: {}'.format(v1))
print('Shape: {}'.format(v1.shape))
print('2nd array: {}'.format(v2))
print('Shape: {}'.format(v2.shape))
```
Since vectors only have one dimension with a given size (notice the shape with only one element) we can only concatenate in this dimension, leading to a longer vector:
```
vconcat = np.concatenate((v1, v2))
print('Concatenated vector: {}'.format(vconcat))
print('Shape: {}'.format(vconcat.shape))
```
Concatenating vectors is very easy, and since we can only concatenate them in their one dimension, the sizes do not have to match. Now let's move on to a more complex case.
### c.2) Matrix - row vector
When concatenating matrices and vectors we have to take into account their dimensions.
```
v1 = np.array([ .1, 1., 2., 3.])
m1 = np.array([[5.1, .3, .41, 3. ], [5.1, .3, .41, 3. ]])
print('Array: {}'.format(v1))
print('Shape: {}'.format(v1.shape))
print('Matrix: \n{}'.format(m1))
print('Shape: {}'.format(m1.shape))
```
The first thing you need to know is that whatever numpy objects you are trying to concatenate need to have the same number of dimensions. Run the code below to verify that you can not concatenate the vector and matrix directly:
```
try:
vconcat = np.concatenate((v1, m1))
except Exception as e:
print('Concatenation raised the following error: {}'.format(e))
```
So how can we do matrix-vector concatenation?
It is actually quite simple. We'll use the reshape functionality you've seen before to add a dimension to the vector.
```
v1_reshaped = v1.reshape((1, v1.shape[0]))
m1 = np.array([[5.1, .3, .41, 3. ], [5.1, .3, .41, 3. ]])
print('Array: {}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
print('Matrix: \n{}'.format(m1))
print('Shape: {}'.format(m1.shape))
```
We've reshaped our vector into a 1-row matrix. Now we can try to perform the same concatenation:
```
vconcat = np.concatenate((v1_reshaped, m1))
print('Concatenated vector: {}'.format(vconcat))
print('Shape: {}'.format(vconcat.shape))
```
### c.3) Matrix - column vector
We can also do this procedure with a column vector:
```
v1 = np.array([ .1, 1.])
v1_reshaped = v1.reshape((v1.shape[0], 1))
m1 = np.array([[5.1, .3, .41, 3. ], [5.1, .3, .41, 3. ]])
print('Array: \n{}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
print('Matrix: \n{}'.format(m1))
print('Shape: {}'.format(m1.shape))
vconcat = np.concatenate((v1_reshaped, m1), axis=1)
print('Concatenated vector: {}'.format(vconcat))
print('Shape: {}'.format(vconcat.shape))
```
There's yet another restriction when concatenating vectors and matrices: every dimension except the one we are concatenating along has to have the same size.
See what would happen if we tried to concatenate a smaller (reshaped) vector with the same matrix:
```
v2 = np.array([ .1, 1.])
v2_reshaped = v2.reshape((1, v2.shape[0])) # Row vector as matrix
try:
    vconcat = np.concatenate((v2_reshaped, m1))
except Exception as e:
print('Concatenation raised the following error: {}'.format(e))
```
### c.4) Matrix - Matrix
This is just an extension of the previous case, since what we did before was transforming the vector into a matrix where the size of one of the dimensions is 1. So all the same restrictions apply: the arrays must have compatible dimensions. Run the following examples to see this:
```
m1 = np.array([[5.1, .3, .41, 3. ], [5.1, .3, .41, 3. ]])
m2 = np.array([[1., 2., 0., 3. ], [.1, .13, 1., 3. ], [.1, 2., .5, .3 ]])
m3 = np.array([[1., 0. ], [0., 1. ]])
print('Matrix 1: \n{}'.format(m1))
print('Shape: {}'.format(m1.shape))
print('Matrix 2: \n{}'.format(m2))
print('Shape: {}'.format(m2.shape))
print('Matrix 3: \n{}'.format(m3))
print('Shape: {}'.format(m3.shape))
```
Concatenate m1 and m2 at row level (stack the two matrices):
```
mconcat = np.concatenate((m1, m2))
print('Concatenated matrix:\n {}'.format(mconcat))
print('Shape: {}'.format(mconcat.shape))
```
Concatenating m1 and m2 at column level (joining the two matrices side by side) should produce an error, since they have different numbers of rows:
```
try:
vconcat = np.concatenate((m1, m2), axis=1)
except Exception as e:
print('Concatenation raised the following error: {}'.format(e))
```
Concatenate m1 and m3 at column level (joining the two matrices side by side):
```
mconcat = np.concatenate((m1, m3), axis=1)
print('Concatenated matrix:\n {}'.format(mconcat))
print('Shape: {}'.format(mconcat.shape))
```
Concatenating m1 and m3 at row level (stacking the two matrices) should produce an error, since they have different numbers of columns:
```
try:
vconcat = np.concatenate((m1, m3))
except Exception as e:
print('Concatenation raised the following error: {}'.format(e))
```
## d) Single matrix operations
In this section we describe a few operations that can be done over matrices:
### d.1) Transpose
A very common operation is the transpose. If you are used to seeing matrix notation, you should know what this operation is. Take a matrix with 2 dimensions:
$$ X = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix} $$
Transposing the matrix means flipping its entries across the main diagonal:
$$ X^T = \begin{bmatrix} a & c \\ b & d \\ \end{bmatrix} $$
This means that the rows of X become its columns and vice-versa. You can obtain the transpose of a matrix by using either `.T` on a matrix or calling `numpy.transpose`:
```
m1 = np.array([[ .1, 1., 2.], [ 3., .24, 4.], [ 6., 2., 5.]])
print('Initial matrix: \n{}'.format(m1))
m1_transposed = m1.transpose()
print('Transposed matrix with `transpose` \n{}'.format(m1_transposed))
m1_transposed = m1.T
print('Transposed matrix with `T` \n{}'.format(m1_transposed))
```
A few examples of non-square matrices. In these, you'll see that the shape (a, b) gets inverted to (b, a):
```
m1 = np.array([[ .1, 1., 2., 5.], [ 3., .24, 4., .6]])
print('Initial matrix: \n{}'.format(m1))
m1_transposed = m1.T
print('Transposed matrix: \n{}'.format(m1_transposed))
m1 = np.array([[ .1, 1.], [2., 5.], [ 3., .24], [4., .6]])
print('Initial matrix: \n{}'.format(m1))
m1_transposed = m1.T
print('Transposed matrix: \n{}'.format(m1_transposed))
```
For vectors represented as matrices, this means transforming from a row vector (1, N) to a column vector (N, 1) or vice-versa:
```
v1 = np.array([ .1, 1., 2.])
v1_reshaped = v1.reshape((1, -1))
print('Row vector as 2-d array: {}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
v1_transposed = v1_reshaped.T
print('Transposed (column vector as 2-d array): \n{}'.format(v1_transposed))
print('Shape: {}'.format(v1_transposed.shape))
v1 = np.array([ 3., .23, 2., .6])
v1_reshaped = v1.reshape((-1, 1))
print('Column vector as 2-d array: \n{}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
v1_transposed = v1_reshaped.T
print('Transposed (row vector as 2-d array): {}'.format(v1_transposed))
print('Shape: {}'.format(v1_transposed.shape))
```
### d.2) Statistics operators
Numpy also allows us to perform several operations over the rows and columns of a matrix, such as:
* Sum
* Mean
* Max
* Min
* ...
The most important thing to take into account when using these is to know exactly in which direction we are performing the operations. We can perform, for example, a `max` operation over the whole matrix, obtaining the maximum of all the matrix's values. Or we might want this value for each row, or for each column. Check the following examples:
```
m1 = np.array([[ .1, 1.], [2., 5.], [ 3., .24], [4., .6]])
print('Initial matrix: \n{}'.format(m1))
```
Operating over all of the matrix's values:
```
print('Total sum of matrix elements: {}'.format(m1.sum()))
print('Maximum of all matrix elements: {}'.format(m1.max()))
print('Minimum of all matrix elements: {}'.format(m1.min()))
print('Mean of all matrix elements: {}'.format(m1.mean()))
```
Operating across rows - produces a row with the sum/max/min/mean for each column:
```
print('Sum of each column: {}'.format(m1.sum(axis=0)))
print('Maximum of each column: {}'.format(m1.max(axis=0)))
print('Minimum of each column: {}'.format(m1.min(axis=0)))
print('Mean of each column: {}'.format(m1.mean(axis=0)))
```
Operating across columns - produces a column with the sum/max/min/mean for each row:
```
print('Sum of each row: {}'.format(m1.sum(axis=1)))
print('Maximum of each row: {}'.format(m1.max(axis=1)))
print('Minimum of each row: {}'.format(m1.min(axis=1)))
print('Mean of each row: {}'.format(m1.mean(axis=1)))
```
As an example, imagine that you have a matrix of shape (n_samples, n_features), where each row represents all the features for one sample. Then, to average over the samples, we do:
```
m1 = np.array([[ .1, 1.], [2., 5.], [ 3., .24], [4., .6]])
print('Initial matrix: \n{}'.format(m1))
print('\n')
print('Sample 1: {}'.format(m1[0, :]))
print('Sample 2: {}'.format(m1[1, :]))
print('Sample 3: {}'.format(m1[2, :]))
print('Sample 4: {}'.format(m1[3, :]))
print('\n')
print('Average over samples: \n{}'.format(m1.mean(axis=0)))
```
Other statistical functions behave in a similar manner, so it is important to understand how the `axis` argument selects the direction in which these operations are applied.
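As a quick check, here is a small sketch (reusing `m1` from the previous cell) with the standard `std` and `argmax` reducers, which follow the same axis convention:
```
print('Std of each column: {}'.format(m1.std(axis=0)))
print('Std of each row: {}'.format(m1.std(axis=1)))
print('Row index of the maximum in each column: {}'.format(m1.argmax(axis=0)))
```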
## e) Multiple matrix operations
### e.1) Element wise operations
Several of the available operations work at the element level, that is, if we have two matrices A and B:
$$ A = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix} $$
and
$$ B = \begin{bmatrix} e & f \\ g & h \\ \end{bmatrix} $$
an element-wise operation produces a matrix:
$$ Op(A, B) = \begin{bmatrix} Op(a,e) & Op(b,f) \\ Op(c,g) & Op(d,h) \\ \end{bmatrix} $$
You can perform sum and difference, but also element-wise multiplication and division. These are implemented with the regular operators `+`, `-`, `*`, `/`. Check out the examples below:
```
m1 = np.array([[ .1, 1., 2., 5.], [ 3., .24, 4., .6]])
m2 = np.array([[ .1, 4., .25, .1], [ 2., 1.5, .42, -1.]])
print('Matrix 1: \n{}'.format(m1))
print('Matrix 2: \n{}'.format(m2))
print('\n')
print('Sum: \n{}'.format(m1 + m2))
print('\n')
print('Difference: \n{}'.format(m1 - m2))
print('\n')
print('Multiplication: \n{}'.format(m1*m2))
print('\n')
print('Division: \n{}'.format(m1/m2))
```
For these operations your matrices should have the same shape. The exception is when one of the operands can be [broadcast](https://numpy.org/doc/stable/user/basics.broadcasting.html) over the other; we won't cover broadcasting in depth in these examples.
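For the curious, here is a minimal sketch of broadcasting (not part of the original examples): a (2, 4) matrix combined element-wise with a 4-element vector, which NumPy stretches across both rows.
```
m = np.array([[ .1, 1., 2., 5.], [ 3., .24, 4., .6]])
v = np.array([10., 20., 30., 40.])
print('Broadcasted sum: \n{}'.format(m + v))      # v is added to every row of m
print('Broadcasted product: \n{}'.format(m * v))  # v multiplies every row of m
```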
### e.2) Matrix multiplication
Although you've seen how to perform element-wise multiplication with the basic operators, one of the most common matrix operations is matrix multiplication, where the output is not an element-wise combination of the inputs, but a linear combination between the rows of the first matrix and the columns of the second.
In other words, element (i, j) of the resulting matrix is the dot product between row i of the first matrix and column j of the second:

Where the dot product represented breaks down to:
$$ 58 = 1 \times 7 + 2 \times 9 + 3 \times 11 $$
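To tie the picture to code, here is a quick check of that entry, assuming the two matrices shown in the figure are `[[1, 2, 3], [4, 5, 6]]` and `[[7, 8], [9, 10], [11, 12]]`:
```
a = np.array([[1, 2, 3], [4, 5, 6]])
b = np.array([[7, 8], [9, 10], [11, 12]])
# Element (0, 0) of the product: dot product of row 0 of a and column 0 of b
print('Manual dot product: {}'.format((a[0, :] * b[:, 0]).sum()))  # 1*7 + 2*9 + 3*11 = 58
```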
Numpy already provides this function, so check out the following examples:
```
m1 = np.array([[ .1, 1., 2., 5.], [ 3., .24, 4., .6]])
m2 = np.array([[ .1, 4.], [.25, .1], [ 2., 1.5], [.42, -1.]])
print('Matrix 1: \n{}'.format(m1))
print('Matrix 2: \n{}'.format(m2))
print('\n')
print('Matrix multiplication: \n{}'.format(np.matmul(m1, m2)))
m1 = np.array([[ .1, 4.], [.25, .1], [ 2., 1.5], [.42, -1.]])
m2 = np.array([[ .1, 1., 2.], [ 3., .24, 4.]])
print('Matrix 1: \n{}'.format(m1))
print('Matrix 2: \n{}'.format(m2))
print('\n')
print('Matrix multiplication: \n{}'.format(np.matmul(m1, m2)))
```
Notice that in both operations the matrix multiplication of shapes `(k, l)` and `(m, n)` yields a matrix of dimensions `(k, n)`. Additionally, for this operation to be possible, the inner dimensions need to match, that is, `l == m`. See what happens if we try to multiply matrices with incompatible dimensions:
```
m1 = np.array([[ .1, 4., 3.], [.25, .1, 1.], [ 2., 1.5, .5], [.42, -1., 4.3]])
m2 = np.array([[ .1, 1., 2.], [ 3., .24, 4.]])
print('Matrix 1: \n{}'.format(m1))
print('Shape: {}'.format(m1.shape))
print('Matrix 2: \n{}'.format(m2))
print('Shape: {}'.format(m2.shape))
print('\n')
try:
m3 = np.matmul(m1, m2)
except Exception as e:
print('Matrix multiplication raised the following error: {}'.format(e))
```
```
#importing libraries
import pandas as pd
import boto3
import json
import configparser
from botocore.exceptions import ClientError
import psycopg2
def config_parse_file():
"""
Parse the dwh.cfg configuration file
:return:
"""
global KEY, SECRET, DWH_CLUSTER_TYPE, DWH_NUM_NODES, \
DWH_NODE_TYPE, DWH_CLUSTER_IDENTIFIER, DWH_DB, \
DWH_DB_USER, DWH_DB_PASSWORD, DWH_PORT, DWH_IAM_ROLE_NAME
print("Parsing the config file...")
config = configparser.ConfigParser()
with open('dwh.cfg') as configfile:
config = configparser.ConfigParser()
config.read_file(open('dwh.cfg'))
KEY = config.get('AWS','KEY')
SECRET = config.get('AWS','SECRET')
DWH_CLUSTER_TYPE = config.get("DWH","DWH_CLUSTER_TYPE")
DWH_NUM_NODES = config.get("DWH","DWH_NUM_NODES")
DWH_NODE_TYPE = config.get("DWH","DWH_NODE_TYPE")
DWH_CLUSTER_IDENTIFIER = config.get("DWH","DWH_CLUSTER_IDENTIFIER")
DWH_DB = config.get("CLUSTER","DWH_DB")
DWH_DB_USER = config.get("CLUSTER","DWH_DB_USER")
DWH_DB_PASSWORD = config.get("CLUSTER","DWH_DB_PASSWORD")
DWH_PORT = config.get("CLUSTER","DWH_PORT")
DWH_IAM_ROLE_NAME = config.get("DWH", "DWH_IAM_ROLE_NAME")
#Function for creating iam_role
def create_iam_role(iam):
"""
Create the AWS IAM role
:param iam:
:return:
"""
global DWH_IAM_ROLE_NAME
dwhRole = None
try:
print('1.1 Creating a new IAM Role')
dwhRole = iam.create_role(
Path='/',
RoleName=DWH_IAM_ROLE_NAME,
Description="Allows Redshift clusters to call AWS services on your behalf.",
AssumeRolePolicyDocument=json.dumps(
{'Statement': [{'Action': 'sts:AssumeRole',
'Effect': 'Allow',
'Principal': {'Service': 'redshift.amazonaws.com'}}],
'Version': '2012-10-17'})
)
except Exception as e:
print(e)
dwhRole = iam.get_role(RoleName=DWH_IAM_ROLE_NAME)
return dwhRole
def attach_iam_role_policy(iam):
"""
Attach the AmazonS3ReadOnlyAccess role policy to the created IAM
:param iam:
:return:
"""
global DWH_IAM_ROLE_NAME
print('1.2 Attaching Policy')
return iam.attach_role_policy(RoleName=DWH_IAM_ROLE_NAME, PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")['ResponseMetadata']['HTTPStatusCode'] == 200
def get_iam_role_arn(iam):
"""
Get the IAM role ARN string
:param iam: The IAM resource client
:return:string
"""
global DWH_IAM_ROLE_NAME
return iam.get_role(RoleName=DWH_IAM_ROLE_NAME)['Role']['Arn']
#Function to create cluster
def create_cluster(redshift, roleArn):
"""
Start the Redshift cluster creation
:param redshift: The redshift resource client
:param roleArn: The created role ARN
:return:
"""
global DWH_CLUSTER_TYPE, DWH_NODE_TYPE, DWH_NUM_NODES, DWH_DB, DWH_CLUSTER_IDENTIFIER, DWH_DB_USER, DWH_DB_PASSWORD
try:
response = redshift.create_cluster(
#HW
ClusterType=DWH_CLUSTER_TYPE,
NodeType=DWH_NODE_TYPE,
NumberOfNodes=int(DWH_NUM_NODES),
#Identifiers & Credentials
DBName=DWH_DB,
ClusterIdentifier=DWH_CLUSTER_IDENTIFIER,
MasterUsername=DWH_DB_USER,
MasterUserPassword=DWH_DB_PASSWORD,
#Roles (for s3 access)
IamRoles=[roleArn]
)
print("Redshift cluster creation http response status code: ")
print(response['ResponseMetadata']['HTTPStatusCode'])
return response['ResponseMetadata']['HTTPStatusCode'] == 200
except Exception as e:
print(e)
return False
#Adding details to config file
def config_persist_cluster_infos(redshift):
"""
Write back to the dwh.cfg configuration file the cluster endpoint and IAM ARN
:param redshift: The redshift resource client
:return:
"""
global DWH_CLUSTER_IDENTIFIER
print("Writing the cluster address and IamRoleArn to the config file...")
cluster_props = redshift.describe_clusters(ClusterIdentifier=DWH_CLUSTER_IDENTIFIER)['Clusters'][0]
config = configparser.ConfigParser()
with open('dwh.cfg') as configfile:
config.read_file(configfile)
config.set("CLUSTER", "HOST", cluster_props['Endpoint']['Address'])
config.set("IAM_ROLE", "ARN", cluster_props['IamRoles'][0]['IamRoleArn'])
with open('dwh.cfg', 'w+') as configfile:
config.write(configfile)
config_parse_file()
#Function to retrieve Redshift cluster properties
def prettyRedshiftProps(props):
'''
Retrieve Redshift clusters properties
'''
pd.set_option('display.max_colwidth', -1)
keysToShow = ["ClusterIdentifier", "NodeType", "ClusterStatus", "MasterUsername", "DBName", "Endpoint", "NumberOfNodes", 'VpcId']
x = [(k, v) for k,v in props.items() if k in keysToShow]
return pd.DataFrame(data=x, columns=["Key", "Value"])
#Function to get the cluster status
def get_cluster_props(redshift):
"""
Retrieves the Redshift cluster status
:param redshift: The Redshift resource client
:return: The cluster status
"""
global DWH_CLUSTER_IDENTIFIER
myClusterProps = redshift.describe_clusters(ClusterIdentifier=DWH_CLUSTER_IDENTIFIER)['Clusters'][0]
cluster_status = myClusterProps['ClusterStatus']
return cluster_status.lower()
#Check whether the cluster has become available
def check_cluster_creation(redshift):
"""
Check if the cluster status is available, if it is returns True. Otherwise, false.
:param redshift: The Redshift client resource
:return:bool
"""
if get_cluster_props(redshift) == 'available':
return True
return False
#Function to open an incoming TCP port to access the cluster endpoint
def aws_open_redshift_port(ec2, redshift):
"""
Opens the Redshift port on the VPC security group.
:param ec2: The EC2 client resource
:param redshift: The Redshift client resource
:return:None
"""
global DWH_CLUSTER_IDENTIFIER, DWH_PORT
cluster_props = redshift.describe_clusters(ClusterIdentifier=DWH_CLUSTER_IDENTIFIER)['Clusters'][0]
try:
vpc = ec2.Vpc(id=cluster_props['VpcId'])
all_security_groups = list(vpc.security_groups.all())
print(all_security_groups)
defaultSg = all_security_groups[1]
print(defaultSg)
defaultSg.authorize_ingress(
GroupName=defaultSg.group_name,
CidrIp='0.0.0.0/0',
IpProtocol='TCP',
FromPort=int(DWH_PORT),
ToPort=int(DWH_PORT)
)
except Exception as e:
print(e)
## Create clients for IAM, EC2, S3 and Redshift
def aws_resource(name, region):
"""
Creates an AWS client resource
:param name: The name of the resource
:param region: The region of the resource
:return:
"""
global KEY, SECRET
return boto3.resource(name, region_name=region, aws_access_key_id=KEY, aws_secret_access_key=SECRET)
def aws_client(service, region):
"""
Creates an AWS client
:param service: The service
:param region: The region of the service
:return:
"""
global KEY, SECRET
return boto3.client(service, aws_access_key_id=KEY, aws_secret_access_key=SECRET, region_name=region)
#delete resources
def delete_cluster_resources(redshift):
"""
Destroy the Redshift cluster (request deletion)
:param redshift: The Redshift client resource
:return:None
"""
global DWH_CLUSTER_IDENTIFIER
redshift.delete_cluster( ClusterIdentifier=DWH_CLUSTER_IDENTIFIER, SkipFinalClusterSnapshot=True)
def delete_iam_resource(iam):
iam.detach_role_policy(RoleName=DWH_IAM_ROLE_NAME, PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
iam.delete_role(RoleName=DWH_IAM_ROLE_NAME)
#Main Function to start the process
def main():
config_parse_file()
# ec2 = aws_resource('ec2', 'us-east-2')
# s3 = aws_resource('s3', 'us-west-2')
iam = aws_client('iam', "us-east-1")
redshift = aws_client('redshift', "us-east-1")
create_iam_role(iam)
attach_iam_role_policy(iam)
roleArn = get_iam_role_arn(iam)
clusterCreationStarted = create_cluster(redshift, roleArn)
if clusterCreationStarted:
print("The cluster is being created.")
# if __name__ == '__main__':
# main()
```
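The `main()` call above is left commented out. A minimal usage sketch (assuming valid AWS credentials and a complete `dwh.cfg`) that runs the setup, polls until the cluster is available, and then persists its endpoint and opens the port could look like this:
```
import time

config_parse_file()
iam = aws_client('iam', 'us-east-1')
redshift = aws_client('redshift', 'us-east-1')
ec2 = aws_resource('ec2', 'us-east-1')

create_iam_role(iam)
attach_iam_role_policy(iam)
roleArn = get_iam_role_arn(iam)

if create_cluster(redshift, roleArn):
    # Poll every 30 seconds until the cluster reports 'available'
    while not check_cluster_creation(redshift):
        time.sleep(30)
    config_persist_cluster_infos(redshift)
    aws_open_redshift_port(ec2, redshift)
```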
```
import os, sys
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
sys.path.append('../')
import argparse, json
from tqdm import tqdm_notebook as tqdm
import os.path as osp
from data.pointcloud_dataset import load_one_class_under_folder
from utils.dirs import mkdir_and_rename
from utils.tf import reset_tf_graph
opt = {
'data': {
'data_root':
'/orion/u/jiangthu/projects/latent_3d_points/data/shape_net_core_uniform_samples_2048',
'class_name': 'airplane',
'n_thread': 20
},
'model': {
'type': 'wgan',
'num_points': 2048,
'noise_dim': 128,
'noise_params': {
'mu': 0,
'sigma': 0.2
}
},
'train': {
'batch_size': 50,
'learning_rate': 0.0001,
'beta': 0.5,
'z_rotate': False,
'saver_step': 100
},
'path': {
'train_root': './experiments',
'experiment_name': 'single_class_gan_chair_noise128'
}
}
train_dir = osp.join(opt['path']['train_root'], opt['path']['experiment_name'])
train_opt = opt['train']
import numpy as np
import tensorflow as tf
from utils.tf import leaky_relu
from utils.tf import expand_scope_by_name
from tflearn.layers.normalization import batch_normalization
from tflearn.layers.core import fully_connected, dropout
from tflearn.layers.conv import conv_1d
from utils.tf import expand_scope_by_name, replicate_parameter_for_all_layers
import tflearn
def encoder_with_convs_and_symmetry(in_signal,
init_list,
n_filters=[64, 128, 256, 1024],
filter_sizes=[1],
strides=[1],
non_linearity=tf.nn.relu,
weight_decay=0.001,
symmetry=tf.reduce_max,
regularizer=None,
scope=None,
reuse=False,
padding='same',
verbose=False,
conv_op=conv_1d):
'''An Encoder (recognition network), which maps inputs onto a latent space.
'''
if verbose:
print('Building Encoder')
n_layers = len(n_filters)
filter_sizes = replicate_parameter_for_all_layers(filter_sizes, n_layers)
strides = replicate_parameter_for_all_layers(strides, n_layers)
if n_layers < 2:
raise ValueError('More than 1 layers are expected.')
for i in range(n_layers):
if i == 0:
layer = in_signal
name = 'encoder_conv_layer_' + str(i)
scope_i = expand_scope_by_name(scope, name)
layer = conv_op(layer,
nb_filter=n_filters[i],
filter_size=filter_sizes[i],
strides=strides[i],
regularizer=regularizer,
weight_decay=weight_decay,
name=name,
reuse=reuse,
scope=scope_i,
padding=padding,
weights_init=tf.constant_initializer(init_list[i][0]),
bias_init=tf.constant_initializer(init_list[i][1]))
if non_linearity is not None:
layer = non_linearity(layer)
if verbose:
print(layer)
print('output size:', np.prod(layer.get_shape().as_list()[1:]),
'\n')
if symmetry is not None:
layer = symmetry(layer, axis=1)
if verbose:
print(layer)
return layer
def decoder_with_fc_only(latent_signal,
init_list,
layer_sizes=[],
non_linearity=tf.nn.relu,
regularizer=None,
weight_decay=0.001,
reuse=False,
scope=None,
verbose=False):
'''A decoding network which maps points from the latent space back onto the data space.
'''
if verbose:
print('Building Decoder')
n_layers = len(layer_sizes)
if n_layers < 2:
raise ValueError(
'For an FC decoder with a single layer use simpler code.')
for i in range(0, n_layers - 1):
name = 'decoder_fc_' + str(i)
scope_i = expand_scope_by_name(scope, name)
if i == 0:
layer = latent_signal
layer = fully_connected(
layer,
layer_sizes[i],
activation='linear',
weights_init=tf.constant_initializer(init_list[i][0]),
bias_init=tf.constant_initializer(init_list[i][1]),
name=name,
regularizer=regularizer,
weight_decay=weight_decay,
reuse=reuse,
scope=scope_i)
if verbose:
print(name,
'FC params = ',
np.prod(layer.W.get_shape().as_list()) +
np.prod(layer.b.get_shape().as_list()),
end=' ')
if non_linearity is not None:
layer = non_linearity(layer)
if verbose:
print(layer)
print('output size:', np.prod(layer.get_shape().as_list()[1:]),
'\n')
# Last decoding layer never has a non-linearity.
name = 'decoder_fc_' + str(n_layers - 1)
scope_i = expand_scope_by_name(scope, name)
layer = fully_connected(layer,
layer_sizes[n_layers - 1],
activation='linear',
weights_init=tf.constant_initializer(init_list[-1][0]),
bias_init=tf.constant_initializer(init_list[-1][1]),
name=name,
regularizer=regularizer,
weight_decay=weight_decay,
reuse=reuse,
scope=scope_i)
if verbose:
print(name,
'FC params = ',
np.prod(layer.W.get_shape().as_list()) +
np.prod(layer.b.get_shape().as_list()),
end=' ')
if verbose:
print(layer)
print('output size:', np.prod(layer.get_shape().as_list()[1:]), '\n')
return layer
def mlp_discriminator(in_signal,
cov_init_list,
fc_init_list,
non_linearity=tf.nn.relu,
reuse=False,
scope=None):
''' used in nips submission.
'''
encoder_args = {
'n_filters': [64, 128, 256, 256, 512],
'filter_sizes': [1, 1, 1, 1, 1],
'strides': [1, 1, 1, 1, 1]
}
encoder_args['reuse'] = reuse
encoder_args['scope'] = scope
encoder_args['non_linearity'] = non_linearity
layer = encoder_with_convs_and_symmetry(in_signal, cov_init_list, weight_decay=0.0,
**encoder_args)
name = 'decoding_logits'
scope_e = expand_scope_by_name(scope, name)
d_logit = decoder_with_fc_only(layer,
fc_init_list,
layer_sizes=[128, 64, 1],
reuse=reuse,
scope=scope_e,
weight_decay=0.0)
d_prob = tf.nn.sigmoid(d_logit)
return d_prob, d_logit
def point_cloud_generator(z,
pc_dims,
init_list,
layer_sizes=[64, 128, 512, 1024],
non_linearity=tf.nn.relu):
''' used in nips submission.
'''
n_points, dummy = pc_dims
if (dummy != 3):
raise ValueError()
out_signal = decoder_with_fc_only(z,
init_list[:-1],
layer_sizes=layer_sizes,
non_linearity=non_linearity, weight_decay=0.0)
out_signal = non_linearity(out_signal)
out_signal = fully_connected(out_signal,
np.prod([n_points, 3]),
activation='linear',
weights_init=tf.constant_initializer(init_list[-1][0]),
bias_init=tf.constant_initializer(init_list[-1][1]),
weight_decay=0.0)
out_signal = tf.reshape(out_signal, [-1, n_points, 3])
return out_signal
from trainers.gan import GAN
from tflearn import is_training
class PGAN(GAN):
'''Gradient Penalty.
https://arxiv.org/abs/1704.00028
'''
def __init__(self, name, learning_rate, lam, n_output, noise_dim, discriminator, generator, beta=0.5, gen_kwargs={}, disc_kwargs={}, graph=None):
GAN.__init__(self, name, graph)
self.noise_dim = noise_dim
self.n_output = n_output
self.discriminator = discriminator
self.generator = generator
with tf.variable_scope(name):
self.noise = tf.placeholder(tf.float32, shape=[None, noise_dim]) # Noise vector.
self.real_pc = tf.placeholder(tf.float32, shape=[None] + self.n_output) # Ground-truth.
with tf.variable_scope('generator'):
self.generator_out = self.generator(self.noise, self.n_output, **gen_kwargs)
with tf.variable_scope('discriminator') as scope:
self.real_prob, self.real_logit = self.discriminator(self.real_pc, scope=scope, **disc_kwargs)
self.synthetic_prob, self.synthetic_logit = self.discriminator(self.generator_out, reuse=True, scope=scope, **disc_kwargs)
# Compute WGAN losses
self.loss_d_logit = tf.reduce_mean(self.synthetic_logit) - tf.reduce_mean(self.real_logit)
self.loss_g = -tf.reduce_mean(self.synthetic_logit)
# # Compute gradient penalty at interpolated points
# ndims = self.real_pc.get_shape().ndims
# batch_size = tf.shape(self.real_pc)[0]
# alpha = 0.5
# differences = self.generator_out - self.real_pc
# interpolates = self.real_pc + (alpha * differences)
# with tf.variable_scope('discriminator') as scope:
# gradients = tf.gradients(self.discriminator(interpolates, reuse=True, scope=scope, **disc_kwargs)[1], [interpolates])[0]
# # Reduce over all but the first dimension
# slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=list(range(1, ndims))))
# self.gradient_penalty = tf.reduce_mean((slopes - 1.) ** 2)
# self.loss_d = self.loss_d_logit + lam * self.gradient_penalty
self.loss_d = self.loss_d_logit
train_vars = tf.trainable_variables()
d_params = [v for v in train_vars if v.name.startswith(name + '/discriminator/')]
g_params = [v for v in train_vars if v.name.startswith(name + '/generator/')]
self.opt_d = self.optimizer(learning_rate, beta, self.loss_d, d_params)
self.opt_g = self.optimizer(learning_rate, beta, self.loss_g, g_params)
# self.optimizer_d = tf.train.AdamOptimizer(learning_rate, beta1=beta)
# self.opt_d = self.optimizer_d.minimize(self.loss_d, var_list=d_params)
# self.optimizer_g = tf.train.AdamOptimizer(learning_rate, beta1=beta)
# self.opt_g = self.optimizer_g.minimize(self.loss_g, var_list=g_params)
self.saver = tf.train.Saver(tf.global_variables(), max_to_keep=None)
self.init = tf.global_variables_initializer()
# Launch the session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
self.sess = tf.Session(config=config)
self.sess.run(self.init)
# model
discriminator = mlp_discriminator
generator = point_cloud_generator
np.random.seed(0)
g_fc_channel = [128, 64, 128, 512, 1024, 6144]
d_cov_channel = [3, 64, 128, 256, 256, 512]
d_fc_channel = [512, 128, 64, 1]
g_fc_weight = []
for i in range(len(g_fc_channel) - 1):
in_c = g_fc_channel[i]
out_c = g_fc_channel[i + 1]
g_fc_weight.append(
(np.random.rand(in_c, out_c).astype(np.float32) * 0.1 - 0.05,
np.random.rand(out_c).astype(np.float32) * 0.1 - 0.05))
d_cov_weight = []
for i in range(len(d_cov_channel) - 1):
in_c = d_cov_channel[i]
out_c = d_cov_channel[i + 1]
d_cov_weight.append((np.random.rand(in_c, out_c).astype(np.float32) * 0.1 - 0.05,
np.random.rand(out_c).astype(np.float32) * 0.1 - 0.05))
d_fc_weight = []
for i in range(len(d_fc_channel) - 1):
in_c = d_fc_channel[i]
out_c = d_fc_channel[i + 1]
d_fc_weight.append((np.random.rand(in_c, out_c).astype(np.float32) * 0.1 - 0.05,
np.random.rand(out_c).astype(np.float32) * 0.1 - 0.05))
input_noise = [np.random.rand(4, 128).astype(np.float32) * 0.1 - 0.05 for _ in range(10)]
target_points = [
np.random.rand(4, 2048, 3).astype(np.float32) * 0.1 - 0.05 for _ in range(10)
]
reset_tf_graph()
tf.random.set_random_seed(0)
model_opt = opt['model']
if model_opt['type'] == 'wgan':
lam = 10
disc_kwargs = {'cov_init_list': d_cov_weight, 'fc_init_list': d_fc_weight}
gen_kwargs = {'init_list': g_fc_weight}
gan = PGAN(model_opt['type'],
train_opt['learning_rate'],
lam, [model_opt['num_points'], 3],
model_opt['noise_dim'],
discriminator,
generator,
disc_kwargs=disc_kwargs,
gen_kwargs=gen_kwargs,
beta=train_opt['beta'])
for i in range(10):
feed_dict = {gan.real_pc: target_points[i], gan.noise: input_noise[i]}
_, loss_d = gan.sess.run([gan.opt_d, gan.loss_d], feed_dict=feed_dict)
feed_dict = {gan.noise: input_noise[i]}
_, loss_g = gan.sess.run([gan.opt_g, gan.loss_g], feed_dict=feed_dict)
print(loss_d, loss_g)
for i in range(10):
feed_dict = {gan.real_pc: target_points[i], gan.noise: input_noise[i]}
_, loss_d = gan.sess.run([gan.opt_d, gan.loss_d], feed_dict=feed_dict)
feed_dict = {gan.noise: input_noise[i]}
_, loss_g = gan.sess.run([gan.opt_g, gan.loss_g], feed_dict=feed_dict)
print(loss_d, loss_g)
i = 0
feed_dict = {gan.real_pc: target_points[i], gan.noise: input_noise[i]}
# The gradient-penalty term is commented out in the PGAN class above, so only
# the plain logit loss is available to fetch here.
_, loss_d, loss_d_logit = gan.sess.run(
[gan.opt_d, gan.loss_d, gan.loss_d_logit],
feed_dict=feed_dict)
float(loss_d), float(loss_d_logit)
gen_var = gan.sess.run(tf.trainable_variables('wgan/dis'))
for v in gen_var:
print(v.reshape(-1)[0])
# reset_tf_graph()
# np.random.seed(0)
# w = np.random.rand(3, 4).astype(np.float32)
# b = np.random.rand(4).astype(np.float32)
# in_f = np.random.rand(2, 3).astype(np.float32)
# in_feat = tf.placeholder(tf.float32, [None, 3])
# out = fully_connected(in_feat,
# 4,
# weights_init=tf.constant_initializer(w),
# bias_init=tf.constant_initializer(b))
# with tf.Session() as sess:
# sess.run(tf.global_variables_initializer())
# res = sess.run([out], feed_dict = {in_feat: in_f})
# print(res[0])
```
```
import numpy as np
import matplotlib.pyplot as plt
import os
import scipy.ndimage as ndi
import skimage.filters as fl
import warnings
from numpy import uint8, int64, float64, array, arange, zeros, zeros_like, ones, mean
from numpy.fft import fft, fft2, ifft, ifft2, fftshift
from math import log2
from scipy.ndimage import convolve, correlate, uniform_filter, gaussian_laplace, gaussian_filter, generic_filter, minimum_filter, maximum_filter, median_filter, rank_filter, \
binary_fill_holes, binary_dilation, binary_erosion, binary_opening, binary_closing
from scipy.signal import wiener
from skimage import io, data
from skimage.color import rgb2gray
from skimage.draw import polygon
from skimage.exposure import adjust_gamma, equalize_hist, rescale_intensity
from skimage.feature import canny
from skimage.filters import threshold_otsu, threshold_isodata, prewitt_h, prewitt_v, prewitt, roberts, sobel_h, sobel_v, sobel, laplace
from skimage.io import imshow
from skimage.measure import label
from skimage.morphology import dilation, erosion, opening, closing, square
from skimage.transform import rescale
from skimage.util import img_as_ubyte, img_as_float, img_as_bool, random_noise
from IPython.core.interactiveshell import InteractiveShell
warnings.filterwarnings('ignore')
InteractiveShell.ast_node_interactivity = "all"
```
## numpy
```
def add(image, c):
return uint8(np.clip(float64(image) + c, 0, 255))
```
## matplotlib
```
def matplot(img, title=None, cmap=None, figsize=None):
col = len(img)
if figsize is None:
plt.figure(figsize=(col * 4, col * 4))
else:
plt.figure(figsize=figsize)
for i, j in enumerate(img):
plt.subplot(1, col, i + 1)
plt.axis("off")
if title != None:
plt.title(title[i])
if cmap != None and cmap[i] != "":
plt.imshow(j, cmap=cmap[i])
else:
imshow(j)
```
## Chapter 2
```
def imread(fname):
return io.imread(os.path.join("/home/nbuser/library/", "Image", "read", fname))
def imsave(fname, image):
io.imsave(os.path.join("/home/nbuser/library/", "Image", "save", fname), image)
```
## Chapter 3
```
def spatial_resolution(image, scale):
return rescale(rescale(image, 1 / scale), scale, order=0)
def grayslice(image, n):
image = img_as_ubyte(image)
v = 256 // n
return image // v * v
```
## Chapter 4
```
def imhist(image, equal=False):
if equal:
image = img_as_ubyte(equalize_hist(image))
f = plt.figure()
f.show(plt.hist(image.flatten(), bins=256))
```
## Chapter 5
```
def unsharp(alpha=0.2):
A1 = array([[-1, 1, -1],
[1, 1, 1],
[-1, 1, -1]], dtype=float64)
A2 = array([[0, -1, 0],
[-1, 5, -1],
[0, -1, 0]], dtype=float64)
return (alpha * A1 + A2) / (alpha + 1)
```
## Chapter 6
```
ne = array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
bi = array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4
bc = array([[1, 4, 6, 4, 1],
[4, 16, 24, 16, 4],
[6, 24, 35, 24, 6],
[4, 16, 24, 16, 4],
[1, 4, 6, 4, 1]]) / 64
def zeroint(img):
r, c = img.shape
res = zeros((r*2, c*2))
res[::2, ::2] = img
return res
def spatial_filtering(img, p, filt):
for i in range(int(log2(p))):
img_zi = zeroint(img)
img_sf = correlate(img_zi, filt, mode="reflect")
return img_sf
```
## Chapter 7
```
def fftformat(F):
for f in F:
print("%8.4f %+.4fi" % (f.real, f.imag))
def fftshow(f, type="log"):
if type == "log":
return rescale_intensity(np.log(1 + abs(f)), out_range=(0, 1))
elif type == "abs":
return rescale_intensity(abs(f), out_range=(0, 1))
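# circle_mask builds a centred frequency-domain mask:
# 'type' selects 'ideal', 'butterworth' or 'gaussian'; 'lh' selects 'low' or 'high' pass;
# D is the cutoff radius, n the Butterworth order, sigma the Gaussian width.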
def circle_mask(img, type, lh, D=15, n=2, sigma=10):
r, c = img.shape
arr = arange(-r / 2, r / 2)
arc = arange(-c / 2, c / 2)
x, y = np.meshgrid(arr, arc)
if type == "ideal":
if lh == "low":
return x**2 + y**2 < D**2
elif lh == "high":
return x**2 + y**2 > D**2
elif type == "butterworth":
if lh == "low":
return 1 / (1 + (np.sqrt(2) - 1) * ((x**2 + y**2) / D**2)**n)
elif lh == "high":
return 1 / (1 + (D**2 / (x**2 + y**2))**n)
elif type == "gaussian":
g = np.exp(-(x**2 + y**2) / sigma**2)
if lh == "low":
return g / g.max()
elif lh == "high":
return 1 - g / g.max()
def fft_filter(img, type, lh, D=15, n=2, sigma=10):
f = fftshift(fft2(img))
c = circle_mask(img, type, lh, D, n, sigma)
fc = f * c
return fftshow(f), c, fftshow(fc), fftshow(ifft2(fc), "abs")
```
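A minimal usage sketch of the helpers above (assuming `skimage.data.camera()` as a test image, which is not used elsewhere in this notebook): apply an ideal low-pass filter and show the spectrum, the mask, the filtered spectrum and the result.
```
cam = img_as_float(data.camera())
f, c, fc, res = fft_filter(cam, "ideal", "low", D=30)
matplot([cam, f, c, fc, res],
        title=["original", "spectrum", "mask", "filtered spectrum", "result"],
        cmap=["gray"] * 5)
```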
## Chapter 8
```
def periodic_noise(img, s=None):
if "numpy" not in str(type(s)):
r, c = img.shape
x, y = np.mgrid[0:r, 0:c].astype(float64)
s = np.sin(x / 3 + y / 3) + 1
return (2 * img_as_float(img) + s / 2) / 3
def outlier_filter(img, D=0.5):
av = array([[1, 1, 1],
[1, 0, 1],
[1, 1, 1]]) / 8
img_av = convolve(img, av)
r = abs(img - img_av) > D
return r * img_av + (1 - r) * img
def image_average(img, n):
x, y = img.shape
t = zeros((x, y, n))
for i in range(n):
t[:, :, i] = random_noise(img, "gaussian")
return np.mean(t, 2)
def pseudo_median(x):
MAXMIN = 0
MINMAX = 255
for i in range(len(x) - 2):
MAXMIN = max(MAXMIN, min(x[i:i+3]))
MINMAX = min(MINMAX, max(x[i:i+3]))
return 0.5 * (MAXMIN + MINMAX)
def periodic_filter(img, type="band", k=1):
r, c = img.shape
x_mid, y_mid = r // 2, c // 2
f = fftshift(fft2(img))
f2 = img_as_ubyte(fftshow(f, "abs"))
f2[x_mid, y_mid] = 0
x, y = np.where(f2 == f2.max())
d = np.sqrt((x[0] - x_mid)**2 + (y[0] - y_mid)**2)
if type == "band":
x, y = np.meshgrid(arange(0, r), arange(0, c))
z = np.sqrt((x - x_mid)**2 + (y - y_mid)**2)
br = (z < np.floor(d - k)) | (z > np.ceil(d + k))
fc = f * br
elif type == "criss":
fc = np.copy(f)
fc[x, :] = 0
fc[:, y] = 0
fci = ifft2(fc)
return fftshow(f), fftshow(fc), fftshow(fci, "abs")
def fft_inverse(img, c, type="low", D2=15, n2=2, d=0.01):
f = fftshift(fft2(img_as_ubyte(img)))
if type == "low":
c2 = circle_mask(img, "butterworth", "low", D2, n2, 10)
fb = f / c * c2
elif type == "con":
c2 = np.copy(c)
c2[np.where(c2 < d)] = 1
fb = f / c2
return c2, fftshow(ifft2(fb), "abs")
def deblur(img, m, type="con",d=0.02):
m2 = zeros_like(img, dtype=float64)
r, c = m.shape
m2[0:r, 0:c] = m
mf = fft2(m2)
if type == "div":
bmi = ifft2(fft2(img) / mf)
bmu = fftshow(bmi, "abs")
elif type == "con":
mf[np.where(abs(mf) < d)] = 1
bmi = abs(ifft2(fft2(img) / mf))
bmu = img_as_ubyte(bmi / bmi.max())
bmu = rescale_intensity(bmu, in_range=(0, 128))
return bmu
```
## Chapter 9
```
def threshold_adaptive(img, cut):
r, c = img.shape
w = c // cut
starts = range(0, c - 1, w)
ends = range(w, c + 1, w)
z = zeros((r, c))
for i in range(cut):
tmp = img[:, starts[i]:ends[i]]
z[:, starts[i]:ends[i]] = tmp > threshold_otsu(tmp)
return z
def zerocross(img):
r, c = img.shape
z = np.zeros_like(img)
for i in range(1, r - 1):
for j in range(1, c - 1):
if (img[i][j] < 0 and (img[i - 1][j] > 0 or img[i + 1][j] > 0 or img[i][j - 1] > 0 or img[i][j + 1] > 0)) or \
(img[i][j] == 0 and (img[i - 1][j] * img[i + 1][j] < 0 or img[i][j - 1] * img[i][j + 1] < 0)):
z[i][j] = 1
return z
def laplace_zerocross(img):
return zerocross(ndi.laplace(float64(img), mode="constant"))
def marr_hildreth(img, sigma=0.5):
return zerocross(ndi.gaussian_laplace(float64(img), sigma=sigma))
```
## Chapter 10
```
sq = square(3)
cr = array([[0, 1, 0],
[1, 1, 1],
[0, 1, 0]])
sq
cr
def internal_boundary(a, b):
'''
A - (A erosion B)
'''
return a - binary_erosion(a, b)
def external_boundary(a, b):
'''
(A dilation B) - A
'''
return binary_dilation(a, b) - a
def morphological_gradient(a, b):
'''
(A dilation B) - (A erosion B)
'''
return binary_dilation(a, b) * 1 - binary_erosion(a, b)
def hit_or_miss(t, b1):
'''
(A erosion B1) and (not A erosion B2)
'''
r, c = b1.shape
b2 = ones((r + 2, c + 2))
b2[1:r+1, 1:c+1] = 1 - b1
t = img_as_bool(t)
tb1 = binary_erosion(t, b1)
tb2 = binary_erosion(1 - t, b2)
x, y = np.where((tb1 & tb2) == 1)
tb3 = np.zeros_like(tb1)
tb3[x, y] = 1
return x, y, tb1, tb2, tb3
def bwskel(img, kernel=sq):
skel = zeros_like(img, dtype=bool)
e = (np.copy(img) > 0) * 1
while e.max() > 0:
o = binary_opening(e, kernel) * 1
skel = skel | (e & (1 - o))
e = binary_erosion(e, kernel) * 1
return skel
```
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import tensorflow as tf
import os
from imageio import imwrite
from tqdm import tqdm
!ls imgs
# no need to resize yet
raw_img = tf.io.read_file('imgs/grundlsee_jesus.jpeg')
content_img = tf.image.decode_image(raw_img)[None, ...]
content_img = tf.cast(content_img, tf.float32) / 255.
gen_img = content_img[:]
content_img = tf.image.resize(content_img, size=tuple([v // 2 for v in content_img.shape[1:3]]))
content_resized_shape = content_img.shape[1:3]
print(content_resized_shape)
raw_img = tf.io.read_file('imgs/middleevil.jpg')
style_img = tf.image.decode_image(raw_img)[None, :, :, :]
style_img = tf.image.resize(style_img, size=content_img.shape[1:3])#[..., :-1]
style_img = tf.cast(style_img, tf.float32) / 255.
plt.figure(figsize=(17, 6))
plt.subplot(121)
plt.imshow(content_img[0])
plt.axis('off')
plt.subplot(122)
plt.imshow(style_img[0])
plt.axis('off')
plt.show()
def content_loss(g_acts, c_acts, weights):
return 0.5 * sum([w * tf.reduce_sum(tf.math.squared_difference(g_act, c_act))
for w, g_act, c_act in zip(weights, g_acts, c_acts)])
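# style_loss below compares Gram matrices of the activations:
# gram[k, l] sums channel k times channel l over the batch and spatial positions,
# so matching these second-order statistics (rather than the raw activations)
# transfers texture/style instead of content.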
def style_loss(g_acts, s_acts, weights):
loss = 0
for w, g_act, s_act in zip(weights, g_acts, s_acts):
g_gram = tf.einsum('bijk, bijl -> kl', g_act, g_act)
s_gram = tf.einsum('bijk, bijl -> kl', s_act, s_act)
NM = tf.cast(tf.reduce_prod(g_act.shape), tf.float32)
loss += w * 0.25 / NM**2 * tf.reduce_sum(tf.math.squared_difference(g_gram, s_gram))
return loss
def total_variation_loss(gen_img):
x_deltas = gen_img[:, 1:, :, :] - gen_img[:, :-1, :, :]
y_deltas = gen_img[:, :, 1:, :] - gen_img[:, :, :-1, :]
return tf.reduce_mean(x_deltas**2) + tf.reduce_mean(y_deltas**2)
# Names of layers and weight values for content and style losses
content_layers = ['block4_conv2']
content_weights = [1] * len(content_layers)
style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1', 'block4_conv1', 'block5_conv1']
style_weights = [1] * len(style_layers)
# Loading in VGG-19
vgg19 = tf.keras.applications.vgg19.VGG19(include_top=False,
weights='imagenet',
input_tensor=None,
input_shape=content_img.shape[1:],
pooling='avg',
classes=1000)
vgg19.trainable = False
# Define new model to get activations
outputs = [vgg19.get_layer(layer_name).output for layer_name in (content_layers + style_layers)]
vgg19_activations = tf.keras.Model([vgg19.input], outputs)
gen_image_name = "grundlsee_middleevil_jesus.png"
num_epochs = 11
log_freq = 10
learn_rate = 2e-2
content_weight = 1
style_weight = 1e3
reg_weight = 1e8
optimizer = tf.optimizers.Adam(learning_rate=learn_rate, beta_1=0.99, epsilon=1e-1)
# gen_img = tf.random.uniform(minval=0., maxval=1., shape=content_img.shape)
gen_img = tf.Variable(gen_img)
# Precompute content and style activations
c_input = tf.keras.applications.vgg19.preprocess_input(content_img * 255.)
s_input = tf.keras.applications.vgg19.preprocess_input(style_img * 255.)
c_acts = vgg19_activations(c_input)[:len(content_layers)]
s_acts = vgg19_activations(s_input)[len(content_layers):]
for epoch in tqdm(range(num_epochs)):
with tf.GradientTape() as tape:
g_input = tf.image.resize(gen_img, size=content_resized_shape)
g_input = tf.keras.applications.vgg19.preprocess_input(g_input * 255.)
g_acts = vgg19_activations(g_input)
c_loss = content_loss(g_acts[:len(content_layers)], c_acts, content_weights)
s_loss = style_loss(g_acts[len(content_layers):], s_acts, style_weights)
loss = content_weight * c_loss + style_weight * s_loss
loss = loss + reg_weight * total_variation_loss(gen_img)
grads = tape.gradient(loss, gen_img)
optimizer.apply_gradients([(grads, gen_img)])
# Bring image back to a valid range
gen_img.assign(tf.clip_by_value(gen_img, clip_value_min=0., clip_value_max=1.))
if epoch % log_freq == 0:
plt.figure(figsize=(17, 6))
plt.subplot(131)
plt.imshow(tf.squeeze(content_img).numpy())
plt.axis('off')
plt.subplot(132)
plt.imshow(tf.squeeze(style_img).numpy())
plt.axis('off')
plt.subplot(133)
plt.imshow(tf.squeeze(gen_img).numpy())
plt.axis('off')
plt.show()
result = tf.squeeze(gen_img).numpy()
# Save resulting image
# fig = plt.figure()
# ax = plt.Axes(fig, [0., 0., 1., 1.])
# ax.set_axis_off()
# fig.add_axes(ax)
# ax.imshow(result)
# plt.savefig('girl_over_soul.png', dpi=800)
# plt.close()
if not os.path.exists("imgs/" + gen_image_name):
result = (result * 255).astype(np.uint8)
imwrite("imgs/" + gen_image_name, result)
else:
print(gen_image_name + " already exists!")
```
# Task 4: Support Vector Machines
_All credit for the code examples of this notebook goes to the book "Hands-On Machine Learning with Scikit-Learn & TensorFlow" by A. Geron. Modifications were made and text was added by K. Zoch in preparation for the hands-on sessions._
# Setup
First, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Function to save a figure. This also decides that all output files
# should be stored in the subdirectory 'output/SVMs'.
PROJECT_ROOT_DIR = "."
EXERCISE = "SVMs"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "output", EXERCISE, fig_id + ".png")
os.makedirs(os.path.dirname(path), exist_ok=True)  # make sure the output directory exists
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# Large margin *vs* margin violations
This code example contains two linear support vector machine classifiers ([LinearSVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html)), which are initialised with different values of the hyperparameter C. The dataset used is the iris dataset also shown in the lecture (iris virginica vs. iris versicolor). Try a few different values for C and compare the results! What effect do different values of C have on: (1) the width of the street, (2) the number of margin violations, (3) the number of support vectors?
```
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
# Load the dataset and store the necessary features/labels in X/y.
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
# Initialise a scaler and the two SVC instances.
scaler = StandardScaler()
svm_clf1 = LinearSVC(C=1, loss="hinge", max_iter=10000, random_state=42)
svm_clf2 = LinearSVC(C=100, loss="hinge", max_iter=10000, random_state=42)
# Create pipelines to automatically scale the input.
scaled_svm_clf1 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf1),
])
scaled_svm_clf2 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf2),
])
# Perform the actual fit of the two models.
scaled_svm_clf1.fit(X, y)
scaled_svm_clf2.fit(X, y)
# Convert to unscaled parameters
b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_])
b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_])
w1 = svm_clf1.coef_[0] / scaler.scale_
w2 = svm_clf2.coef_[0] / scaler.scale_
svm_clf1.intercept_ = np.array([b1])
svm_clf2.intercept_ = np.array([b2])
svm_clf1.coef_ = np.array([w1])
svm_clf2.coef_ = np.array([w2])
# Find support vectors (LinearSVC does not do this automatically)
t = y * 2 - 1
support_vectors_idx1 = (t * (X.dot(w1) + b1) < 1).ravel()
support_vectors_idx2 = (t * (X.dot(w2) + b2) < 1).ravel()
svm_clf1.support_vectors_ = X[support_vectors_idx1]
svm_clf2.support_vectors_ = X[support_vectors_idx2]
# Now do the plotting.
def plot_svc_decision_boundary(svm_clf, xmin, xmax):
w = svm_clf.coef_[0]
b = svm_clf.intercept_[0]
# At the decision boundary, w0*x0 + w1*x1 + b = 0
# => x1 = -w0/w1 * x0 - b/w1
x0 = np.linspace(xmin, xmax, 200)
decision_boundary = -w[0]/w[1] * x0 - b/w[1]
margin = 1/w[1]
gutter_up = decision_boundary + margin
gutter_down = decision_boundary - margin
svs = svm_clf.support_vectors_
plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA')
plt.plot(x0, decision_boundary, "k-", linewidth=2)
plt.plot(x0, gutter_up, "k--", linewidth=2)
plt.plot(x0, gutter_down, "k--", linewidth=2)
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris-Versicolor")
plot_svc_decision_boundary(svm_clf1, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.title("$C = {}$".format(svm_clf1.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("$C = {}$".format(svm_clf2.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
save_fig("regularization_plot")
```
# Polynomial features vs. polynomial kernels
Let's create a non-linear dataset, for which we can compare two approaches: (1) adding polynomial features to the model, (2) using a polynomial kernel (see exercise sheet). First, create some random data.
```
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
```
Now let's first look at a linear SVM classifier that uses polynomial features. We will implement them through a pipeline including scaling of the inputs. What happens if you increase the degrees of polynomial features? Does the model get better? How is the computing time affected? Hint: you might have to increase the `max_iter` parameter for higher degrees.
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge", max_iter=1000, random_state=42))
])
polynomial_svm_clf.fit(X, y)
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
save_fig("moons_polynomial_svc_plot")
plt.show()
```
Now let's try the same without polynomial features, but a polynomial kernel instead. What is the fundamental difference between these two approaches? How do they scale in terms of computing time: (1) as a function of the number of features, (2) as a function of the number of instances?
1. Try out different degrees for the polynomial kernel. Do you expect any changes in the computing time? How does the model itself change in the plot?
2. Try different values for the `coef0` parameter. Can you guess what it controls? You should be able to see different behaviour for different degrees in the kernel.
3. Try different values for the hyperparameter C, which controls margin violations.
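As a reminder of what the kernel computes (standard definition, matching scikit-learn's parametrisation, where `degree` is $d$, `coef0` is $r$ and `gamma` is $\gamma$):
$$ K(\mathbf{a}, \mathbf{b}) = \left(\gamma \, \mathbf{a}^\top \mathbf{b} + r\right)^{d} $$
The kernel returns the same value as an explicit degree-$d$ polynomial feature map would, without ever materialising those features.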
```
from sklearn.svm import SVC
# Let's make one pipeline with polynomial kernel degree 3.
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
# And another pipeline with polynomial kernel degree 10.
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
# Now start the plotting.
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.subplot(122)
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
save_fig("moons_kernelized_polynomial_svc_plot")
plt.show()
```
# Gaussian kernels
Before trying the following piece of code which implements Gaussian RBF (Radial Basis Function) kernels, remember _similarity features_ that were discussed in the lecture:
1. What are similarity features? What is the idea of adding a "landmark"?
2. If similarity features help to increase the power of the model, why should we be careful to just add a similarity feature for _each_ instance of the dataset?
3. How does the kernel trick (once again) save the day in this case?
4. What does the `gamma` parameter control?
Below you find a code implementation which creates a set of four plots with different values for gamma and hyperparameter C. Try different values for both. Which direction _increases_ regularisation of the model? In which direction would you go to avoid underfitting? In which to avoid overfitting?
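For reference, the Gaussian RBF kernel used below is (in scikit-learn's parametrisation)
$$ K(\mathbf{a}, \mathbf{b}) = \exp\left(-\gamma \, \lVert \mathbf{a} - \mathbf{b} \rVert^{2}\right) $$
so every training instance effectively acts as a landmark, and increasing $\gamma$ narrows the bell-shaped similarity function around each landmark.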
```
from sklearn.svm import SVC
# Set up multiple values for gamma and hyperparameter C
# and create a list of value pairs.
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
# Store multiple SVM classifiers in a list with these sets of
# hyperparameters. For all of them, use a pipeline to allow
# scaling of the inputs.
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
# Now do the plotting.
plt.figure(figsize=(11, 7))
for i, svm_clf in enumerate(svm_clfs):
plt.subplot(221 + i)
plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
save_fig("moons_rbf_svc_plot")
plt.show()
```
# Regression
The following code implements the support vector regression class from Scikit-Learn ([SVR](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html)). Here are a couple of questions (some of which require changes to the code, others are just conceptual):
1. Quick recap: whereas the SVC class tries to make a classification decision, what is the job of this regression class? How is the output different?
2. Try different values for the hyperparameter C. What does it control?
3. What should the margin of a 'good' SVR model look like? Should it be broad or narrow? How does the parameter epsilon affect this? (See the note below.)
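For question 3 it may help to recall the $\epsilon$-insensitive loss that SVR minimises (standard definition, not specific to this notebook):
$$ L_{\epsilon}\left(y, \hat{y}\right) = \max\left(0, \lvert y - \hat{y} \rvert - \epsilon\right) $$
Errors smaller than $\epsilon$ are ignored, so $\epsilon$ directly controls the width of the street around the predicted function.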
```
# Generate some random data (degree = 2).
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
# Import the support vector regression class and create two
# instances with different hyperparameters.
from sklearn.svm import SVR
svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="auto")
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1, gamma="auto")
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
# Now do the plotting.
def plot_svm_regression(svm_reg, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
y_pred = svm_reg.predict(x1s)
plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
plt.plot(X, y, "bo")
plt.xlabel(r"$x_1$", fontsize=18)
plt.legend(loc="upper left", fontsize=18)
plt.axis(axes)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.subplot(122)
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
save_fig("svm_with_polynomial_kernel_plot")
plt.show()
```