path (string, 7 to 265 chars) | concatenated_notebook (string, 46 to 17M chars)
---|---
examples/example_regression.ipynb | ###Markdown
Example Regression In this notebook we show how to use lit-saint for a regression problem. We will use the "California housing" dataset, in which the objective is to predict the median house value for households within a block (measured in US dollars). Import libraries
###Code
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_california_housing
from sklearn.metrics import explained_variance_score, mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from lit_saint import Saint, SaintConfig, SaintDatamodule, SaintTrainer
from pytorch_lightning import Trainer, seed_everything
###Output
_____no_output_____
###Markdown
Download Data
###Code
df = fetch_california_housing(as_frame=True)
###Output
_____no_output_____
###Markdown
Configure lit-saint
###Code
# if you want to use the default values for the parameters
cfg = SaintConfig()
# otherwise you can use hydra to read a config file (uncomment the following part)
# from hydra import compose, initialize  # (hydra >= 1.1)
# from hydra.core.config_store import ConfigStore
# cs = ConfigStore.instance()
# cs.store(name="base_config", node=SaintConfig)
# with initialize(config_path="."):
# cfg = compose(config_name="config")
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
seed_everything(42, workers=True)
df = df.frame
df_train, df_test = train_test_split(df, test_size=0.10, random_state=42)
df_train, df_val = train_test_split(df_train, test_size=0.10, random_state=42)
df_train["split"] = "train"
df_val["split"] = "validation"
df = pd.concat([df_train, df_val])
# The target is in the column MedHouseVal; since it contains float values, the library will treat the problem as a regression
df.head()
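# Hedged aside (not in the original notebook): a quick dtype check shows what drives this
# behaviour, since a float target column is handled as a regression target here.
df["MedHouseVal"].dtype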
###Output
Global seed set to 42
###Markdown
Fit the model
###Code
data_module = SaintDatamodule(df=df, target="MedHouseVal", split_column="split")
model = Saint(categories=data_module.categorical_dims, continuous=data_module.numerical_columns,
config=cfg, dim_target=data_module.dim_target)
pretrainer = Trainer(max_epochs=cfg.pretrain.epochs)
trainer = Trainer(max_epochs=10)
saint_trainer = SaintTrainer(pretrainer=pretrainer, trainer=trainer)
saint_trainer.fit(model=model, datamodule=data_module, enable_pretraining=True)
###Output
_____no_output_____
###Markdown
Make predictions
###Code
prediction = saint_trainer.predict(model=model, datamodule=data_module, df=df_test)
df_test["prediction"] = prediction
expl_variance = explained_variance_score(df_test["MedHouseVal"], df_test["prediction"])
mae = mean_absolute_error(df_test["MedHouseVal"], df_test["prediction"])
mse = mean_squared_error(df_test["MedHouseVal"], df_test["prediction"])
print(f"Explained Variance: {expl_variance} MAE: {mae} MSE: {mse}")
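# Hedged addition (not in the original notebook): RMSE is often easier to read than MSE
# because it is on the same scale as the target.
print(f"RMSE: {np.sqrt(mse)}")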
###Output
Explained Variance: 0.6835245941402444 MAE: 0.4766954095214282 MSE: 0.42516899954601584
###Markdown
Uncertainty Estimation
###Code
mc_prediction = saint_trainer.predict(model=model, datamodule=data_module, df=df_test, mc_dropout_iterations=4)
mc_prediction
# Given the predictions, we can compute the variance across the MC dropout iterations (axis=2)
var_prediction = np.var(mc_prediction,axis=2)
# Then we focus on the variance of the first output column
pd.DataFrame(var_prediction[:,0], columns=["variance"]).hist()
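# Hedged sketch (not part of the original notebook): summarise the MC dropout runs per test row.
# It assumes mc_prediction is array-like with shape [n_samples, n_outputs, mc_dropout_iterations],
# which is what the axis=2 variance above implies.
mc_array = np.asarray(mc_prediction)
mc_mean = mc_array.mean(axis=2)  # point estimate per sample and output
mc_std = mc_array.std(axis=2)    # spread across dropout iterations as a rough uncertainty measure
pd.DataFrame({"prediction": mc_mean[:, 0], "uncertainty": mc_std[:, 0]}).head()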
###Output
_____no_output_____
###Markdown
XGBClassifier Test
###Code
import os.path
import sys
sys.path.append(os.path.abspath(os.path.join(os.path.abspath(''),os.path.pardir)))
from datk.model import ModelTrainer
m = ModelTrainer(cmd='fit',data_path="./housing.csv",yaml_path="./xgb_model_regression.yaml",results_path="./xgb_regression/model_results")
m._load_model()
m.model.feature_importances_
m.model.get_booster().feature_names
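# Hedged sketch (illustrative, not from the original notebook): pair the booster's feature names
# with the importances shown above so they can be read together; assumes m.model is a fitted
# xgboost sklearn estimator, as _load_model() above suggests.
pd.DataFrame({
    "feature": m.model.get_booster().feature_names,
    "importance": m.model.feature_importances_,
}).sort_values("importance", ascending=False).head()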
###Output
_____no_output_____ |
germany/gathering_data.ipynb | ###Markdown
1) Collecting Google Trends data
###Code
from pytrends.request import TrendReq
import pandas as pd
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.dates as mdates
import csv
# download the data from a Google Docs spreadsheet
data = pd.read_csv('https://docs.google.com/spreadsheets/d/10WgVF3KSGS969coU-YMMjY2b_NIUEAPz0jFU1OApwPs/export?format=csv&gid=0')
data=data.loc[data['trends-de'] == 1]
# take the values of the keyword queries
kw_list = data['Запрос-DE'].values.tolist()
#kw_list = ['потеря обоняния', 'потеря обоняния при ковид' ]
pytrends = TrendReq(hl='DE', tz=180)
# Google request parameters
geo = 'DE'
timeframe = '2020-01-27 2021-05-09'
step=1
# Function that fetches the trend data
def trends (kw_list):
pytrends.build_payload(kw_list, timeframe=timeframe, geo=geo, gprop='')
iot_df = pytrends.interest_over_time()
iot_df = iot_df.reset_index()
iot_df = iot_df.drop(['isPartial'], axis=1)
return iot_df
# Function that assembles the whole dataset (step is the maximum number of simultaneous queries allowed, here 5)
def step_data(step):
query_df = trends(kw_list[:1])[['date']]
p=step
for i in range(0, len(kw_list), p):
df = trends(kw_list[i:i+p])
query_df = pd.concat([query_df, df.drop(['date'],axis=1)], axis=1)
return query_df
df_api = step_data(step)
df_api.to_excel('raw_data//google_de.xlsx',index=False)
df_api
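# Hedged sketch (not in the original notebook): Google Trends rate-limits frequent requests, so a
# more defensive variant of step_data pauses between payloads and retries a failed chunk once.
# The name safe_step_data and the pause lengths are illustrative choices, not pytrends API.
import time

def safe_step_data(step, pause_sec=5):
    query_df = trends(kw_list[:1])[['date']]
    for i in range(0, len(kw_list), step):
        try:
            chunk = trends(kw_list[i:i + step])
        except Exception:
            time.sleep(60)  # back off once, then retry the same chunk
            chunk = trends(kw_list[i:i + step])
        query_df = pd.concat([query_df, chunk.drop(['date'], axis=1)], axis=1)
        time.sleep(pause_sec)
    return query_df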
###Output
_____no_output_____
###Markdown
2) Collecting Wikipedia page view data
###Code
!pip install git+https://github.com/Commonists/pageview-api.git
import pageviewapi
from attrdict import AttrDict
import pandas as pd
import datetime
def wikipageview_to_df(n_wikipedia, page_name_list, start_date, stop_date):
"""
Collects Wikipedia page view statistics into a DataFrame.
To use this function, install pageview-api from https://pypi.org/project/pageviewapi/ and attrdict
Arguments:
n_wikipedia: Wikipedia edition as a str ('ru.wikipedia', 'de.wikipedia')
page_name_list: list of page names
start_date: start date in 'yyyymmdd' format
stop_date: end date in 'yyyymmdd' format
Returns:
Pandas DataFrame
"""
import pageviewapi
from attrdict import AttrDict
import pandas as pd
import datetime
total_wiki_dict = dict()
for i in range(len(page_name_list)-1):
for page_name in page_name_list:
pageview_dict = pageviewapi.per_article(n_wikipedia, page_name, start_date, stop_date,
access='all-access', agent='all-agents', granularity='daily')
timestamp = list()
views = list()
article = set()
timeview_dict = dict()
article_dict = dict()
for j in range(len(pageview_dict['items'])):
timestamp.append(pageview_dict['items'][j]['timestamp'])
views.append(pageview_dict['items'][j]['views'])
article.add(pageview_dict['items'][j]['article'])
j+=1
for l,m in zip(timestamp, views):
timeview_dict[l]=m
for article in article:
article_dict[article] = timeview_dict
total_wiki_dict.update(article_dict)
i+=1
wiki_df = pd.DataFrame.from_dict(total_wiki_dict)
wiki_df['Date'] = wiki_df.index[:]
wiki_df['Date'] = wiki_df['Date'].map(lambda x: str(x)[:-2])
wiki_df['Date'] = pd.to_datetime(wiki_df['Date'], format='%Y-%m-%d')
wiki_df['WD'] = wiki_df['Date'].dt.weekday
wiki_df = wiki_df.groupby(pd.Grouper(key='Date', freq='W')).sum()
wiki_df = wiki_df.drop(['WD'], axis = 1)
return wiki_df
###Output
_____no_output_____
###Markdown
Build the list of queries and assemble a DataFrame
###Code
# Query for collecting statistics from the German Wikipedia
n_wikipedia = 'de.wikipedia'
page_name_list = list(['Anosmie', # Anosmia
'Lungenentzündung', # Pneumonia
'COVID-19',
'SARS-CoV-2',
'Husten', # Cough
'Fieber', # Fever
'Hydroxychloroquin', # Hydroxychloroquine
'Amoxicillin', # Amoxicillin
'Azithromycin', # Azithromycin
'Pulsoxymetrie', # Pulse oximetry / pulse oximeter
'Rhinitis', # Rhinitis
'Beatmungsgerät', # Ventilator
'CoronaVac',
'SARS-CoV-2-Impfstoff', # SARS-CoV-2 vaccine
'Sputnik_V'])
start_date = '20190901'
stop_date = '20210515'
df_de = wikipageview_to_df(n_wikipedia, page_name_list, start_date, stop_date)
df_de.to_excel('raw_data//wiki_de.xlsx',index=False)
###Output
_____no_output_____
###Markdown
3) Collecting Yandex search query data
###Code
import argparse
import datetime
import urllib
from pathlib import Path
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.common.exceptions import ElementClickInterceptedException, TimeoutException, NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
from transliterate import translit
from webdriver_manager.chrome import ChromeDriverManager
YANDEX_WORDSTAT_URL = 'https://wordstat.yandex.com'
YANDEX_WORDSTAT_HISTORY_URL = f'{YANDEX_WORDSTAT_URL}/#!/history?period=weekly&regions=225&words='
OUTPUT_DATA_FOLDER = 'downloaded_data'
SHORT_TIMEOUT = 5
LONG_TIMEOUT = 180
HEADER_COLUMN_NAMES = ['search_query', 'period_start', 'period_end', 'absolute_value']
DATE_FORMAT = "%Y-%m-%d_%H-%M-%S"
LOGIN_XPATH = "//td[@class='b-head-userinfo__entry']/a/span"
USERNAME_XPATH = "//div[@class='b-domik__username']/span/span/input"
PASSWORD_XPATH = "//div[@class='b-domik__password']/span/span/input"
SUBMIT_XPATH = "//div[@class='b-domik__button']/span"
SEARCH_RESULTS_TABLE_XPATH = "//div[@class='b-history__table-box']"
CAPTCHA_INPUT_XPATH = "//td[@class='b-page__captcha-input-td']/span/span/input"
SEARCH_INPUT_XPATH = "//td[@class='b-search__col b-search__input']/span"
request_words = ['потеря обоняния', 'симптомы короновируса', 'Анализы на короновирус', 'Признаки короновируса',
'КТ легких', 'Регламент лечения короновирусной инфекции', 'Схемы лечения препаратами короновируса',
'Показания для госпитализации с подозрением на коронавирусную болезнь',
'Купить противовирусные препараты (Фавипиравир, Ремдесивир)', 'Купить Гидроксихлорохин',
'Купить амоксициллин, азитромицин, левофлоксацин, моксифлоксацин', 'Купить антикоагулянты для лечения',
'Как защититься от короновируса', 'КОВИД-19', 'потеря вкуса', 'кашель', 'поражение сосудистой стенки',
'оксигенация', 'дыхательная функция', 'сатурация', 'пандемия', 'респираторная вирусная инфекция',
'SARS-Cov-2', 'пневмония', 'дыхательная недостаточность', 'насморк', 'мышечная слабость',
'высокая температура', 'озноб', 'рисунок матовое стекло', 'одышка', 'бессонница',
'положительный результат теста на ПЦР', 'наличие антител М', 'наличие антител G', 'Коронавир',
'Парацетомол', 'Дексаметазон', 'Уровень кислорода в крови', 'Кислород в крови', 'Вакцинация',
'Спутник V', 'иммунитет', 'АстраЗенека', 'Pfizer', 'побочные эффекты', 'осложенния',
'противопоказания', 'тромбоз', 'ЭпиВакКорона', 'КовиВак', 'инкубационный период', 'вирулентность',
'мутации', 'Британский штамм', 'маска', 'перчатки', 'срок жизни вируса на поверхности', 'профилактика',
'антитела g короновирус ковид -19', 'антитела к ковиду показатели', 'вакцина гам ковид вак',
'вакцина гам ковид вак и спутник м это одно и тоже или нет', 'вакцины от ковида',
'гам ковид вак или спутник м', 'гам-ковид-вак', 'гам-ковид-вак и спутник одно и тоже', 'ковид',
'ковид статистика россия', 'прививка от ковид -19 спутник инструкция противопоказания',
'прививки от ковида', 'регистр ковид']
def site_login(driver, login, password):
driver.implicitly_wait(1)
driver.get(YANDEX_WORDSTAT_URL)
login_element = driver.find_element(By.XPATH, LOGIN_XPATH)
login_element.click()
username_input = driver.find_element(By.XPATH, USERNAME_XPATH)
password_input = driver.find_element(By.XPATH, PASSWORD_XPATH)
submit_span = driver.find_element(By.XPATH, SUBMIT_XPATH)
username_input.send_keys(login)
password_input.send_keys(password)
submit_span.click()
def is_captcha_visible(driver):
try:
captcha_input = WebDriverWait(driver, 3).until(
EC.visibility_of_element_located((By.XPATH, CAPTCHA_INPUT_XPATH)))
return captcha_input.is_displayed()
except TimeoutException:
return False
def solve_captcha(driver):
driver.switch_to.window(driver.current_window_handle)
try:
WebDriverWait(driver, SHORT_TIMEOUT).until(EC.visibility_of_element_located((By.XPATH, CAPTCHA_INPUT_XPATH)))
WebDriverWait(driver, LONG_TIMEOUT).until_not(EC.visibility_of_element_located((By.XPATH, CAPTCHA_INPUT_XPATH)))
except TimeoutException:
pass
def parse_html(html):
soup = BeautifulSoup(html, 'html.parser')
trs = soup.find_all('tr', {"class": ["odd", "even"]})
rows = []
for tr in trs:
td_iterator = tr.findChildren("td", recursive=False)
f1, f2 = td_iterator[0].text.replace(u'\xa0', ' ').replace(' ', '').split('-')
f3 = td_iterator[2].text
rows.append((f1, f2, f3))
return rows
def check_for_captcha_and_solve_it(driver):
search_span = driver.find_element(By.XPATH, SEARCH_INPUT_XPATH)
try:
search_span.click()
except ElementClickInterceptedException:
while is_captcha_visible(driver):
solve_captcha(driver)
search_span_clickable = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, SEARCH_INPUT_XPATH)))
search_span_clickable.click()
def write_csv_file(search_query_words, data_rows, path, divisor='|'):
f_name = translit(search_query_words, 'ru', reversed=True).replace(' ', '_')
file_full_path = f"{path}/{f_name}.csv"
with open(file_full_path, 'w') as f:
f.write(f"{divisor.join(HEADER_COLUMN_NAMES)}\n")
for row in data_rows:
f.write(f"{divisor.join([search_query_words, row[0], row[1], row[2]])}\n")
def try_and_parse_data(driver):
try:
stats_table = driver.find_element(By.XPATH, SEARCH_RESULTS_TABLE_XPATH)
stats_table_visible = WebDriverWait(driver, 10).until(
EC.visibility_of(stats_table)
)
return stats_table_visible.get_attribute('innerHTML')
except NoSuchElementException:
# nothing found
return None
def parse_and_write_to_file(search_query_words, table_html, path):
data_rows = parse_html(table_html)
write_csv_file(search_query_words, data_rows, path)
def parse_content_by_url(driver, search_query_words, path):
encoded_words = urllib.parse.quote(search_query_words.encode('utf-8'))
url = YANDEX_WORDSTAT_HISTORY_URL + encoded_words
driver.get(url)
check_for_captcha_and_solve_it(driver)
data = try_and_parse_data(driver)
if data:
parse_and_write_to_file(search_query_words, data, path)
def create_download_folder_if_not_exists(path, create_subfolder):
Path(path).mkdir(parents=True, exist_ok=True)
if create_subfolder:
t = datetime.datetime.now()
subfolder_name = t.strftime(DATE_FORMAT)
final_path = f"{path}/{subfolder_name}"
Path(final_path).mkdir()
return final_path
def parse_arguments():
parser = argparse.ArgumentParser(description='Parse yandex wordstat for query results')
parser.add_argument('username', type=str, nargs=1,
help='username to authenticate with yandex')
parser.add_argument('password', type=str, nargs=1,
help='password to authenticate with yandex')
args = parser.parse_args()
return args.username[0], args.password[0]
def main():
yandex_login, yandex_password = parse_arguments()
output_data_path = create_download_folder_if_not_exists(OUTPUT_DATA_FOLDER, True)
with webdriver.Chrome(ChromeDriverManager().install()) as driver:
site_login(driver, yandex_login, yandex_password)
for word in request_words:
parse_content_by_url(driver, word, output_data_path)
if __name__ == '__main__':
main()
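# Hedged follow-up sketch (not part of the original script): once a run has written one CSV per
# query into downloaded_data/<timestamp>/, the per-query files can be combined into one frame.
# The column layout follows HEADER_COLUMN_NAMES and the '|' divisor used by write_csv_file above.
import glob
import pandas as pd
csv_files = glob.glob(f'{OUTPUT_DATA_FOLDER}/*/*.csv')
if csv_files:
    yandex_df = pd.concat([pd.read_csv(f, sep='|') for f in csv_files], ignore_index=True)
    print(yandex_df.head())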
###Output
_____no_output_____
###Markdown
The Yandex data is available, but updating it is still a work in progress. 4) Collecting official statistics on Covid-19 cases in Germany
###Code
import pandas as pd
data = pd.read_csv('https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv')
l = data.keys().tolist()
l.remove('new_cases')
l.remove('date')
l.remove('location')
data.drop(columns = l, inplace=True)
data_Russia = data[data['location'] == 'Germany']
data_Russia['w'] = pd.to_datetime(data_Russia['date'], format = '%Y-%m-%d').dt.strftime("%V")
data_Russia['y'] = pd.to_datetime(data_Russia['date'], format = '%Y-%m-%d').dt.strftime("%Y")
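# Hedged aside (not in the original notebook): despite its name, data_Russia holds the Germany
# rows selected above. The two assignments above also trigger pandas' SettingWithCopyWarning shown
# in the cell output; taking an explicit copy avoids it, e.g.
# data_Russia = data[data['location'] == 'Germany'].copy()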
df = data_Russia.groupby(['w','y'])['new_cases'].sum()
DF_mdate = data_Russia.groupby(['w','y'])['date'].max()
df = pd.DataFrame(df)
DF_mdate = pd.DataFrame(DF_mdate)
D_col = DF_mdate['date']
df.insert(0, 'date', D_col)
df.sort_values(by = ['y', 'w'], inplace=True)
target = df.reset_index()
target.drop(['w','y'], axis = 1, inplace=True)
New_cases_Covid_Russia = target.T
target_c= New_cases_Covid_Russia#[range(12,64)]
target_t = target_c.T
target_t = target_t.reset_index(drop = True)
target_c = target_t.T
target_f = target_c.T
target_f.to_excel('raw_data//target_de.xlsx',index=False)
# final target
target_f
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
|
dev_nbs/course/lesson6-rossmann.ipynb | ###Markdown
Rossmann Data preparation To create the feature-engineered train_clean and test_clean from the Kaggle competition data, run `rossman_data_clean.ipynb`. One important step that deals with time series is this: `add_datepart(train, "Date", drop=False)` and `add_datepart(test, "Date", drop=False)`.
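As a hedged illustration (a sketch of the idea only, not fastai's actual `add_datepart`, which also emits columns such as `Elapsed` and month/quarter boundary flags), expanding a date column looks roughly like this:
```python
import pandas as pd

def add_date_features(df, col):
    # illustrative helper, not the fastai implementation
    d = pd.to_datetime(df[col])
    df[f'{col}_Year'] = d.dt.year
    df[f'{col}_Month'] = d.dt.month
    df[f'{col}_Week'] = d.dt.isocalendar().week
    df[f'{col}_Day'] = d.dt.day
    df[f'{col}_Dayofweek'] = d.dt.dayofweek
    return df
```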
###Code
from fastai.tabular.all import *  # assumed setup: the course notebook relies on fastai's tabular API for Config, TabularPandas, tabular_learner, np and pd
path = Config().data/'rossmann'
train_df = pd.read_pickle(path/'train_clean')
train_df.head().T
n = len(train_df); n
###Output
_____no_output_____
###Markdown
Experimenting with a sample
###Code
idx = np.random.permutation(range(n))[:2000]
idx.sort()
small_df = train_df.iloc[idx]
small_cont_vars = ['CompetitionDistance', 'Mean_Humidity']
small_cat_vars = ['Store', 'DayOfWeek', 'PromoInterval']
small_df = small_df[small_cat_vars + small_cont_vars + ['Sales']].reset_index(drop=True)
small_df.head()
small_df.iloc[1000:].head()
splits = [list(range(1000)),list(range(1000,2000))]
to = TabularPandas(small_df.copy(), Categorify, cat_names=small_cat_vars, cont_names=small_cont_vars, splits=splits)
to.train.items.head()
to.valid.items.head()
to.classes['DayOfWeek']
splits = [list(range(1000)),list(range(1000,2000))]
to = TabularPandas(small_df.copy(), FillMissing, cat_names=small_cat_vars, cont_names=small_cont_vars, splits=splits)
to.train.items[to.train.items['CompetitionDistance_na'] == True]
###Output
_____no_output_____
###Markdown
Preparing full data set
###Code
train_df = pd.read_pickle(path/'train_clean')
test_df = pd.read_pickle(path/'test_clean')
len(train_df),len(test_df)
procs=[FillMissing, Categorify, Normalize]
dep_var = 'Sales'
cat_names = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'StoreType', 'Assortment',
'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear', 'State', 'Week', 'Events', 'Promo_fw',
'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw', 'SchoolHoliday_fw', 'SchoolHoliday_bw']
cont_names = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h', 'Mean_Wind_SpeedKm_h',
'CloudCover', 'trend', 'trend_DE', 'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
dep_var = 'Sales'
df = train_df[cat_names + cont_names + [dep_var,'Date']].copy()
test_df['Date'].min(), test_df['Date'].max()
cut = train_df['Date'][(train_df['Date'] == train_df['Date'][len(test_df)])].index.max()
cut
splits = (list(range(cut, len(train_df))),list(range(cut)))
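# Hedged reading of the cut/splits logic above (assuming train_clean is ordered with the most
# recent dates first, as in the original course data): cut is the last row whose Date equals the
# date found len(test_df) rows in, so the validation split covers the most recent period with
# roughly the test set's length while the older rows form the training split.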
train_df[dep_var].head()
train_df[dep_var] = np.log(train_df[dep_var])
#train_df = train_df.iloc[:100000]
#cut = 20000
splits = (list(range(cut, len(train_df))),list(range(cut)))
%time to = TabularPandas(train_df, procs, cat_names, cont_names, dep_var, y_block=TransformBlock(), splits=splits)
dls = to.dataloaders(bs=512, path=path)
dls.show_batch()
###Output
_____no_output_____
###Markdown
Model
###Code
max_log_y = np.log(1.2) + np.max(train_df['Sales'])
y_range = (0, max_log_y)
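# Hedged note: with y_range set, fastai's tabular head squashes its output into (0, max_log_y)
# via a scaled sigmoid, which is why the upper bound is padded by log(1.2) above the maximum of
# the (already log-transformed) Sales column.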
dls.c = 1
learn = tabular_learner(dls, layers=[1000,500], loss_func=MSELossFlat(),
config=tabular_config(ps=[0.001,0.01], embed_p=0.04, y_range=y_range),
metrics=exp_rmspe)
learn.model
len(dls.train_ds.cont_names)
learn.lr_find()
learn.fit_one_cycle(5, 3e-3, wd=0.2)
###Output
_____no_output_____
###Markdown
(10th place in the competition was 0.108)
###Code
learn.recorder.plot_loss(skip_start=1000)
###Output
_____no_output_____
###Markdown
(10th place in the competition was 0.108) Inference on the test set
###Code
test_to = to.new(test_df)
test_to.process()
test_dls = test_to.dataloaders(bs=512, path=path, shuffle_train=False)
learn.metrics=[]
tst_preds,_ = learn.get_preds(dl=test_dls.train)
np.exp(tst_preds.numpy()).T.shape
test_df["Sales"]=np.exp(tst_preds.numpy()).T[0]
test_df[["Id","Sales"]] = test_df[["Id","Sales"]].astype("int")
test_df[["Id","Sales"]].to_csv("rossmann_submission.csv",index=False)
###Output
_____no_output_____ |
final-project-eda.ipynb | ###Markdown
Set Up
###Code
# Read in data
import ggplot
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
import pandas as pd
import seaborn as sns # for visualiation
from scipy.stats import ttest_ind # t-tests
import statsmodels.formula.api as smf # linear modeling
import statsmodels.api as sm
import matplotlib
from sklearn import metrics
matplotlib.style.use('ggplot')
%matplotlib inline
data = pd.read_csv('~/top5europe.csv')
df = data
#import the module so that we can display tables when printing dataframes
from IPython.display import display, HTML
pd.options.mode.chained_assignment = None
###Output
_____no_output_____
###Markdown
Data Preparation Mapped the countries in Europe with the five highest life expectancies (Switzerland, Spain, Italy, Iceland, France) to a corresponding number (1, 2, 3, 4, 5 respectively) based on their rank, and removed the one outlier with a ridiculously high rate.
###Code
df1 = df
df1['location_name'] = df1['location_name'].map({'Switzerland': 1, 'Spain': 2, 'Italy': 3, 'Iceland': 4, 'France': 5})
df1 = df1[df1.val < 50]
df1.head()
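# Hedged aside (not in the original notebook): confirm the country-to-rank mapping took effect
# by counting rows per rank (1=Switzerland, 2=Spain, 3=Italy, 4=Iceland, 5=France).
df1['location_name'].value_counts().sort_index()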
###Output
_____no_output_____
###Markdown
Describing Data Structure
###Code
shape = df1.shape
print "Size: %s" % (shape,)
print "Variables: Location (str), Sex (str), Age (str), Cause of Death (str), Risk Factors (str), Average Rate (int)"
###Output
Size: (731, 18)
Variables: Location (str), Sex (str), Age (str), Cause of Death (str), Risk Factors (str), Average Rate (int)
###Markdown
Univariate Analysis
###Code
df1.describe()
###Output
_____no_output_____
###Markdown
Univariate Analysis by Category
###Code
#ax = df1['val'].plot(kind='bar', title ="V comp", figsize=(15, 10), legend=True, fontsize=12)
#ax.set_xlabel("Hour", fontsize=12)
#ax.set_ylabel("V", fontsize=12)
#plt.show()
df1['val'].hist(by=df1['location_name'], sharex=True, figsize=(20,10))
###Output
_____no_output_____
###Markdown
Bivariate analysis
###Code
lm = smf.glm(formula = 'location_name ~ val', data=df1, family=sm.families.Poisson()).fit()
df1['lm'] = lm.predict()
lm.summary()
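# Hedged note (illustrative addition): since this is a Poisson GLM, the coefficient on val acts
# multiplicatively; exponentiating gives the estimated rate ratio per unit increase in the rate.
np.exp(lm.params['val'])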
fig, ax = plt.subplots(figsize=(10, 5))
ax.scatter(df1.location_name, df1.val, c='red', label="Rate of causes of death")
ax.plot(df1.location_name, df1.lm, c='black', label="Poisson GLM Fit")
ax.legend(numpoints=1, loc='upper left')
ax.set_xlabel('Country')
ax.set_ylabel('Rate of a person dying from a dietary reason')
plt.show()
###Output
_____no_output_____ |
russian_language.ipynb | ###Markdown
###Code
!pip install spacy-udpipe
!pip install pymorphy2
!pip install nltk
!pip install -U pymorphy2-dicts-ru
import pymorphy2
import spacy_udpipe
from spacy.symbols import *
from nltk import Tree
def tok_format(t):
return f'{t.orth_}-{t.dep_}'
def to_nltk_tree(node):
if node.n_lefts + node.n_rights > 0:
return Tree(tok_format(node), [to_nltk_tree(child) for child in node.children])
else:
return tok_format(node)
def print_tree(node):
to_nltk_tree(node).pretty_print()
spacy_udpipe.download("ru") # download Russian model
nlp = spacy_udpipe.load("ru")
# nlp.add_pipe(nlp.create_pipe('sentencizer'))
morph = pymorphy2.MorphAnalyzer()
all_t = morph.parse('условия')
print(all_t)
p = all_t[0]
print(p.tag)
v = {p.tag.case}
print(v)
a = morph.parse('городской')[0]
a = a.inflect(v)
print(a)
%%time
text = '''
избрание и досрочное прекращение полномочий Единоличного исполнительного органа, Коллегиального исполнительного органа (Правления), утверждение условий договора с Единоличным исполнительным органом, членами Правления, принятие решения (в том числе об определении и порядке выплаты) о вознаграждении, поощрении, премировании, компенсациях и иных выплатах, в том числе стимулирующего и компенсационного характера Единоличному исполнительному органу и членам Правления (за исключением ключевых показателей эффективности (KPI), утверждение условий расторжения договора с Единоличным исполнительным органом, членами Правления, а также принятие решения о передаче полномочий Единоличного исполнительного органа коммерческой организации или индивидуальному предпринимателю (управляющему), утверждения такого управляющего и условий договора с ним;
'''
text = '''
Сухопутные войска Великобритании в декабре 2019 года испытали одну из модификаций танка Challenger 2 (Streetfighter II) центре боевой подготовки в городских условиях Коупхилл-Даун на военном полигоне в Солсбери (Англия), сообщает Jane's Defence Weekly. Соответствующий ролик выложен на YouTube.
Британский журнал отмечает, что городской танк Streetfighter II получил распределенную по его корпусу систему инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems, позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста.
'''
#TODO
s1 = 'одобрение сделок с недвижимостью'
s2 = 'утверждение сделок с имуществом'
s3 = 'одобрение сделок за исключением сделок с недвижимостью'
doc = nlp(text, disable=['ner'])
print('Sentences:'+'='*10)
for sent in doc.sents:
s = str(sent).strip()
print(s)
print('Tokens:'+'='*10)
for token in doc:
# print(f'{token.text}\t{token.lemma_}\t{token.pos_}\t{token.dep_}:{token.dep}')
s = ''
cc = 0
for t in token.subtree:
s += str(t)+' '
cc +=1
# if cc>1:
print(f'[{token.dep_}-{token.dep}-{token.pos_}--{token.text}]\t{s}')
if token.n_lefts + token.n_rights > 0:
print_tree(token)
np_labels_full = {nsubj, nsubjpass, dobj, iobj, pobj, csubj, csubjpass, attr,
obj, nmod} # obl, nmod, Probably others too
def iter_nps(doc):
for word in doc:
if word.dep in np_labels_full:
yield word
exclude_labels = {nmod, acl}
def iter_nps_str(doc):
s = ''
for np in iter_nps(doc):
excluded = set()
for child in np.children:
if child is not np and child.dep in exclude_labels:
excluded.add(child)
for t in child.subtree:
excluded.add(t)
elif child.dep_ == 'case': # exclude case markers (prepositions), as the original comment intended
excluded.add(child)
for t in np.subtree:
# if t.head.dep == nmod
if t not in excluded:
s += str(t)+' '
yield s.strip()
s = ''
print('='*20)
for np in iter_nps_str(doc):
print(np)
###Output
Sentences:==========
Сухопутные войска Великобритании в декабре 2019 года испытали одну из модификаций танка Challenger 2 (Streetfighter II) центре боевой подготовки в городских условиях Коупхилл-Даун на военном полигоне в Солсбери (Англия), сообщает Jane's Defence Weekly.
Соответствующий ролик выложен на YouTube.
Британский журнал отмечает, что городской танк Streetfighter II получил распределенную по его корпусу систему инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems, позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста.
Tokens:==========
[amod-402-ADJ--Сухопутные] Сухопутные
[nsubj-429-NOUN--войска] Сухопутные войска Великобритании
войска-nsubj
_____________|______________
Сухопутные-amod Великобритании-
nmod
[nmod-426-PROPN--Великобритании] Великобритании
[case-8110129090154140942-ADP--в] в
[obl-435-NOUN--декабре] в декабре 2019 года
декабре-obl
_________|___________
| года-nmod
| |
в-case 2019-nummod
[nummod-12837356684637874264-NUM--2019] 2019
[nmod-426-NOUN--года] 2019 года
года-nmod
|
2019-nummod
[ROOT-8206900633647566924-VERB--испытали] Сухопутные войска Великобритании в декабре 2019 года испытали одну из модификаций танка Challenger 2 ( Streetfighter II ) центре боевой подготовки в городских условиях Коупхилл - Даун на военном полигоне в Солсбери ( Англия ) , сообщает Jane's Defence Weekly .
испытали-ROOT
___________________________________________________________________________________________________________________________________________________________________|________________________________________________________________________________________________________________________________________________
| | | центре-obl |
| | | _______________________________________________________________|__________________________________________ |
| | | одну-nummod:gov | условиях-nmod |
| | | | | __________________________________________|__________________________ |
| | | модификаций-nmod | | | | полигоне-nmod |
| | | ____________________________|_____________ | | | | __________________________|_____________ |
| | декабре-obl | танка-nmod | | | Коупхилл-appos | | Солсбери-nmod сообщает-paratax
| | | | | | | | | | | | is
| | _________|___________ | ______________________________|____________ | | | | | | _____________|______________ ___________|______________
| войска-nsubj | года-nmod | | | Streetfighter- подготовки-nmod | | Даун-appos | | | Англия-parataxis | Jane's-nsubj
| | | | | | | parataxis | | | | | | | | | |
| _____________|______________ | | | | | ____________|___________ | | | | | | | ______________|____________ | ______________|____________
.-punct Сухопутные-amod Великобритании- в-case 2019-nummod из-case Challenger-flat: 2-nummod (-punct II-nummod )-punct боевой-amod в-case городских-amod --punct на-case военном-amod в-case (-punct )-punct ,-punct Defence-flat: Weekly-flat:
nmod foreign foreign foreign
[nummod:gov-7321983856208901595-NUM--одну] одну из модификаций танка Challenger 2 ( Streetfighter II )
одну-nummod:gov
|
модификаций-nmod
____________________________|_____________
| танка-nmod
| ______________________________|____________
| | | Streetfighter-
| | | parataxis
| | | ____________|___________
из-case Challenger-flat: 2-nummod (-punct II-nummod )-punct
foreign
[case-8110129090154140942-ADP--из] из
[nmod-426-NOUN--модификаций] из модификаций танка Challenger 2 ( Streetfighter II )
модификаций-nmod
____________________________|_____________
| танка-nmod
| ______________________________|____________
| | | Streetfighter-
| | | parataxis
| | | ____________|___________
из-case Challenger-flat: 2-nummod (-punct II-nummod )-punct
foreign
[nmod-426-NOUN--танка] танка Challenger 2 ( Streetfighter II )
танка-nmod
______________________|____________
| | Streetfighter-
| | parataxis
| | ____________|___________
Challenger-flat: 2-nummod (-punct II-nummod )-punct
foreign
[flat:foreign-5926320208798651204-PROPN--Challenger] Challenger
[nummod-12837356684637874264-NUM--2] 2
[punct-445-PUNCT--(] (
[parataxis-436-PROPN--Streetfighter] ( Streetfighter II )
Streetfighter-
parataxis
__________|___________
(-punct II-nummod )-punct
[nummod-12837356684637874264-NUM--II] II
[punct-445-PUNCT--)] )
[obl-435-NOUN--центре] одну из модификаций танка Challenger 2 ( Streetfighter II ) центре боевой подготовки в городских условиях Коупхилл - Даун на военном полигоне в Солсбери ( Англия )
центре-obl
_______________________________________________________________|__________________________________________
одну-nummod:gov | условиях-nmod
| | __________________________________________|__________________________
модификаций-nmod | | | | полигоне-nmod
____________________________|_____________ | | | | __________________________|_____________
| танка-nmod | | | Коупхилл-appos | | Солсбери-nmod
| ______________________________|____________ | | | | | | _____________|______________
| | | Streetfighter- подготовки-nmod | | Даун-appos | | | Англия-parataxis
| | | parataxis | | | | | | | |
| | | ____________|___________ | | | | | | | ______________|____________
из-case Challenger-flat: 2-nummod (-punct II-nummod )-punct боевой-amod в-case городских-amod --punct на-case военном-amod в-case (-punct )-punct
foreign
[amod-402-ADJ--боевой] боевой
[nmod-426-NOUN--подготовки] боевой подготовки
подготовки-nmod
|
боевой-amod
[case-8110129090154140942-ADP--в] в
[amod-402-ADJ--городских] городских
[nmod-426-NOUN--условиях] в городских условиях Коупхилл - Даун на военном полигоне в Солсбери ( Англия )
условиях-nmod
________________________________________|__________________________
| | | полигоне-nmod
| | | __________________________|_____________
| | Коупхилл-appos | | Солсбери-nmod
| | | | | _____________|______________
| | Даун-appos | | | Англия-parataxis
| | | | | | ______________|____________
в-case городских-amod --punct на-case военном-amod в-case (-punct )-punct
[appos-403-PROPN--Коупхилл] Коупхилл - Даун
Коупхилл-appos
|
Даун-appos
|
--punct
[punct-445-PUNCT---] -
[appos-403-PROPN--Даун] - Даун
Даун-appos
|
--punct
[case-8110129090154140942-ADP--на] на
[amod-402-ADJ--военном] военном
[nmod-426-NOUN--полигоне] на военном полигоне в Солсбери ( Англия )
полигоне-nmod
_______________________|_____________
| | Солсбери-nmod
| | _____________|______________
| | | Англия-parataxis
| | | ______________|____________
на-case военном-amod в-case (-punct )-punct
[case-8110129090154140942-ADP--в] в
[nmod-426-NOUN--Солсбери] в Солсбери ( Англия )
Солсбери-nmod
__________|______________
| Англия-parataxis
| ______________|____________
в-case (-punct )-punct
[punct-445-PUNCT--(] (
[parataxis-436-PROPN--Англия] ( Англия )
Англия-parataxis
___________|____________
(-punct )-punct
[punct-445-PUNCT--)] )
[punct-445-PUNCT--,] ,
[parataxis-436-VERB--сообщает] , сообщает Jane's Defence Weekly
сообщает-paratax
is
___________|______________
| Jane's-nsubj
| ______________|____________
,-punct Defence-flat: Weekly-flat:
foreign foreign
[nsubj-429-PROPN--Jane's] Jane's Defence Weekly
Jane's-nsubj
____________|____________
Defence-flat: Weekly-flat:
foreign foreign
[flat:foreign-5926320208798651204-PROPN--Defence] Defence
[flat:foreign-5926320208798651204-PROPN--Weekly] Weekly
[punct-445-PUNCT--.] .
[amod-402-ADJ--Соответствующий] Соответствующий
[nsubj:pass-7833439085008721140-NOUN--ролик] Соответствующий ролик
ролик-nsubj:pass
|
Соответствующий-
amod
[ROOT-8206900633647566924-VERB--выложен] Соответствующий ролик выложен на YouTube .
выложен-ROOT
___________|______________
| ролик-nsubj:pass YouTube-obl
| | |
.-punct Соответствующий- на-case
amod
[case-8110129090154140942-ADP--на] на
[obl-435-NOUN--YouTube] на YouTube
YouTube-obl
|
на-case
[punct-445-PUNCT--.] .
[amod-402-ADJ--Британский] Британский
[nsubj-429-NOUN--журнал] Британский журнал
журнал-nsubj
|
Британский-amod
[ROOT-8206900633647566924-VERB--отмечает] Британский журнал отмечает , что городской танк Streetfighter II получил распределенную по его корпусу систему инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста .
отмечает-ROOT
___________|_______________________________
| | получил-ccomp
| | ___________________|______________________________________________________________
| | | | | систему-obj
| | | | | _____________|______________________________
| | | | | | датчиков-nmod
| | | | | | ________________|_________________________________
| | | | | | | | компании-nmod
| | | | | | | | ________________|______________________________________
| | | | | | | | | | | позволяющую-acl
| | | | | | | | | | | ___________|_______________________________________
| | | | | распределенную- | инфракрасных- | | | | отображать-xcomp
| | | | | acl | amod | | | | |
| | | | | | | | | | | | _______________________________________|_______________________
| журнал-nsubj | | танк-nsubj корпусу-obl | электрооптически | | | | обстановку-obj машины-obl дисплей-obl
| | | | | | | х-conj | | | | | | |
| | | | ______________|____________ ___________|_____________ | | | | | | | __________|_____________ __________|______________
.-punct Британский-amod ,-punct что-mark городской-amod Streetfighter- II-nummod по-case его-det IronVision-flat: и-cc израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
flat:foreign foreign gn foreign
[punct-445-PUNCT--,] ,
[mark-423-SCONJ--что] что
[amod-402-ADJ--городской] городской
[nsubj-429-NOUN--танк] городской танк Streetfighter II
танк-nsubj
______________|____________
городской-amod Streetfighter- II-nummod
flat:foreign
[flat:foreign-5926320208798651204-PROPN--Streetfighter] Streetfighter
[nummod-12837356684637874264-NUM--II] II
[ccomp-408-VERB--получил] , что городской танк Streetfighter II получил распределенную по его корпусу систему инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
получил-ccomp
___________________|______________________________________________________________
| | | систему-obj
| | | _____________|______________________________
| | | | датчиков-nmod
| | | | ________________|_________________________________
| | | | | | компании-nmod
| | | | | | ________________|______________________________________
| | | | | | | | | позволяющую-acl
| | | | | | | | | ___________|_______________________________________
| | | распределенную- | инфракрасных- | | | | отображать-xcomp
| | | acl | amod | | | | |
| | | | | | | | | | _______________________________________|_______________________
| | танк-nsubj корпусу-obl | электрооптически | | | | обстановку-obj машины-obl дисплей-obl
| | | | | х-conj | | | | | | |
| | ______________|____________ ___________|_____________ | | | | | | | __________|_____________ __________|______________
,-punct что-mark городской-amod Streetfighter- II-nummod по-case его-det IronVision-flat: и-cc израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
flat:foreign foreign gn foreign
[acl-451-VERB--распределенную] распределенную по его корпусу
распределенную-
acl
|
корпусу-obl
___________|___________
по-case его-det
[case-8110129090154140942-ADP--по] по
[det-415-DET--его] его
[obl-435-NOUN--корпусу] по его корпусу
корпусу-obl
_________|_________
по-case его-det
[obj-434-NOUN--систему] распределенную по его корпусу систему инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
систему-obj
_____________|______________________________
| датчиков-nmod
| ________________|_________________________________
| | | компании-nmod
| | | ________________|______________________________________
| | | | | | позволяющую-acl
| | | | | | ___________|_______________________________________
распределенную- | инфракрасных- | | | | отображать-xcomp
acl | amod | | | | |
| | | | | | | _______________________________________|_______________________
корпусу-obl | электрооптически | | | | обстановку-obj машины-obl дисплей-obl
| | х-conj | | | | | | |
___________|_____________ | | | | | | | __________|_____________ __________|______________
по-case его-det IronVision-flat: и-cc израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
foreign gn foreign
[amod-402-ADJ--инфракрасных] инфракрасных и электрооптических
инфракрасных-
amod
|
электрооптически
х-conj
|
и-cc
[cc-407-CCONJ--и] и
[conj-410-ADJ--электрооптических] и электрооптических
электрооптически
х-conj
|
и-cc
[nmod-426-NOUN--датчиков] инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
датчиков-nmod
________________|_________________________________
| | компании-nmod
| | ________________|______________________________________
| | | | | позволяющую-acl
| | | | | ___________|_______________________________________
| инфракрасных- | | | | отображать-xcomp
| amod | | | | |
| | | | | | _______________________________________|_______________________
| электрооптически | | | | обстановку-obj машины-obl дисплей-obl
| х-conj | | | | | | |
| | | | | | | __________|_____________ __________|______________
IronVision-flat: и-cc израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
foreign gn foreign
[flat:foreign-5926320208798651204-PROPN--IronVision] IronVision
[amod-402-ADJ--израильской] израильской
[nmod-426-NOUN--компании] израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
компании-nmod
________________|______________________________________
| | | позволяющую-acl
| | | ___________|_______________________________________
| | | | отображать-xcomp
| | | | _______________________________________|_______________________
| | | | обстановку-obj машины-obl дисплей-obl
| | | | | __________|_____________ __________|______________
израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
gn foreign
[flat:foreign-5926320208798651204-PROPN--Elbit] Elbit
[flat:foreign-5926320208798651204-PROPN--Systems] Systems
[punct-445-PUNCT--,] ,
[acl-451-VERB--позволяющую] , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
позволяющую-acl
___________|_______________________________________
| отображать-xcomp
| _______________________________________|_______________________
| обстановку-obj машины-obl дисплей-obl
| | __________|_____________ __________|______________
,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
[xcomp-450-VERB--отображать] отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
отображать-xcomp
_______________________________________|_______________________
обстановку-obj машины-obl дисплей-obl
| __________|_____________ __________|______________
оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
[amod-402-ADJ--оперативную] оперативную
[obj-434-NOUN--обстановку] оперативную обстановку
обстановку-obj
|
оперативную-amod
[case-8110129090154140942-ADP--вокруг] вокруг
[amod-402-ADJ--боевой] боевой
[obl-435-NOUN--машины] вокруг боевой машины
машины-obl
__________|___________
вокруг-case боевой-amod
[case-8110129090154140942-ADP--на] на
[amod-402-ADJ--нашлемный] нашлемный
[obl-435-NOUN--дисплей] на нашлемный дисплей танкиста
дисплей-obl
__________|______________
на-case нашлемный-amod танкиста-nmod
[nmod-426-NOUN--танкиста] танкиста
[punct-445-PUNCT--.] .
====================
Сухопутные войска
Великобритании
2019 года
из модификаций
танка Challenger 2 ( Streetfighter II )
боевой подготовки
в городских условиях Коупхилл - Даун
на военном полигоне
в Солсбери ( Англия )
Jane's Defence Weekly
Британский журнал
городской танк Streetfighter II
систему
инфракрасных и электрооптических датчиков IronVision
израильской компании Elbit Systems
оперативную обстановку
танкиста
танк-nsubj
______________|____________
городской-amod Streetfighter- II-nummod
flat:foreign
[flat:foreign-5926320208798651204-PROPN--Streetfighter] Streetfighter
[nummod-12837356684637874264-NUM--II] II
[ccomp-408-VERB--получил] , что городской танк Streetfighter II получил распределенную по его корпусу систему инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
получил-ccomp
___________________|______________________________________________________________
| | | систему-obj
| | | _____________|______________________________
| | | | датчиков-nmod
| | | | ________________|_________________________________
| | | | | | компании-nmod
| | | | | | ________________|______________________________________
| | | | | | | | | позволяющую-acl
| | | | | | | | | ___________|_______________________________________
| | | распределенную- | инфракрасных- | | | | отображать-xcomp
| | | acl | amod | | | | |
| | | | | | | | | | _______________________________________|_______________________
| | танк-nsubj корпусу-obl | электрооптически | | | | обстановку-obj машины-obl дисплей-obl
| | | | | х-conj | | | | | | |
| | ______________|____________ ___________|_____________ | | | | | | | __________|_____________ __________|______________
,-punct что-mark городской-amod Streetfighter- II-nummod по-case его-det IronVision-flat: и-cc израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
flat:foreign foreign gn foreign
[acl-451-VERB--распределенную] распределенную по его корпусу
распределенную-
acl
|
корпусу-obl
___________|___________
по-case его-det
[case-8110129090154140942-ADP--по] по
[det-415-DET--его] его
[obl-435-NOUN--корпусу] по его корпусу
корпусу-obl
_________|_________
по-case его-det
[obj-434-NOUN--систему] распределенную по его корпусу систему инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
систему-obj
_____________|______________________________
| датчиков-nmod
| ________________|_________________________________
| | | компании-nmod
| | | ________________|______________________________________
| | | | | | позволяющую-acl
| | | | | | ___________|_______________________________________
распределенную- | инфракрасных- | | | | отображать-xcomp
acl | amod | | | | |
| | | | | | | _______________________________________|_______________________
корпусу-obl | электрооптически | | | | обстановку-obj машины-obl дисплей-obl
| | х-conj | | | | | | |
___________|_____________ | | | | | | | __________|_____________ __________|______________
по-case его-det IronVision-flat: и-cc израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
foreign gn foreign
[amod-402-ADJ--инфракрасных] инфракрасных и электрооптических
инфракрасных-
amod
|
электрооптически
х-conj
|
и-cc
[cc-407-CCONJ--и] и
[conj-410-ADJ--электрооптических] и электрооптических
электрооптически
х-conj
|
и-cc
[nmod-426-NOUN--датчиков] инфракрасных и электрооптических датчиков IronVision израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
датчиков-nmod
________________|_________________________________
| | компании-nmod
| | ________________|______________________________________
| | | | | позволяющую-acl
| | | | | ___________|_______________________________________
| инфракрасных- | | | | отображать-xcomp
| amod | | | | |
| | | | | | _______________________________________|_______________________
| электрооптически | | | | обстановку-obj машины-obl дисплей-obl
| х-conj | | | | | | |
| | | | | | | __________|_____________ __________|______________
IronVision-flat: и-cc израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
foreign gn foreign
[flat:foreign-5926320208798651204-PROPN--IronVision] IronVision
[amod-402-ADJ--израильской] израильской
[nmod-426-NOUN--компании] израильской компании Elbit Systems , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
компании-nmod
________________|______________________________________
| | | позволяющую-acl
| | | ___________|_______________________________________
| | | | отображать-xcomp
| | | | _______________________________________|_______________________
| | | | обстановку-obj машины-obl дисплей-obl
| | | | | __________|_____________ __________|______________
израильской-amod Elbit-flat:forei Systems-flat: ,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
gn foreign
[flat:foreign-5926320208798651204-PROPN--Elbit] Elbit
[flat:foreign-5926320208798651204-PROPN--Systems] Systems
[punct-445-PUNCT--,] ,
[acl-451-VERB--позволяющую] , позволяющую отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
позволяющую-acl
___________|_______________________________________
| отображать-xcomp
| _______________________________________|_______________________
| обстановку-obj машины-obl дисплей-obl
| | __________|_____________ __________|______________
,-punct оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
[xcomp-450-VERB--отображать] отображать оперативную обстановку вокруг боевой машины на нашлемный дисплей танкиста
отображать-xcomp
_______________________________________|_______________________
обстановку-obj машины-obl дисплей-obl
| __________|_____________ __________|______________
оперативную-amod вокруг-case боевой-amod на-case нашлемный-amod танкиста-nmod
[amod-402-ADJ--оперативную] оперативную
[obj-434-NOUN--обстановку] оперативную обстановку
обстановку-obj
|
оперативную-amod
[case-8110129090154140942-ADP--вокруг] вокруг
[amod-402-ADJ--боевой] боевой
[obl-435-NOUN--машины] вокруг боевой машины
машины-obl
__________|___________
вокруг-case боевой-amod
[case-8110129090154140942-ADP--на] на
[amod-402-ADJ--нашлемный] нашлемный
[obl-435-NOUN--дисплей] на нашлемный дисплей танкиста
дисплей-obl
__________|______________
на-case нашлемный-amod танкиста-nmod
[nmod-426-NOUN--танкиста] танкиста
[punct-445-PUNCT--.] .
====================
Сухопутные войска
Великобритании
2019 года
из модификаций
танка Challenger 2 ( Streetfighter II )
боевой подготовки
в городских условиях Коупхилл - Даун
на военном полигоне
в Солсбери ( Англия )
Jane's Defence Weekly
Британский журнал
городской танк Streetfighter II
систему
инфракрасных и электрооптических датчиков IronVision
израильской компании Elbit Systems
оперативную обстановку
танкиста
|
summary worksheet/.ipynb_checkpoints/Pandas_Intro-checkpoint.ipynb | ###Markdown
candybars = pd.read_excel('foods.xlsx', sheet_name='chocolate')candybars
###Code
import pandas as pd

cereal_data = pd.read_csv('cereal.csv', index_col="name") # To use this in Jupyter Lab, the data file must be in the same directory as the .ipynb
cereal_data.head() # returns the first entries in the data frame according to the argument
cereal_data.shape # (number of rows, number of columns)
###Output
_____no_output_____ |
Pandas Recipes 1.0 Pandas Foundations.ipynb | ###Markdown
Drop NA
###Code
director_noNA = director.dropna()
print("Size of Original Data ",director.size)
print("Count of Original Data ",director.count())
print("Data has NaN's director.hasnans ",director.hasnans)
print()
print("Size of Data after removing NAs ",director_noNA.size)
print("Count of Data after removing NAs ",director_noNA.count())
print("Data has NaN's director.hasnans ",director_noNA.hasnans)
###Output
Size of Original Data 4916
Count of Original Data 4814
Data has NaN's director.hasnans True
Size of Data after removing NAs 4814
Count of Data after removing NAs 4814
Data has NaN's director.hasnans False
###Markdown
Replace NA's
###Code
director_ReplaceNAs = director.fillna(0)
print("Size of Original Data ",director.size)
print("Count of Original Data ",director.count())
print("Data has NaN's director.hasnans ",director.hasnans)
print()
print("Count of Data after removing NAs ",director_ReplaceNAs.count())
print("Size of Data after removing NAs ",director_ReplaceNAs.size)
print("Data has NaN's director_noNA.hasnans ",director_noNA.hasnans)
###Output
Size of Original Data 4916
Count of Original Data 4814
Data has NaN's director.hasnans True
Count of Data after replacing NAs 4916
Size of Data after replacing NAs 4916
Data has NaN's director_ReplaceNAs.hasnans False
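As a quick illustration of the difference between the two approaches, here is a minimal, self-contained sketch on a small hypothetical Series (not the `director` column used above):
```
import numpy as np
import pandas as pd

# Toy Series with one missing value (hypothetical data)
s = pd.Series(["Spielberg", np.nan, "Nolan"])

print(s.size, s.count(), s.hasnans)           # 3 2 True
print(s.dropna().size, s.dropna().count())    # 2 2  -> dropna removes the row entirely
print(s.fillna(0).size, s.fillna(0).count())  # 3 3  -> fillna keeps the row, NaN replaced by 0
```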
|
AV/Flights-Copy1.ipynb | ###Markdown
Cheating Kaggle
###Code
!dir {PATH}
import numpy as np
import pandas as pd

# PATH is assumed to be defined earlier in the notebook
sub_df1 = pd.read_csv(f'{PATH}cat_encoded_flights_ods.csv')
sub_df2 = pd.read_csv(f'{PATH}cat_encoded_flights_ods.csv')  # note: same file as sub_df1
preds = (sub_df1.dep_delayed_15min * 1.3 + sub_df2.dep_delayed_15min * 2.5) / 4  # weighted blend
max(preds)
preds1 = np.where(preds > .80, .84, preds)  # cap very confident predictions (computed but unused below)
sub_df = sub_df1.copy()  # assumption: reuse the first submission frame as the output frame
sub_df['dep_delayed_15min'] = preds
sub_df.to_csv(f'{PATH}modified_flights.csv', index=None)
###Output
_____no_output_____ |
how-to-use-azureml/explain-model/explain-run-history-sklearn-classification/explain-run-history-sklearn-classification.ipynb | ###Markdown
Breast cancer diagnosis classification with scikit-learn (save model explanations via AML Run History) Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Explain a model with the AML explain-model package1. Train a SVM classification model using Scikit-learn2. Run 'explain_model' with AML Run History, which leverages run history service to store and manage the explanation data
###Code
from sklearn.datasets import load_breast_cancer
from sklearn import svm
from azureml.explain.model.tabular_explainer import TabularExplainer
###Output
_____no_output_____
###Markdown
1. Run model explainer locally with full data Load the breast cancer diagnosis data
###Code
breast_cancer_data = load_breast_cancer()
classes = breast_cancer_data.target_names.tolist()
# Split data into train and test
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(breast_cancer_data.data, breast_cancer_data.target, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Train a SVM classification model, which you want to explain
###Code
clf = svm.SVC(gamma=0.001, C=100., probability=True)
model = clf.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Explain predictions on your local machine
###Code
tabular_explainer = TabularExplainer(model, x_train, features=breast_cancer_data.feature_names, classes=classes)
###Output
_____no_output_____
###Markdown
Explain overall model predictions (global explanation)
###Code
# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data
# x_train can be passed as well, but with more examples explanations will take longer although they may be more accurate
global_explanation = tabular_explainer.explain_global(x_test)
###Output
_____no_output_____
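Before moving on to Run History, it can be handy to inspect the global explanation locally. A small sketch, assuming the locally computed `global_explanation` exposes the same ranking helpers that are used on the downloaded explanation in the next section:
```
# Quick local look at the most important features (top 4), under the assumption above
local_ranked_values = global_explanation.get_ranked_global_values()
local_ranked_names = global_explanation.get_ranked_global_names()
print(dict(zip(local_ranked_names[:4], local_ranked_values[:4])))
```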
###Markdown
2. Save Model Explanation With AML Run History
###Code
import azureml.core
from azureml.core import Workspace, Experiment, Run
from azureml.explain.model.tabular_explainer import TabularExplainer
from azureml.contrib.explain.model.explanation.explanation_client import ExplanationClient
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
experiment_name = 'explain_model'
experiment = Experiment(ws, experiment_name)
run = experiment.start_logging()
client = ExplanationClient.from_run(run)
# Uploading model explanation data for storage or visualization in webUX
# The explanation can then be downloaded on any compute
client.upload_model_explanation(global_explanation)
# Get model explanation data
explanation = client.download_model_explanation()
local_importance_values = explanation.local_importance_values
expected_values = explanation.expected_values
# Get the top k (e.g., 4) most important features with their importance values
explanation = client.download_model_explanation(top_k=4)
global_importance_values = explanation.get_ranked_global_values()
global_importance_names = explanation.get_ranked_global_names()
per_class_names = explanation.get_ranked_per_class_names()[0]
per_class_values = explanation.get_ranked_per_class_values()[0]
print('per class feature importance values: {}'.format(per_class_values))
print('per class feature importance names: {}'.format(per_class_names))
dict(zip(per_class_names, per_class_values))
###Output
_____no_output_____ |
hw_rel_ext.ipynb | ###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
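Relating to the note above about PyTorch-style models: a minimal sketch (not part of the assignment scaffolding) of a featurizer that just returns the middle-span tokens for both mention orders, for use with `vectorize=False`; the model paired with it would then be responsible for mapping token lists to vectors, which is left open here.
```
def middle_token_list_featurizer(kbt, corpus):
    """Sketch: collect all middle-span tokens, in both mention orders.
    Intended for `vectorize=False`, so a downstream (e.g. embedding-based)
    model handles the conversion of tokens to vectors."""
    tokens = []
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        tokens.extend(ex.middle.split())
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        tokens.extend(ex.middle.split())
    return tokens
```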
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
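For reference, one possible way to complete the directional featurizer above (a sketch, not an official solution; the `_sketch` suffix is only to avoid clashing with the stub):
```
def directional_bag_of_words_featurizer_sketch(kbt, corpus, feature_counter):
    # Forward (subject ... object) middles get the "_SO" suffix
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word + "_SO"] += 1
    # Reverse (object ... subject) middles get the "_OS" suffix
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word + "_OS"] += 1
    return feature_counter
```
Keeping the two orders under separate keys is exactly what the test above checks for.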
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
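A sketch of one way the bigram helper described above could look (it relies on `get_tags` from the cell above; the `_sketch` suffix is only to avoid clashing with the stub):
```
def get_tag_bigrams_sketch(s):
    """Sketch: POS bigrams over the middle span, padded with <s> and </s>."""
    tags = ["<s>"] + get_tags(s) + ["</s>"]
    return ["{} {}".format(left, right) for left, right in zip(tags[:-1], tags[1:])]
```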
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
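A sketch of one way the synset helper could be completed (here restricted to words whose tag maps to a WordNet POS; the `_sketch` suffix is only to avoid clashing with the stub):
```
def get_synsets_sketch(s):
    """Sketch: stringified WordNet synsets for each word/POS pair in `s`."""
    wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
    synsets = []
    for word, tag in wt:
        pos = convert_tag(tag)
        if pos is not None:
            synsets.extend(str(ss) for ss in wn.synsets(word, pos=pos))
    return synsets
```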
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
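As a tiny, concrete illustration of the "length of the middle" idea from the list above (purely optional; the name and the bucket cap are arbitrary choices):
```
def middle_length_featurizer(kbt, corpus, feature_counter):
    # One feature per (bucketed) middle-span length, for both mention orders;
    # lengths are capped at 10 to keep the feature space small.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        feature_counter["middle_len={}".format(min(len(ex.middle.split()), 10))] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        feature_counter["middle_len={}".format(min(len(ex.middle.split()), 10))] += 1
    return feature_counter
```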
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# STOP COMMENT: Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baseline](Baseline)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baseline
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
    elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
PATH_TO_DATA = '/Users/pierrejaumier/Data/cs224u'
rel_ext_data_home = os.path.join(PATH_TO_DATA, 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# STOP COMMENT: Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.827 0.365 0.660 340 5716
author 0.799 0.548 0.732 509 5885
capital 0.654 0.179 0.427 95 5471
contains 0.796 0.599 0.747 3904 9280
film_performance 0.784 0.564 0.727 766 6142
founders 0.802 0.395 0.665 380 5756
genre 0.628 0.159 0.395 170 5546
has_sibling 0.920 0.230 0.576 499 5875
has_spouse 0.885 0.323 0.657 594 5970
is_a 0.676 0.231 0.489 497 5873
nationality 0.584 0.173 0.396 301 5677
parents 0.862 0.519 0.761 312 5688
place_of_birth 0.704 0.215 0.484 233 5609
place_of_death 0.472 0.107 0.281 159 5535
profession 0.645 0.162 0.404 247 5623
worked_at 0.677 0.269 0.519 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.732 0.315 0.557 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.552 Córdoba
2.476 Valais
2.433 Taluks
..... .....
-1.151 Caribbean
-1.408 Spain
-1.419 America
Highest and lowest feature weights for relation author:
2.559 author
2.539 novel
2.300 by
..... .....
-2.010 or
-2.071 much
-3.069 1774
Highest and lowest feature weights for relation capital:
3.119 capital
1.969 city
1.661 especially
..... .....
-1.192 and
-1.290 Westminster
-1.589 borough
Highest and lowest feature weights for relation contains:
2.259 suburb
2.148 attended
2.126 lies
..... .....
-2.202 band
-2.286 who
-2.320 recorded
Highest and lowest feature weights for relation film_performance:
4.160 starring
4.148 opposite
3.851 co-starring
..... .....
-2.000 Wonderland
-2.068 comedian
-2.273 Khakee
Highest and lowest feature weights for relation founders:
4.031 founded
3.791 co-founder
3.515 founder
..... .....
-1.535 band
-1.751 philosopher
-1.827 top
Highest and lowest feature weights for relation genre:
3.034 series
2.635 album
2.569 movie
..... .....
-1.449 and
-1.454 ;
-1.767 at
Highest and lowest feature weights for relation has_sibling:
5.009 brother
3.761 sister
2.905 Marlon
..... .....
-1.364 from
-1.495 President
-1.524 he
Highest and lowest feature weights for relation has_spouse:
5.390 wife
4.507 husband
4.482 widow
..... .....
-1.364 on
-1.375 IV
-1.485 engineer
Highest and lowest feature weights for relation is_a:
3.087 family
2.718 genus
2.506 vocalist
..... .....
-1.557 on
-1.563 now
-1.930 mostly
Highest and lowest feature weights for relation nationality:
2.709 born
1.934 caliph
1.907 July
..... .....
-1.502 state
-1.719 American
-1.746 U.S.
Highest and lowest feature weights for relation parents:
5.152 son
4.581 daughter
4.065 father
..... .....
-1.533 Oscar
-1.916 played
-1.975 winner
Highest and lowest feature weights for relation place_of_birth:
3.934 born
3.317 birthplace
2.478 mayor
..... .....
-1.469 and
-1.492 Westminster
-1.534 province
Highest and lowest feature weights for relation place_of_death:
2.410 died
1.825 where
1.677 rebuilt
..... .....
-1.208 and
-1.242 that
-1.987 Westminster
Highest and lowest feature weights for relation profession:
2.933
2.514 vocalist
2.345 American
..... .....
-1.222 in
-1.313 Texas
-2.148 on
Highest and lowest feature weights for relation worked_at:
3.356 professor
3.147 CEO
2.794 employee
..... .....
-1.240 end
-1.507 war
-1.674 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.869 0.450 0.733 340 5716
author 0.848 0.440 0.716 509 5885
capital 0.579 0.232 0.445 95 5471
contains 0.650 0.407 0.581 3904 9280
film_performance 0.825 0.332 0.636 766 6142
founders 0.827 0.226 0.540 380 5756
genre 0.545 0.071 0.233 170 5546
has_sibling 0.862 0.238 0.566 499 5875
has_spouse 0.904 0.365 0.698 594 5970
is_a 0.731 0.153 0.416 497 5873
nationality 0.637 0.193 0.436 301 5677
parents 0.903 0.417 0.732 312 5688
place_of_birth 0.628 0.210 0.450 233 5609
place_of_death 0.462 0.113 0.286 159 5535
profession 0.644 0.154 0.393 247 5623
worked_at 0.699 0.269 0.529 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.726 0.267 0.524 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
model_factory_svc = lambda: SVC(kernel='linear')
return rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory_svc,
verbose=True)
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ and 1 == 2:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
haystack = word + subject_object_suffix
feature_counter[haystack] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
haystack = word + object_subject_suffix
feature_counter[haystack] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer],
model_factory=model_factory,
verbose=True)
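# Part 3 of the question: the number of feature names in the fitted
# vectorizer. This assumes `rel_ext.experiment` returns the fitted sklearn
# DictVectorizer under the 'vectorizer' key; DictVectorizer stores its
# learned feature names on the `feature_names_` attribute.
print('Number of feature names:', len(results['vectorizer'].feature_names_))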
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in get_tag_bigrams(ex.middle_POS):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in get_tag_bigrams(ex.middle_POS):
feature_counter[word] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
    tags = get_tags(s)
    if not tags:
        return []
    # Pad with the start and end symbols so that every tag, including the
    # first and last (and the single-tag case), gets both of its bigrams.
    padded = [start_symbol] + tags + [end_symbol]
    return [' '.join(bigram) for bigram in zip(padded, padded[1:])]
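# Illustration (not executed): on the prompt's example, get_tag_bigrams of
# 'The/DT dog/N napped/V' yields ['<s> DT', 'DT N', 'N V', 'V </s>'].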
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
model_factory=model_factory,
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
import nltk
nltk.download('wordnet')
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in get_synsets(ex.middle_POS):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in get_synsets(ex.middle_POS):
feature_counter[word] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
return_list = []
##### YOUR CODE HERE
for i in range(len(wt)):
text = wt[i][0]
tag = convert_tag(wt[i][1])
for x in wn.synsets(text, pos=tag):
return_list.append(str(x))
return return_list
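# Illustration (not executed): for a lemma such as 'dog/NN', `get_synsets`
# stringifies entries like "Synset('dog.n.01')", using `convert_tag` to map
# the Penn tag to a WordNet POS before calling `wn.synsets`.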
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
model_factory=model_factory,
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
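One of the ideas above, a feature indicating the length of the middle span, can be sketched as a featurizer in the same style as `simple_bag_of_words_featurizer`. This is only a minimal illustration, assuming `corpus.get_examples_for_entities` and `ex.middle` behave as in the baseline code above; it is not part of the submitted system in the next cell.

```python
def middle_length_featurizer(kbt, corpus, feature_counter):
    # Count a bucketed token-length feature for each middle span,
    # in both entity orders (lengths above 10 share one bucket).
    for sbj, obj in ((kbt.sbj, kbt.obj), (kbt.obj, kbt.sbj)):
        for ex in corpus.get_examples_for_entities(sbj, obj):
            n = len(ex.middle.split())
            feature_counter["middle_len={}".format(min(n, 10))] += 1
    return feature_counter
```

A featurizer like this could simply be appended to the `featurizers` list passed to `rel_ext.experiment`.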
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
import sklearn.linear_model
import sklearn.svm
from sklearn.neighbors import NearestCentroid
SGD_factory = lambda: sklearn.linear_model.SGDClassifier(loss='hinge')
challenger_factory = lambda: NearestCentroid() #sklearn.svm.LinearSVC(loss='hinge')
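# `challenger_factory` appears to be an alternative tried during development;
# it is not used below, and only `SGD_factory` is passed to `rel_ext.experiment`.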
#SGD_factory, default loss function: 0.611
#SGD_factory, log loss function: 0.581
#SGD_factory, modified_huber loss function: 0.588
#SGD_factory, squared_hinge loss function: 0.456
#SGD_factory, perceptron loss function: 0.502
#SGD_factory, squared_loss loss function: 0.109
#SGD_factory, huber loss function: 0.397
#SGD_factory, epsilon_insensitive loss function: 0.5
#SGD_factory, squared_epsilon_insensitive loss function: 0.089
def directional_bag_of_words_featurizer_original_system(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
haystack = word + subject_object_suffix
feature_counter[haystack] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
haystack = word + object_subject_suffix
feature_counter[haystack] += 1
return feature_counter
if 'IS_GRADESCOPE_ENV' not in os.environ:
    results = rel_ext.experiment(
        splits,
        train_split='train',
        test_split='dev',
        featurizers=[directional_bag_of_words_featurizer_original_system],
        model_factory=SGD_factory,
        verbose=True)
# STOP COMMENT: Please do not remove this comment.
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.820 0.403 0.680 340 5716
author 0.808 0.676 0.777 509 5885
capital 0.556 0.263 0.455 95 5471
contains 0.843 0.623 0.788 3904 9280
film_performance 0.829 0.689 0.797 766 6142
founders 0.807 0.450 0.696 380 5756
genre 0.697 0.271 0.530 170 5546
has_sibling 0.922 0.238 0.586 499 5875
has_spouse 0.905 0.337 0.677 594 5970
is_a 0.714 0.276 0.542 497 5873
nationality 0.718 0.186 0.457 301 5677
parents 0.875 0.561 0.787 312 5688
place_of_birth 0.763 0.249 0.540 233 5609
place_of_death 0.568 0.157 0.373 159 5535
profession 0.809 0.291 0.597 247 5623
worked_at 0.723 0.281 0.550 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.772 0.372 0.614 9248 95264
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
    # Compare against lowercase 'j': `t[0].lower()` can never equal 'J'.
    elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
    pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
    pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word] += 1
    return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
    reps = []
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split():
            rep = glove_lookup.get(word)
            if rep is not None:
                reps.append(rep)
    # A random representation of the right dimensionality if the
    # example happens not to overlap with GloVe's vocabulary:
    if len(reps) == 0:
        dim = len(next(iter(glove_lookup.values())))
        return utils.randvec(n=dim)
    else:
        return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
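For orientation, here is a minimal sketch of such a wrapper, assuming the `splits`, `featurizers`, and `rel_ext.experiment` objects defined earlier in this notebook; the name `run_svm_model_factory_sketch` is illustrative, and this is not presented as the required solution.

```
from sklearn.svm import SVC

def run_svm_model_factory_sketch():
    # Sketch: swap the LogisticRegression-based factory for a linear SVC,
    # keeping the other `rel_ext.experiment` arguments at the values used above.
    return rel_ext.experiment(
        splits,
        train_split='train',
        test_split='dev',
        featurizers=featurizers,
        model_factory=lambda: SVC(kernel='linear'),
        verbose=True)
```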
###Code
def run_svm_model_factory():
    ##### YOUR CODE HERE

def test_run_svm_model_factory(run_svm_model_factory):
    results = run_svm_model_factory()
    assert 'featurizers' in results, \
        "The return value of `run_svm_model_factory` seems not to be correct"
    # Check one of the models to make sure it's an SVC:
    assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
        "It looks like the model factory wasn't set to use an SVC."

if 'IS_GRADESCOPE_ENV' not in os.environ:
    test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
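Before the official stub below, here is a rough sketch of the idea, reusing the corpus API from `simple_bag_of_words_featurizer` above; the `_sketch` name is illustrative and this is not necessarily the intended solution.

```
def directional_bag_of_words_featurizer_sketch(kbt, corpus, feature_counter):
    # Sketch: mark each middle word with the order of the entity pair it came from.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word + "_SO"] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word + "_OS"] += 1
    return feature_counter
```

For part 3, if the dictionary returned by `rel_ext.experiment` exposes the fitted vectorizer, the number of feature names can be read off from it with standard sklearn vectorizer attributes (for example, the length of its feature-name list).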
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    # Append these to the end of the keys you add/access in
    # `feature_counter` to distinguish the two orders. You'll
    # need to use exactly these strings in order to pass
    # `test_directional_bag_of_words_featurizer`.
    subject_object_suffix = "_SO"
    object_subject_suffix = "_OS"
    ##### YOUR CODE HERE
    return feature_counter

# Call to `rel_ext.experiment`:
##### YOUR CODE HERE

def test_directional_bag_of_words_featurizer(corpus):
    from collections import defaultdict
    kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
    feature_counter = defaultdict(int)
    # Make sure `feature_counter` is being updated, not reinitialized:
    feature_counter['is_OS'] += 5
    feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
    expected = defaultdict(
        int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
    assert feature_counter == expected, \
        "Expected:\n{}\nGot:\n{}".format(expected, feature_counter)

if 'IS_GRADESCOPE_ENV' not in os.environ:
    test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
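As a rough illustration, here is a sketch under the assumption that corpus examples expose a `middle_POS` field of word/POS pairs, as described above; the `_sketch` helper names are hypothetical and this is not presented as the intended solution.

```
def middle_bigram_pos_tag_featurizer_sketch(kbt, corpus, feature_counter):
    # Sketch: count POS-tag bigrams (with start/end padding) over the tagged
    # middles, in both entity orders, mirroring the word-level featurizer.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for bigram in get_tag_bigrams_sketch(ex.middle_POS):
            feature_counter[bigram] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for bigram in get_tag_bigrams_sketch(ex.middle_POS):
            feature_counter[bigram] += 1
    return feature_counter

def get_tag_bigrams_sketch(s):
    # Pad the tag sequence with "<s>" and "</s>" and join adjacent pairs.
    tags = [lem.strip().rsplit('/', 1)[1] for lem in s.strip().split(' ') if lem]
    tags = ["<s>"] + tags + ["</s>"]
    return [' '.join(pair) for pair in zip(tags, tags[1:])]
```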
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
    ##### YOUR CODE HERE
    return feature_counter

def get_tag_bigrams(s):
    """Suggested helper method for `middle_bigram_pos_tag_featurizer`.
    This should be defined so that it returns a list of str, where each
    element is a POS bigram."""
    # The values of `start_symbol` and `end_symbol` are defined
    # here so that you can use `test_middle_bigram_pos_tag_featurizer`.
    start_symbol = "<s>"
    end_symbol = "</s>"
    ##### YOUR CODE HERE

def get_tags(s):
    """Given a sequence of word/POS elements (lemmas), this function
    returns a list containing just the POS elements, in order.
    """
    return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]

def parse_lem(lem):
    """Helper method for parsing word/POS elements. It just splits
    on the rightmost / and returns (word, POS) as a tuple of str."""
    return lem.strip().rsplit('/', 1)

# Call to `rel_ext.experiment`:
##### YOUR CODE HERE

def test_middle_bigram_pos_tag_featurizer(corpus):
    from collections import defaultdict
    kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
    feature_counter = defaultdict(int)
    # Make sure `feature_counter` is being updated, not reinitialized:
    feature_counter['<s> VBZ'] += 5
    feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
    expected = defaultdict(
        int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
    assert feature_counter == expected, \
        "Expected:\n{}\nGot:\n{}".format(expected, feature_counter)

if 'IS_GRADESCOPE_ENV' not in os.environ:
    test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:

```
from nltk.corpus import wordnet as wn

dog = wn.synsets('dog', pos='n')

dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```

This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
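For concreteness, here is one rough sketch of this kind of featurizer (again assuming a `middle_POS` field on corpus examples; the `_sketch` names are illustrative, not the required solution).

```
from nltk.corpus import wordnet as wn

def synset_featurizer_sketch(kbt, corpus, feature_counter):
    # Sketch: count stringified WordNet synsets for the tagged middle words,
    # in both entity orders.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for ss in get_synsets_sketch(ex.middle_POS):
            feature_counter[ss] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for ss in get_synsets_sketch(ex.middle_POS):
            feature_counter[ss] += 1
    return feature_counter

def get_synsets_sketch(s):
    # Split each "word/POS" token on its rightmost slash, map the POS tag to a
    # WordNet pos (same mapping as `convert_tag` below), and stringify the synsets.
    synsets = []
    for lem in s.strip().split(' '):
        if not lem or '/' not in lem:
            continue
        word, tag = lem.strip().rsplit('/', 1)
        first = tag[:1].lower()
        pos = first if first in {'n', 'v', 'r'} else ('a' if first == 'j' else None)
        synsets.extend(str(ss) for ss in wn.synsets(word, pos=pos))
    return synsets
```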
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
    ##### YOUR CODE HERE
    return feature_counter

def get_synsets(s):
    """Suggested helper method for `synset_featurizer`. This should
    be completed so that it returns a list of stringified Synsets
    associated with elements of `s`.
    """
    # Use `parse_lem` from the previous question to get a list of
    # (word, POS) pairs. Remember to convert the POS strings.
    wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
    ##### YOUR CODE HERE

def convert_tag(t):
    """Converts tags so that they can be used by WordNet:
    | Tag begins with | WordNet tag |
    |-----------------|-------------|
    | `N`             | `n`         |
    | `V`             | `v`         |
    | `J`             | `a`         |
    | `R`             | `r`         |
    | Otherwise       | `None`      |
    """
    if t[0].lower() in {'n', 'v', 'r'}:
        return t[0].lower()
    elif t[0].lower() == 'j':
        return 'a'
    else:
        return None

# Call to `rel_ext.experiment`:
##### YOUR CODE HERE

def test_synset_featurizer(corpus):
    from collections import defaultdict
    kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
    feature_counter = defaultdict(int)
    # Make sure `feature_counter` is being updated, not reinitialized:
    feature_counter["Synset('be.v.01')"] += 5
    feature_counter = synset_featurizer(kbt, corpus, feature_counter)
    # The full return values for this tend to be long, so we just
    # test a few examples to avoid cluttering up this notebook.
    test_cases = {
        "Synset('be.v.01')": 6,
        "Synset('embody.v.02')": 1
    }
    for ss, expected in test_cases.items():
        result = feature_counter[ss]
        assert result == expected, \
            "Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)

if 'IS_GRADESCOPE_ENV' not in os.environ:
    test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
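As one small, concrete illustration of the ideas above (a sketch only; the name and bucketing scheme are arbitrary), a featurizer capturing the length of the middle span could look like this and be combined with others via the `featurizers` list passed to `rel_ext.experiment`:

```
def middle_length_featurizer_sketch(kbt, corpus, feature_counter):
    # Sketch: bucket the number of middle tokens for each supporting example,
    # capping the bucket at 10 so very long middles share one feature.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        feature_counter["middle_len={}".format(min(len(ex.middle.split()), 10))] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        feature_counter["middle_len={}".format(min(len(ex.middle.split()), 10))] += 1
    return feature_counter
```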
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
    pass
# STOP COMMENT: Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
    pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
    pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
# Inspect a sample corpus example and a few 'adjoins' KB triples:
repr(corpus.examples[0])
kb.get_triples_for_relation("adjoins")[0:5]
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions The simple_bag_of_words_featurizer below is a function that is passed to the experiment method. The experiment method calls the train-models method, which in turn calls the featurizing step, using this featurizer as an input that determines how the examples are featurized. Note that the featurizer's arguments are: a KB triple, the corpus, and a feature counter. For a given triple (relation, subject, object), the simple featurizer looks up all the examples in the corpus (in both entity orders). Each matched corpus example's middle span is split into words, and those words are added to the feature-counter dictionary counts.
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word] += 1
    return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.861 0.365 0.677 340 5716
author 0.799 0.540 0.729 509 5885
capital 0.485 0.168 0.352 95 5471
contains 0.788 0.604 0.743 3904 9280
film_performance 0.766 0.569 0.717 766 6142
founders 0.819 0.405 0.680 380 5756
genre 0.500 0.147 0.338 170 5546
has_sibling 0.878 0.244 0.578 499 5875
has_spouse 0.915 0.325 0.671 594 5970
is_a 0.744 0.233 0.517 497 5873
nationality 0.593 0.179 0.406 301 5677
parents 0.869 0.532 0.771 312 5688
place_of_birth 0.685 0.215 0.476 233 5609
place_of_death 0.442 0.119 0.287 159 5535
profession 0.671 0.206 0.463 247 5623
worked_at 0.713 0.256 0.525 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.720 0.319 0.558 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
# Code to check for examples of specific relations:
for i in kb.get_triples_for_relation('adjoins'):
    for example in corpus.get_examples_for_entities(i.sbj, i.obj)[0:2]:
        print(example.mention_1 + ' ' + example.middle + ' ' + example.mention_2)
###Output
France , Sweden , Spain
France nor Spain
Thailand , Vietnam , and Laos
Thailand , tomoi from Malaysia , muay Lao from Laos
Alberta and the Northwest Territories
Alberta , the Northwest Territories
Kilkenny , Laois
Tianjin and the provinces of Hebei
Bavaria , and Fahrzeugfabrik Eisenach in Thuringia
Bavaria , and Fahrzeugfabrik Eisenach in Thuringia
Hispaniola and Cuba
Hispaniola , Puerto Rico and Cuba
Libya , Egypt
Libya and Egypt
Jordan , Kuwait , Saudi Arabia
Jordan , and Saudi Arabia
Montana , bordering the Canadian provinces of Alberta
Montana and Alberta
East River . In 1874 , the western portion of the present Bronx County
Honduras , El Salvador , Nicaragua
Honduras , Mexico , Nicaragua
Haryana , Punjab and Rajasthan
Haryana and Rajasthan
Lambeth , Southwark
Lambeth , Southwark
Canada , United States of America
Canada , Germany , and the United States of America
Salta and Jujuy
Salta and Jujuy
France and Belgium
France , Belgium
Afghanistan , Tajikistan
Afghanistan , Tajikistan
North , West and Central Africa
North , West and Central Africa
Spain , he completed four voyages across the Atlantic Ocean
Spain from Morocco , and the Atlantic Ocean
Oklahoma ( 1902 ) , and New Mexico
Oklahoma and New Mexico
France and Germany
France , Germany
Montenegro to the west ; its border with Albania
Montenegro to the west ; additionally , it borders Albania
Northern Territory and two in Western Australia
Northern Territory or Western Australia
Golden Gate Bridge , San Francisco Bay and the Pacific Ocean
Sierra Leone . It is a major port city in the Atlantic Ocean
Australia , Papua New Guinea
Australia , Papua New Guinea
Vermont , New Hampshire
Vermont , and New Hampshire
Erie , and borders the State of Michigan
Lake Erie , and borders the State of Michigan
Narayani and Janakpur . Kathmandu is located in the Bagmati Zone
Narayani and Janakpur . Kathmandu is located in the Bagmati Zone
Los Angeles County into this area of San Bernardino County
Iraq , and in varying degrees in Iran ( Persia )
Iraq , and Iran
Uganda , Tanzania
Uganda , Tanzania
Punjab ) and Ajmer ( Rajasthan
Punjab , Rajasthan
Moldova , Romania
Moldova , Romania
Djibouti , Eritrea
Djibouti , Ethiopia , Eritrea
Alabama , Florida , Georgia
Lithuania and Poland
Lithuania , Latvia and the northeastern Suwalki region of Poland
Chennai , Kanchipuram
Chennai , Tiruvallur and Kanchipuram
Democratic Republic of the Congo and the Central African Republic
Democratic Republic of the Congo and the Central African Republic
Tennessee , Arkansas , and Missouri
Tennessee , Missouri
Bay , then up the Golden Gate
Bay , then up the Golden Gate
United States Virgin Islands and the British Virgin Islands
United States Virgin Islands and the British Virgin Islands
Pacific Ocean from Mexico 's Baja California peninsula
Alabama , Georgia , and Tennessee
Alabama , Tennessee
Gdynia , Poland in 1951 , raised in nearby Sopot
United Arab Emirates , and Oman
United Arab Emirates and Musandam , an exclave of Oman
Spain and Portugal
Spain , northern Portugal
Islington , Haringey and Hackney
London Borough of Islington and the London Borough of Hackney
Democratic Republic of the Congo ( DRC ) and Uganda
Democratic Republic of the Congo to the west , Uganda
Sonoma and Marin
Albania , Russia , the Republic of Macedonia
Albania and Republic of Macedonia
Friar Park home in Henley-on-Thames
Friar Park in Henley-on-Thames
Vanuatu to the west , France 's New Caledonia
Vanuatu and New Caledonia
contiguous United States , Alaska , Canada
Overijssel ( in 1986 to Flevoland
Overijssel ( in 1986 to Flevoland
Waterford , Wexford
Arctic Ocean . On the east it is connected with the Atlantic Ocean
Arctic and Atlantic Oceans
Prince Edward Island with mainland New Brunswick
Prince Edward Island , and portions of New Brunswick
Oval Office , Cabinet Room
Guyana , Suriname
Guyana , Suriname
Clare , Trinity Hall
Coquitlam and Port Coquitlam , British Columbia
Coquitlam , Port Coquitlam
Prypiat which sits in the shadow of Chernobyl Nuclear Power Plant
Prypiat which sits in the shadow of Chernobyl Nuclear Power Plant
Kentucky to the west ; and by West Virginia
Kentucky , southern Virginia , and western West Virginia
Savannakhet . To the west is Thailand ; to the east , Vietnam
Saudi Arabia , Oman
Saudi Arabia , to the west by the Red Sea and to the east by Oman
Brazil , Paraguay
Brazil , and Paraguay
New Caledonia , Solomon Islands and Vanuatu
New Caledonia and Vanuatu
Queensland in 1859 . The Northern Territory
Queensland or the Northern Territory
Oklahoma panhandle , present-day Colorado
Oklahoma , Colorado
Pennine Alps in the canton of Valais . The section of the Bernese Alps
Myanmar and China
Myanmar , central China
Republic of Benin in 1960 ( pop . 8.4M , 2005 est . ) and Republic of Niger
Benin , Cote d'Ivoire and Niger
Hesse , Thuringia , and Bavaria
Hesse and Bavaria
Barnet , Haringey
Coahuila , just south of Texas
Coahuila , Nuevo Santander and Texas
NW postal code area plus HA
English Bay , Burrard Inlet
United Kingdom and Ireland
United Kingdom and the Republic of Ireland
San Francisco , and Barbara Boxer , a former congresswoman from Marin County
Golden Gate , the opening of the San Francisco Bay
Golden Gate , the opening of the San Francisco Bay
Tiruvallur , near Chennai
Tiruvallur , near Chennai
Waterford and Tipperary
Waterford and Tipperary
Kuwait , and Iraq
Kuwait and Iraq
Nova Scotia , and New Brunswick
Nova Scotia , New Brunswick
New Caledonia and Tonga , and southward to northern Australia
New Caledonia , Papua New Guinea and coastal Eastern Australia
West Punjab and Sindh
West Punjab , Sindh
Lithuania , adjacent to East Prussia
Lithuania , Latvia and East Prussia
Russia , and Poland
Russia , Italy , Poland
Pacific Northwest . British Columbia is bordered by the Pacific Ocean
Chaco , Córdoba , Corrientes
Guinea , Mauritania , Sierra Leone
Guinea converted many Temne of northern Sierra Leone
Costa Rica , and Nicaragua
Costa Rica and Nicaragua
Lower Saxony , Hesse
East and Southern Africa
east , central and southern Africa
Netherlands by a cable across the North Sea
Netherlands , along the North Sea
Tofino and the Pacific Rim National Park Reserve
Brooklyn , grew up in Staten Island
Brooklyn and Staten Island
Sindh , Pakistan , western India
Sindh . It is also spoken in India
Tatarstan , Chuvashia
Tatarstan and Chuvashia
Tennessee , and Alabama
Tennessee , Alabama
Connecticut , Massachusetts
Connecticut , and Massachusetts
Oregon and Washington
Oregon , Washington
Wisconsin and Michigan
Wisconsin and Michigan
Abruzzo and Lazio
Uganda as a refugee from the Democratic Republic of Congo
Uganda , as well as the Democratic Republic of the Congo
Serbia , Bulgaria , Bosnia and Herzegovina
Serbia , Croatia , Bosnia
Salta , Santiago del Estero
Tiberias and the Sea of Galilee
Tiberias , a dump of a city on the Sea of Galilee
southern , eastern and central Africa
Southern , Central
Putney on the River Thames
Putney on the River Thames
Montana and northeast Wyoming
Montana , and Wyoming
Faridabad ) , and the Parklands Shop-In Park ( North Delhi
New Westminster , Burnaby
San Francisco Bay , about 8 miles ( 13 km ) east of San Francisco
San Francisco Bay , 1.5 miles ( 2.4 km ) offshore from San Francisco
Poland , Czech Republic
Poland and Czech Republic
Springfield , which later evolved into West Springfield
Ivory Coast , Senegal , Burkina Faso
Brandenburg , Mecklenburg-Vorpommern , Saxony , Saxony-Anhalt
Brandenburg , Mecklenburg-Vorpommern , Saxony , Saxony-Anhalt
Australia and New Zealand
Australia , New Zealand
New York , Connecticut , Massachusetts
New York , Massachusetts
Brazil , Argentina
Brazil west to Peru and south to southern Argentina
Pennsylvania and Ontario
Pennsylvania , New York and Ontario
Australian Capital Territory , New South Wales
Australian Capital Territory / New South Wales
Connecticut to the south , New York
Connecticut , New York
Golden Gate Bridge and the Marin Headlands
Golden Gate Bridge , and drove up into the Marin Headlands
Croatia , Serbia
Croatia , Bosnia and Herzegovina , Serbia
London Borough of Barnet , western part is the London Borough of Brent
London Borough of Barnet , western part is the London Borough of Brent
Spain , France
Spain . Other records companies in France
Pennsylvania and Virginia to Philadelphia , New York
Pennsylvania , New York
Pacific coast between Washington
Croatia , Republic of Macedonia , Montenegro
Croatia , Greece , Kosovo , Macedonia , Montenegro
Minnesota ; the central/northern portions of Manitoba
Minnesota , ( USA ) and Manitoba
Manyara Region ) . Kilimanjaro ( in Kilimanjaro Region
Jodhpur and Bikaner
Jodhpur and Bikaner
Ethiopia , Sudan
Ethiopia to the north , Sudan
British Columbia , Ontario , Yukon Territories , Northwest Territories
British Columbia to the west and Saskatchewan to the east , Northwest Territories
northeastern United States and southeastern Canada
northeastern United States and southeastern Canada
Ontario , Canada Origin Winnipeg
Niger , Burkina Faso , and southern Algeria
Niger , Nigeria , Mali , Algeria
San Francisco Bay to the east , and -- across the Golden Gate
San Francisco Bay and mainly the Pacific Ocean through the Golden Gate
North Lanarkshire , Scotland , south east of Glasgow
North Lanarkshire , outwith Glasgow
Belarus and Ukraine
Belarus , Russia and Ukraine
Montenegro and Bosnia and Herzegovina
Montenegro , Bosnia and Herzegovina
Fiji and probably also Vanuatu
Fiji and Vanuatu
Togo , and Benin
Togo , Benin
Morocco and the Western Sahara
Morocco with Western Sahara
Oklahoma , Missouri , Kansas
Oklahoma 5 , Kansas
Albania , Greece
Albania and Greece
Buenos Aires province ( a territory separate from Buenos Aires city
Buenos Aires province and the city of Buenos Aires
Papua New Guinea which was administered by Australia
Papua New Guinea , northern and eastern Australia
Azerbaijan , Armenia , Georgia , Southern Russia
Azerbaijan and Russia
Nebraska ranked 35 and Iowa
Nebraska 26 and Iowa
Skagerrak ( named after Skagen ) and the Kattegat
Skagerrak Strait and the Kattegat
Africa , Antarctica , and the Arabian Peninsula
Africa , the Arabian Peninsula
South Sudan and Sudan
South Sudan , and Sudan
Propylaea , Parthenon
Chennai , Chengalpattu
Chennai on National Highway 45 ( NH45 ) south of Chengalpattu
Wales , Northern Ireland and England
Wales and England
Maine which is geographically within Casco Bay in the Gulf of Maine
Israel and Jordan
Israel , Jordan
Mecklenburg-Vorpommern ( in 1947 renamed : Mecklenburg ) , Brandenburg
Kentucky , Ohio , Indiana , and Illinois
Kentucky ; Bloomington , Illinois
Serbia to the northeast , Bulgaria
Serbia ( 10.3 % ) , Bulgaria
Lithuania to the east ; and the Baltic Sea and Kaliningrad Oblast
Lithuania and Russia ( Kaliningrad Oblast
Uganda , Sudan
Uganda and southern Sudan
Portofino and Santa Margherita Ligure
Portofino , caught 82 bus to Santa Margherita Ligure
Enschede , Groningen , Heerenveen , Hengelo
Port of Houston along the Houston Ship Channel
Lunda Sul and Lunda Norte
Myanmar ( formerly Burma ) , Thailand
Myanmar , and Thailand
Guatemala , Honduras
Guatemala , Honduras
Singapore ( 40 km away ) , and Johor Bahru
Abruzzo , Marche
Potsdam , MÌnster , Hannover and Berlin
Potsdam , near Berlin
Kazakhstan and Uzbekistan
Kazakhstan , ( in Lake Balkhash and Lake Alakol ) , Uzbekistan
Mali , Mauritania
Mali , Niger , Mauritania
Liechtenstein , Austria
Liechtenstein , Germany , and Austria
Brandenburg , Saxony
Brandenburg , and a small piece of Saxony
Mali , Mauritania , Niger
Mali and Niger
Lazio and Abruzzo
Lazio and Abruzzo
Queens and Brooklyn
Queens ( Queens County ) and Brooklyn
Oklahoma and Texas
Oklahoma , and Texas
Republic of Macedonia to the south , Albania
Republic of Macedonia and eastern Albania
Lake Michigan , somewhere in Wisconsin
Lake Michigan , and approximately 24 km ( 15 mi ) south of the Wisconsin
Canada 's east coast navy base and home port to the Atlantic
Canada 's east coast navy base and home port to the Atlantic
Saskatchewan ; Winnipeg
Iraq , Kuwait
Iraq , Qatar and Kuwait
Poland , Belarus , Ukraine
Poland , Lithuania , Bielorussia and the Ukraine
Burrard Inlet lying between Vancouver
Burrard Inlet and connects the City of Vancouver
Mato Grosso do Sul , but it extends into Mato Grosso
Mato Grosso do Sul but extends into Mato Grosso
Panama , Costa Rica
Panama and Costa Rica
Mozambique and Malawi
Mozambique , Zimbabwe , Malawi
Burundi , Tanzania across Lake Tanganyika
Burundi , on the north End of Lake Tanganyika
Moxico , Lunda Sul
Jordan and Israel
Jordan , Israel
Brooklyn and Queens
Brooklyn , Queens
Rajasthan and Haryana
Rajasthan to the west , Haryana
Shoreditch , and Hoxton
Shoreditch ( of which Hoxton
Chuvashia , Bashkortostan , Tatarstan
Vancouver to the banks of the Fraser River
Lake Victoria to the south , the Nandi Escarpment to the East , Uganda
Lake Victoria and beyond to Uganda
Atlantic Ocean ; North America and the Caribbean Sea
North Atlantic Ocean , on the southeast by the Caribbean Sea
University of California , San Diego ( UCSD ) stood at a busy La Jolla
South Asia ( and Southeast Asia
South Asia , [ 29 ] and Southeast Asia
Stanislaus County , Santa Clara County
minnesota , northeastern iowa
Minnesota , Iowa
North Africa , West Africa
North Africa , West Africa
Russia and Mongolia
Russia to the east , with Mongolia
CNN Center and Philips Arena
Renfrewshire , near Glasgow
Renfrewshire , near Glasgow
Ukraine and Slovakia
Ukraine . They are found also in Austria , Slovakia
Lake Tanganyika , and Zambia
Albania , Croatia , Serbia
Albania , Romania , Serbia
Brazil , French Guiana , Guyana
Brazil , Vietnam , Guyana
Albania to the west and Montenegro
Albania , we made our way into Montenegro
Westchester , Rockland
Pakistan and the state of Gujarat
Pakistan . The state borders Pakistan to the west , Gujarat
Guyana , eastern Colombia , southern Venezuela
Guyana , Venezuela
Waterford , and the neighboring part of County Cork
Waterford , and the neighboring part of County Cork
Warrick County and at least 18 were killed in Vanderburgh County
San Francisco , California , along San Francisco Bay
San Francisco , California , along San Francisco Bay
Abruzzo , Molise
Abruzzo and Molise
East Africa , the Horn of Africa , North Africa
east , west , north
East Malaysia , East Timor , Indonesia
East Malaysia , East Timor , Indonesia
Georgia , South Carolina
Georgia , South Carolina
Bexley , Bromley
Bexley , Bromley
Northern Territory , Queensland
Northern Territory / Queensland
Irish Sea between the islands of Great Britain and Ireland
Irish Sea between Great Britain and Ireland
Belarus , Latvia
Belarus , Lithuania , Latvia
Nevada and Arizona
Nevada and Arizona
New Mexico , and parts of Texas
New Mexico , Texas
Wisconsin , Minnesota
Wisconsin ; Saint Paul , Minnesota
Lombardy , Piedmont
Lombardy and eastern Piedmont
Pakistan , Iran
Pakistan , Iran
Hebei Province with the exception of neighboring Tianjin Municipality
Hebei Province ( which surrounds Beijing and Tianjin
Dominican Republic & Haiti
Dominican Republic . French is spoken in Haiti
Bolivia and Peru
Bolivia and Peru
Mato Grosso do Sul and Minas Gerais
Mato Grosso do Sul , Minas Gerais
Afghanistan , Pakistan and Iran
Afghanistan and Iran
Daly City , South San Francisco
Daly City , South San Francisco
Tasman Sea , to the south and east by the Pacific Ocean
Tasman Sea , to the south and east by the Pacific Ocean
Tanzania and a southern port of Lake Victoria
Tanzania to the south . Lake Victoria
Namibia - Angola
Namibia and Angola
North Korea and China
North Korea . 6 China
Belarus to the east ; and the Baltic Sea , Lithuania
Belarus , Lithuania
Lebanon and Syria
Lebanon , Syria
Turkmenistan , and Uzbekistan . The nations of Afghanistan
Turkmenistan to the north , Afghanistan
Nigeria , Ghana , Benin
Nigeria , Benin
Hamburg , Lower Saxony , Schleswig-Holstein
Hamburg and then head further north into Schleswig-Holstein
Liberia and neighbouring Sierra Leone
Liberia and Sierra Leone
Ireland : A small town on the coast of the Irish Sea
Vermont and later Syracuse University in New York
Vermont ( which had been disputed between New Hampshire and New York
Iowa , northeastern Nebraska and southeastern South Dakota
Iowa , South Dakota
western United States , while hhgregg remains competitive in the eastern United States
Baja California in the San Felipe Desert , and in a small area of California
Apache and Navajo
Loiret , and Yonne
Marin County , California , and is the largest island in the San Francisco Bay
Queensland and east of South Australia
Queensland and South Australia
Adriatic Sea with the Ionian Sea and Italy with Albania
Carnegie Mellon University and the University of Pittsburgh
Carnegie Mellon University , the University of Pittsburgh
Rub Al Khali , to a little oasis of date palms they call Liwa
Gurgaon , near Delhi
Gurgaon . The city is on the outskirts of Delhi
Somalia and Somali-speaking regions of Kenya and Ethiopia
Somalia to the northeast , Ethiopia
Azerbaijan and Armenia
Azerbaijan , Georgia , Armenia
Maine – New Brunswick
Maine and northwestern New Brunswick
Italy to the south and Austria
Italy , Germany , Austria
Tula or Kaluga
Shanxi , Hebei
Shanxi , Shaanxi and Hebei
Liberia , Guinea
Liberia , Guinea
Haiti and the Dominican Republic
Haiti , the Dominican Republic
Russia , and Ukraine
Russia , Slovakia , Ukraine
Glasgow , others identify more with the more rural Dunbartonshire
Jammu and planned to visit Vaishno Devi Temple
River Thames , connecting the City of London
River Thames , Isle of Dogs and the City of London
Democratic Republic of the Congo by Lake Kivu
Democratic Republic of the Congo by Lake Kivu
Belarus and Russia
Belarus , Russia
France , Andorra
France , Andorra
Iraq , Syria
Iraq while it is banned in Syria
Rwanda and partly in Mgahinga Gorilla National Park , Uganda
Rwanda , Burundi , Uganda
Slovakia , Germany and Austria
Slovakia , Germany and Austria
Manitoba , and northern and north-central Minnesota
Manitoba , and northern and north-central Minnesota
Sea of Azov , Black Sea
Sea of Azov , northwestern Black Sea
Montana and the Canadian province of British Columbia
Montana , bordering the Canadian provinces of Alberta and British Columbia
Atlantic Ocean that separates southern England from northern France
Atlantic Ocean for a while by a channel across France
Switzerland , and Austria
Switzerland to its west and by Austria
East and Central Africa
east and central Africa
Saudi Arabia , then raised in Jordan
Saudi Arabia , Israel , Jordan
River Thames between Putney
River Thames between Putney
Massachusetts , and New York
Massachusetts that the Catskills do in New York
Poland , Russia
Poland , Serbia , Russia
Eritrea to the north , Djibouti
Eritrea , Djibouti
Mongolia and Russia
Mongolia , Russia
Atlantic , Pacific , and Arctic
Atlantic Ocean , a segment of the Arctic Ocean
River Thames in the heart of the London borough of the City of Westminster
River Thames in the City of Westminster
Bab-el-Mandeb , between Yemen and Djibouti
Uzbekistan , Azerbaijan , Turkmenistan , Kazakhstan , and Kyrgyzstan
Uzbekistan , Kyrgyzstan
Mongolia and north China
Mongolia , among the Mosuo people in China
Arizona , and New Mexico , and south into Sonora
Arizona , Sonora
Israel / Palestine , Egypt
Israel to withdraw their troops from Egypt
Libya and Tunisia
Libya , Mauritania , Morocco , Tunisia
Texas . There is another band in the Mexican state of Coahuila
Texas and the Mexican states of Chihuahua , Coahuila
Atlantic ocean current that originates in the Gulf of Mexico
Atlantic Ocean , to the northwest by the Gulf of Mexico
Kazakhstan , China
Kazakhstan , Mongolia , China
Uganda , Rwanda
Uganda to the north , Rwanda
Shasta , Siskiyou
Stonnington , Melbourne including Docklands and Yarra
Florida , Georgia
Florida , Georgia
Hebei , Shandong
Hebei , Shandong
Vietnam , Laos
Vietnam and by road to Burma and Laos
Togo , [ Democratic Repuglic of Congo ] , Ghana
Togo , Benin , and by a few in Ghana
Srikakulam and Visakhapatnam . The Taluks of Vizianagaram
Srikakulam , Vizianagaram
City of London financial district , adjacent to the River Thames
City of London and north of the River Thames
Mexico . The range extends parallel to the coast of the Gulf of California
Queanbeyan , and Canberra
Queanbeyan and east of the national capital , Canberra
Atlantic Ocean , south of Ireland
Atlantic Ocean in a warship , visiting England and Ireland
Somalia , Kenya
Somalia to the east , and Kenya
Canada and Alaska . It is a threatened species in the contiguous United States
Canada and Alaska , all of the contiguous United States
Massachusetts ( e.g . Lowell and Lawrence ) and Rhode Island
Massachusetts , Rhode Island
Cambodia , Laos , Thailand
Cambodia to the south and Thailand
Cape Town , and across the Indian Ocean
Cape Town , South Africa and her eventual shipwreck in the middle of the Indian Ocean
South Asia , Central Asia
South Asians and Central Asians
pampas after which it is named , and Patagonia
pampas after which it is named , and Patagonia
Poland , Lithuania
Poland , Lithuania
Honduras , El Salvador
Honduras and El Salvador
California , Oregon
California occurs on private land . In Oregon
Washington & Vancouver Island in the province of British Columbia
Washington state and the Canadian province of British Columbia
Vizianagaram , Visakhapatnam
India , Indochina , Malaysia , and China
India with Tai Chi movements from China
Chicago . Last night I took a cruise on Lake Michigan
Chicago on Lake Michigan
Liechtenstein , which is bordered by Switzerland
Liechtenstein and Switzerland
Colombia , Panama , Peru
Colombia , Cuba , Peru
Drenthe , Overijssel
Drenthe , Lingen , Wedde , and Westerwolde the Lordship of Overijssel
London Borough of Tower Hamlets , with the far northern parts falling within the London Borough of Hackney
Nigeria and Cameroon
Nigeria , Cameroon
Oaxaca , Guerrero
Oaxaca , Guerrero
Tanzania , Burundi and the Democratic Republic of the Congo
Tanzania , Burundi and the Democratic Republic of the Congo
France and Flanders
France and Flanders
Emilia-Romagna and the republic of San Marino to the north , Tuscany
Emilia-Romagna and Tuscany
Uzbekistan , Afghanistan
Uzbekistan , are also affected . More than 80 % of Afghanistan
Ontario and the U.S. state of New York
Ontario and the U.S. state of New York
Canada , the Upper and Lower Peninsulas of Michigan
Canada , [ 10 ] and in New York , Michigan
Erlangen and Nuremberg
Erlangen , episode 6 in Offenbach , and Spear of Destiny in Nuremberg
Brazil , Bolivia
Brazil , Bolivia
Kenya with refugees mainly from Sudan
Kenya to the south , and Sudan
Sierra Leone and Liberia
Sierra Leone , Bosnia , Congo and Liberia
Atlantic oceans , including the Baltic Sea
Atlantic and Pacific Oceans as well as those of the Baltic
Kilkenny and Tipperary
Kilkenny , Tipperary
Victoria to the Mount Lofty Ranges in South Australia
Victoria , South Australia
Suriname , Guyana
Suriname , Malaysia , Guyana
Ohio and in Kentucky
Ohio , Kentucky
Feucht , near Nuremberg
Botswana is a landlocked country , surrounded by South Africa
Botswana is a landlocked country , surrounded by South Africa
East River between Manhattan
East River on the Queensboro Bridge into Manhattan
Saudi Arabia and Yemen
Saudi Arabia and Yemen
Zimbabwe ; to the east are Mozambique
Zimbabwe , Mozambique
Elbe valley south-east of Dresden
River Elbe in the city of Dresden
Zimbabwe , Namibia , Botswana
Zimbabwe , Botswana
Nevada , Oregon
Nevada , California , and Oregon
Nebraska and Kansas
Nebraska , Kansas
Botswana , Zimbabwe and Zambia
Botswana , Mozambique and Zambia
Ontario and Detroit
Ontario and Detroit
Wyoming , while Canada is to the North , and the states of Utah
Wyoming , Utah
India and the Sindh
India . The Thaheem tribe in Sindh
New York and Vermont
New York , South Carolina , and Vermont
CH ( Chester ) , L
Gulf of California or Pacific Ocean
Slovakia , the Czech Republic
Slovakia and the Czech Republic
Carmel-by-the-Sea and continuing to Big Sur
South Dakota and Wyoming
South Dakota , Texas , Washington , and Wyoming
South , and Western Asia
South , Central and Western Asia
Democratic Republic of the Congo , Rwanda , Burundi
Democratic Republic of the Congo , Burundi
Western Cape and the Northern Cape
Western and Northern
New Hampshire , and Vermont
New Hampshire , Vermont
Pennsylvania , New Jersey , Delaware
Pennsylvania , and Delaware
Kentucky , Ohio
Kentucky and Ohio
Brandenburg , Saxony-Anhalt , eastern parts of Lower Saxony
Brazil , Chile , Colombia
Brazil ; Colombia
Liberia , and the Ivory Coast
Liberia , Ivory Coast
Hebei and Inner Mongolia
Hebei and Inner Mongolia
Niger is a totally different country from Nigeria
Niger , through northern Nigeria
Guinea and Guinea-Bissau
Guinea , Guinea-Bissau
Lake Geneva , near the city of Geneva
Lake Geneva , near the city of Geneva
Mexico [ 13 ] , as well as some parts of the United States of America
Mexico , the Antilles , southeastern United States
South Dakota in the eastern side and Montana
South Dakota , Minnesota , and Montana
British Columbia and northwest Montana
British Columbia , Montana
Ontario , on the south by the U.S. states of Ohio , Pennsylvania
Ontario , and eastern Pennsylvania
Montana , and North Dakota , the Canadian provinces of Saskatchewan
Montana , North Dakota , and Saskatchewan
El Salvador , Guatemala
El Salvador , Honduras , and Guatemala
New Hampshire , Vermont , Massachusetts
New Hampshire and Massachusetts
Santa Fe , Sandoval
Egypt of 234 Palestinians and non-Arabs jailed in Israel
Egypt , Greece , Leba non , Israel
Ancaster , Dundas
Ancaster , Dundas
Lesotho , South Africa
Lesotho , South Africa
Idaho , Nevada
Idaho on the north and Nevada
Pacific coast between Washington to the north , California
Pacific Ocean from northern China to California
Croatia ( 16,500 ) , the Czech Republic ( 14,600 ) and Slovenia
Croatia and headed straight through Slovenia
Sandoval , Bernalillo
Poland , Slovakia
Poland , Slovakia
Democratic Republic of Congo , Mozambique , Angola
Democratic Republic of Congo , Mozambique , Angola
Winchester from home in Southampton
Winchester , Southampton
Bonner County . The southern tip is in Kootenai County
Oregon border , and 110 miles ( 177 km ) north of the Nevada
Oregon , California , Nevada
Vermont , Massachusetts
Vermont , Massachusetts
Siskiyou , Tehama , Modoc
Mediterranean Sea off the coast of Toulon , France
Mediterranean Sea at Cap-Martin , France
Djibouti , Ethiopia
Djibouti to the west , Ethiopia
Australia , New Caledonia
Australia and New Caledonia
Boalsburg near State College
Boalsburg near State College
Oakland , Berkeley , and San Francisco
Oakland , Berkeley , and San Francisco
Staten Island and Brooklyn
Staten Island and Brooklyn
Monmouth and Ocean County
Atlantic Ocean . It shares land borders with Angola
Atlantic coast . It shares borders with Angola
Nigeria to the southwest , and Niger
Nigeria and southeastern Niger
Pacific coast of Mexico
Pacific Ocean February 19 off the coast of Mexico
Slovenia and Hungary
Slovenia , The Philippines , Hungary
Congo ) / Angola
Ghana , and became president of Togo
Ghana and Togo
Atlantic coast south of central Maine
Scania and perhaps Halland and Blekinge
Hebei , Shandong , Jiangsu
Aosta Valley and Piedmont
Southwark , Lewisham
Southwark and Lewisham
Beqaa , North Lebanon , South Lebanon , Mount Lebanon
Beqaa , North Lebanon , South Lebanon , Mount Lebanon
Ivory Coast , Liberia
Atlantic Ocean and the North Sea
Atlantic Ocean , the North Sea
North Carolina ( now eastern Tennessee
North Carolina , Tennessee
Nebraska , North Dakota , and South Dakota
Nebraska , New Mexico , South Dakota
Netherlands , Belgium
Netherlands and Belgium
Gabon and Equatorial Guinea
Gabon and Equatorial Guinea
United States of America to the east , and Canada
United States and Canada
Dubai Marina district ) and the port facilities at Jebel Ali
Senegal , Guinea , Guinea-Bissau
Senegal , Mali , Guinea-Bissau
E postal code area plus IG
Atlantic Ocean in the Buenos Aires Province
Rwanda , Tanzania
Rwanda , Democratic Republic of the Congo , Tanzania
Nesher ( which is right underneath Haifa
Nesher , Tirat Hakarmel , and the city of Haifa
Solana Beach in San Diego County . The Pacific Ocean
Kenya , and Tanzania
Kenya , Rwanda , Tanzania
Alderley Edge , Wilmslow
Alderley Edge and back it was 16 miles . If I ran to Wilmslow
Guinea , Mali , Senegal
Guinea , Senegal
Molise . Chilies ( peperoncini ) are typical of Abruzzo
Molise , Abruzzo
Rajasthan , near the border with Pakistan
Rajasthan , near the border with Pakistan
Indiana , Iowa , Kentucky
Indiana , Kentucky
Afghanistan and Iran , southern Uzbekistan
Afghanistan , southern Uzbekistan
South Sudan , on the west by the Democratic Republic of the Congo
South Sudan , Central African Republic , and the Democratic Republic of the Congo
contiguous United States and northern Mexico
contiguous United States , and northern Mexico
Shasta , Siskiyou , Tehama
Tennessee , Alabama & Georgia
Tennessee , Virginia & Georgia
Nunavut and the Northwest Territories
Nunavut and the Northwest Territories
Nevada and California
Nevada began to grow in the 1980s as well . Although California
Golden Gate Bridge in San Francisco
Golden Gate Bridge in San Francisco
Arctic Circle , or , more generally , in the arctic
Arctic Circle , or , more generally , in the arctic
Formosa , Salta
Vukovar , Dubrovnik and Osijek
Kansas , and Nebraska
Kansas , and Nebraska
Iowa , Nebraska
Iowa . In 1889 -90 he was a member of the Nebraska
Gulf of Mexico , Louisiana , Texas
Gulf of Mexico , Brownswille , Texas
Beqaa , North Lebanon
Beqaa , North Lebanon
Hadrian 's Villa and Villa d'Este
Hadrian 's Villa and the remarkable Villa d'Este
Botswana , Zambia again , and Zimbabwe
Botswana , Zimbabwe
Propylaea , the Temple of Athena Nike , the Erechtheion
County Tipperary / County Kilkenny
United States of America . It is situated between Sarasota Bay and the Gulf of Mexico
United States of America . Florida lies between the Gulf of Mexico
Bikaner , Jodhpur
Bikaner , Jaisalmer , Khuri , Jodhpur
Mauritania and Senegal
Mauritania , Sierra Leone , Senegal
Redland , and Cotham
New York and the larger Horseshoe Falls , Ontario
New York ; and the Canadian province of Ontario
Gaza Strip and Israel
Gaza Strip . The related Arab Christians in Israel
Bab-el-Mandeb , between Yemen
Park Slope and Windsor Terrace
St. Michael 's Mount , a small peninsula just off the coast of Marazion
Romania and Serbia
Romania and Serbia
Emilia-Romagna 7.650 and 6.184 Friuli-Venezia Giulia
University of Alabama at Birmingham ( UAB ) and its adjacent hospital . The UAB Hospital
University of Alabama at Birmingham ( UAB ) and its adjacent hospital . The UAB Hospital
Bhubaneswar , Cuttack , Puri
Bhubaneswar are known as Golden triangle of eastern India . Puri
Misiones Province has a heavier taste than Corrientes Province
Porto and Vila Nova de Gaia
Pakistan , India
Pakistan and India
Enfield , Hackney , Haringey
Ukraine to the east and Hungary
Ukraine to the east and Hungary
Peel , and York
Peel , and York
West Bridgford , Nottingham
English Channel and the North Sea
English Channel and later large areas of the North Sea
Indiana , Michigan
Indiana , Michigan
Oregon , Idaho
Oregon , Washington , Idaho
Guinea , Mali
Guinea , and Mali
Atlantic Ocean , the North Sea , the English Channel
Atlantic Ocean , the North Sea , the English Channel
Croatia , Bosnia and Herzegovina
Croatia , the Croatian parts of Bosnia and Herzegovina
Ethiopia , Somalia
Ethiopia and Somalia
Texas and New Mexico
Texas , New Mexico
Tennessee , Kentucky
Tennessee to the south ; by Kentucky
Lower Saxony the location of Neuengamme and the city of Hamburg
Kenya , Uganda
Kenya and Uganda
Namibia , Botswana
Namibia , southern Angola , western Botswana
British Isles into the North Sea
Anne Arundel County , Baltimore County
North America , South America
North America , and Oceania but are sparse in South America
Black Sea coast of Bulgaria
Black Sea coast of Bulgaria
Puntland and Somaliland
Puntland ( which considers itself an autonomous state ) and Somaliland
State of Israel completes its unilateral disengagement from the Gaza Strip
Israel , the West Bank , the Gaza Strip
Mexico City , Guadalajara and Puebla
Mexico City , such as Tepoztlán , Cuernavaca and Puebla
Philips Arena . It is now operated by the Georgia World Congress Center
Tajikistan in the north , and China
Tajikistan in the north , and China
France and Spain on the Atlantic
France , that crashed in the Atlantic Ocean
Sweden to the Caspian by way of the Baltic Sea
Sweden across the Baltic Sea
George C. Marshall Space Center in Huntsville
Atlantic and continued into Spain
Atlantic Ocean and reached the Americas in 1492 under the flag of Spain
California , Washington State , Mexico
California and in the Pamean languages of Mexico
Sahel area south of the Sahara
Sahel to the encroaching Sahara
South Australia , New South Wales
South Australia and Western Australia , 15 for Queensland , 16 for New South Wales
Kaliningrad Oblast , and Lithuania
Tajikistan , Uzbekistan
Tajikistan , and Uzbekistan
Katra , home to the Vaishno Devi
Katra , home to the Vaishno Devi
Mozambique , Namibia , South Africa
Mozambique , Namibia , South Africa
Barents Sea off the coast of Norway
Atlantic Ocean and to the south is the Celtic Sea
Atlantic Ocean to the left , the Irish Sea to the right and the Celtic Sea
Solomon Islands . It is closely related to Tok Pisin of Papua New Guinea
Solomon Islands and Papua New Guinea
Zimbabwe , Zambia
Zimbabwe , Zambia
Pwani and Lindi
Germany which sank into the North Sea
Germany . It is situated on the shore of the North Sea
New Mexico , then Colorado
New Mexico , Colorado
Mexico State , 100 km ( 62 milles ) , northwest of Mexico City
Sindh , Punjab and Balochistan
Sindh , Punjab and Balochistan
Niger in the southeast , Mali
Niger and Mali
Pacific Oceans ; it has been recorded off the coasts of Canada
Pacific and Atlantic Oceans , bordered by Canada
Washington , Oregon
Washington and Oregon
Lake of the Woods counties , approximately 1 mile north of Beltrami
Sequoia and Kings Canyon
Sequoia National Park / Kings Canyon National Park
Saxony , Margrave of Brandenburg
Saxony and Brandenburg
Uzbekistan , Kyrgyzstan , and Kazakhstan
Uzbekistan to the northeast , Kazakhstan
California . The city is located on the coast of the Pacific Ocean
California . The city is located on the coast of the Pacific Ocean
Lake Kivu in Rwanda
Lake Kivu in Rwanda
Tulare and Inyo
Waterford , Wexford and Kilkenny
Ghana and the Ivory Coast
Ghana , Ivory Coast
Pacific Ocean . To the north it borders Nicaragua
Pacific lowlands of Nicaragua
Kentucky , Missouri
Kentucky and Missouri
Idaho , Montana , Wyoming
Idaho , Wyoming
Portugal and the Atlantic Ocean
Montana and South Dakota
Montana , North Dakota , South Dakota
Zimbabwe and South Africa
Zimbabwe to the west and Swaziland and South Africa
Thuringia , Bavaria
Thuringia , Bavaria
Elbe River which I cross just a few miles east of Hamburg
Elbe near Hamburg
Bikaner , Bundelkhand and Jaisalmer
Bikaner , Bundelkhand and Jaisalmer
Sindh to the east . To the south lies the Arabian Sea
Sindh along the Indus River to Arabian Sea
Rajasthan , Punjab
Rajasthan , Andhra Pradesh , Punjab
Poland and Germany
Poland , Slovakia , Germany
Loiret , Allaines and Allainville , Eure-et-Loir
Cleethorpes , near Grimsby
Georgia , South Carolina , North Carolina
Georgia , South Carolina and North Carolina
Malawi , Tanzania
Malawi and Mozambique , and between Malawi and Tanzania
Roseau County is the result of a split of the neighboring Kittson County
Texas , and in northern Mexico , in Tamaulipas
Texas . In Central America on the Atlantic versant from Tamaulipas
Canada to northern Mexico , including most of the continental United States
Canada , all of the continental United States
South and East Asia
South and East Asia
Eritrea breaks off from Ethiopia
Eritrea , Italian Somaliland , Ethiopia
Chandigarh and 50 km from Panchkula
Chandigarh and 50 km from Panchkula
Southeast Asia and parts of South Asia
Southeast Asia and South Asia
Wisconsin , Iowa , Illinois
Wisconsin and Illinois
Atlantic Canada and New England
Atlantic Canada and New England
Romania and Moldova
Romania and Moldova
Denmark and in the subsurface of the southern part of the North Sea
Denmark , connecting the North Sea
Somalia , Kenya and Djibouti
Somalia , but we 've got bases in Djibouti
Ethiopia , particularly in Eritrea
Ethiopia and Tigrinya in Eritrea
Maine , and empties into the Atlantic Ocean
Maine and Disparu to the west , the Atlantic Ocean
Red Sea , Jacobovici argued a marshy area in northern Egypt
Red Sea and Egypt
Poland , west and central Belarus
Poland ( both via Kaliningrad Oblast ) , Belarus
Atlantic Ocean north of Boston on the New Hampshire
North and East Africa
North Africa , and East Africa
Lake Tanganyika , Burundi
Lake Tanganyika and close to the border with Burundi
Armenia , Australia , Austria , Azerbaijan
Armenia , Azerbaijan
Saudi Arabia and Kuwait
Saudi Arabia to evict Iraqi forces from Kuwait
Wyoming , although it also extends into Montana and Idaho
Wyoming extending into portions of Montana and Idaho
Kenya , Mozambique , Somalia
Kenya , until 1949 . That year , his unit was deployed to Somalia
Guinea-Bissau , Guinea
Guinea-Bissau , Guinea
Germany . It then spread to France
Germany , Austria , France
Hispaniola , Puerto Rico
Hispaniola and Puerto Rico
East , South
East Asia , South Asia
San Diego County , with portions extending east into Imperial County
San Diego and Imperial
Saxony , Thuringia , Bavaria
Saxony and Bavaria
Magic Kingdom , Disney 's Contemporary Resort
New Westminister , Port Moody , Coquitlam
Nepal , China
Nepal and China
Lower Saxony , North Rhine-Westphalia , Schleswig-Holstein
Lower Saxony and Schleswig-Holstein
Hesse , Thuringia
Hesse , Thuringia
Oman , Qatar and Saudi Arabia
Oman , Qatar , Saudi Arabia
Ontario , and Michigan
Ontario and Quebec . [ 3 ] Michigan
Swaziland , Mozambique
Swaziland near Mozambique
Lake Ohrid in the Republic of Macedonia
Lake Ohrid in the Republic of Macedonia
Pankow , Prenzlauer Berg , and Mitte
South Asia though India and its neighbours are on or near the Indian Ocean
South Asia , namely the northern Indian Ocean
Monterey , San Benito
Uzbekistan and Tajikistan
Uzbekistan and Tajikistan
Serbia , Montenegro
Serbia and Montenegro
Brazil , Suriname
Brazil and Suriname
Gujarat and Rajasthan
Gujarat .Whole or a larger part of Rajasthan
Vancouver is the proximity to the outdoors . False Creek
Monterey County , San Luis Obispo County
Washington Territory was formed from part of Oregon Territory
Washington Territory was split out of the existing Oregon Territory
Western Australia , Northern Territory
Western Australia , the Northern Territory
British Columbia , and the Canadian territory of Yukon
British Columbia to Delta Junction , Alaska , via Whitehorse , Yukon
Arizona , northwest New Mexico
Arizona , Colorado , New Mexico
New York City and New Jersey
New York City , plus New Jersey
Pennsylvania , New Jersey
Pennsylvania , New Jersey
Brazil , Argentina , Uruguay
Brazil and parts of Colombia , Uruguay
Toronto , Canada . They are located in Lake Ontario
Ukraine and eventually Russia
Ukraine and Russia
Oklahoma , Missouri
Oklahoma , Tennessee , Missouri
Mexico , Costa Rica , Guatemala
Mexico and Guatemala
Serbia to the north , Albania
Serbia , Montenegro , Albania
Arizona , and California
Arizona , California
Australia and incorporated in Vanuatu
Australia , Vanuatu
Japan , China
Japan and China
Thailand , Peninsular Malaysia
Thailand , Vietnam and Peninsular Malaysia
Helsinki , Espoo , Vantaa
Maharashtra or Gujarat
Maharashtra , southern Gujarat
Democratic Republic of the Congo , southwestern Sudan
North Uist and Benbecula
North Uist and Benbecula
Texas , Virginia , Arkansas
Texas , but its range extends into Louisiana , Arkansas
Oakland , and Macomb
Rhineland-Palatinate , North Rhine-Westphalia , and Hesse
Espoo , near the capital Helsinki
Espoo , a city neighbouring Finland 's capital Helsinki
Bahrain , and Saudi Arabia
Bahrain , Saudi Arabia
Alberta , British Columbia
Alberta and British Columbia
Blanco and Gillespie County
Yelagiri near Jolarpet
southern Africa and occurs throughout much of eastern Africa
Southern Africa , Central Africa , East Africa
Arlington County , the City of Alexandria , Fairfax County
Arlington County , the City of Alexandria , Fairfax County
Brazil and has been erroneously reported to occur in Venezuela
Brazil and Venezuela
Rwanda , Burundi , the Democratic Republic of the Congo
Rwanda , Democratic Republic of the Congo
Washington . The river 's current often dissipates into the Pacific Ocean
Equatorial Guinea to the northwest , Cameroon
Equatorial Guinea , Kenya , and Cameroon
Baltic Sea to the west lies Sweden
Baltic Sea , whereby to the west lie Sweden
Victoria , New South Wales
Victoria and southern New South Wales
Slovenia , Croatia
Slovenia , Croatia
Afghanistan . It is also spoken in Turkmenistan
Afghanistan , Uzbekistan , Turkmenistan
New Jersey across the Hudson River to the top of a New York City
New Jersey , New York City
Switzerland , and Italy
Switzerland , Italy
North Finchley and East Finchley
Ocean and Burlington
Benin , and Burkina Faso
Benin Numbers unknown , in Burkina Faso
Kuwait and Saudi Arabia
Kuwait and Saudi Arabia
Bridge of Allan , Dunblane
Serbia , Hungary
Serbia , Hungary
Italy ( 1653 ) , Switzerland
Italy , Switzerland
Baltic Sea , the North Sea and the north east Atlantic Ocean
Gulf of Oman , separating the Strait of Hormuz
Niger with emergency status , as well as Chad
Niger , Chad
South Dakota , Nebraska
South Dakota - Nebraska
Ontario , Wyoming , Ohio
Ontario , on the south by the U.S. states of Ohio
Oregon before emptying into the Pacific Ocean
Thuringia ( though without Hesse
Thuringia , Hesse
Ireland , and the United Kingdom
Ireland and the United Kingdom
Atlantic Ocean and Mediterranean Sea and placing it between Europe
Atlantic to make a start on the continent of Europe
Gulf of California , proving that Baja California
Gulf of California between the Baja California peninsula
Germany , Luxembourg
Germany , Belgium , Luxembourg
New Hampshire to the north ; at its east lies the Atlantic Ocean
New Hampshire and Maine , and empties into the Atlantic Ocean
Ivory Coast , Ghana , Guinea
Spain , Portugal , Morocco
Spain brought a great number of the exiles to Morocco
Colombia , Venezuela
Colombia , Venezuela
Montana and North Dakota
Montana , New Hampshire , North Dakota
Lake Erie peninsula in Sandusky , Ohio
Lake Erie off the coast of Ohio
New South Wales , Victoria
New South Wales , Queensland , Victoria
Haringey and Waltham Forest
Solomon Islands , New Guinea , northeastern Australia
Solomon Islands and Australia
Thailand , Cambodia
Thailand , Cambodia
Eritrea , Sudan
Eritrea to the north and Sudan
Rwanda , Burundi
Rwanda , Burundi
Hudson Bay , Atlantic Ocean
Chaco , Córdoba , Corrientes , Formosa
Chaco , Formosa
Gulf of California , Mexico
Golfo de California , Mexico
Nebraska , Colorado
Nebraska , Colorado
Champaign - Urbana
Mali and Burkina Faso
Mali , Burkina Faso
Wyoming to the north , the midwest states of Nebraska
Wyoming and Nebraska
Arizona : 465 mi ( 748 km ) west-southwest . Salt Lake City , Utah
Arizona , Utah
Republic of Serbia and the self-proclaimed Republic of Kosovo
Republic of Serbia and the self-proclaimed Republic of Kosovo
Gulf of Bothnia , Norrbotten County in Sweden
East Malaysia and Kalimantan
India and sometimes Pakistan
India and Pakistan
Spain , [ 3 ] Andorra
Spain and Andorra
Mozambique , Rwanda , Tanzania
Mozambique , Tanzania
Vietnam , Cambodia
Vietnam , Laos , and Cambodia
Delaware and Maryland
Delaware , Maryland
Bavaria and Hesse
Bavaria and Hesse
Dubai Internet City , Dubai Media City
Dubai Internet City , Dubai Media City
Spain , Gibraltar
Spain and Gibraltar
San Francisco to Silicon Valley
San Francisco , just north of Silicon Valley
California and Arizona
California , Arizona
Indian River County from Brevard County
San Luis Obispo County and on the west by Monterey County
East Prussia ( today Klaip ? da , Lithuania
East Prussia ( today Klaip ? da , Lithuania
Scania and perhaps Halland
Palo Alto and Mountain View
Indiana , and the present day sites of Chicago
Indiana , Chicago
Cambodia , Malaysia , Laos
Cambodia and Laos
Marin County , just north of San Francisco
Marin County , just north of San Francisco
Atlantic Ocean and Portugal
Atlantic Ocean between Portugal
Ireland . On the North American coast of the Atlantic Ocean
Ireland is an island in northwest Europe in the north Atlantic Ocean
Marche , Abruzzo
Marche and Abruzzo
Wyoming , South Dakota , Colorado
Wyoming and Colorado
Luxembourg near the point where the borders of Germany , France
Luxembourg , and France
Massachusetts and New Hampshire
Massachusetts , New Hampshire
Nebraska and Wyoming
Nebraska , Wyoming
Idaho , Montana and Oregon
Idaho , Oregon
Sonoma and Mendocino counties
Vietnam , and even sent missions to China
Vietnam , Guilin in China
Detroit / Southfield
Togo and Burkina Faso
Togo to the west , Nigeria to the east and Burkina Faso
Golden Gate Bridge , the San Francisco Bay
Golden Gate Bridge begins in San Francisco Bay
Lake Champlain in the southwestern part of Chittenden County , Vermont
Lake Champlain being located between Vermont
Mediterranean halfway between Europe
Mediterranean Sea . The dogs probably made their way to Europe
Tajikistan and Kyrgyzstan
Tajikistan ) and the Tian Shan ( Kyrgyzstan
Idaho ! Idaho is the state to the east of Oregon and Washington
Idaho , Maine , Washington
France and the United Kingdom
France and the United Kingdom
Maricopa County and Pima County
Maricopa County and Pima County
Savannakhet . To the west is Thailand
Mozambique , Swaziland
Mozambique and Swaziland
Guatemala , El Salvador and to as far as central Mexico
Guatemala , Japan , Mexico
Ukraine ) and White Russia ( Belarus
Ukraine , 20,000 pairs in Belarus
Kansas , Missouri
Kansas / Missouri
Libya , Sudan
Libya , Sudan
Tanzania , Malawi
Tanzania , Zambia and Malawi
New Galloway , before widening to form the 9-mile long Loch Ken
Western Sahara in the west , Morocco
western Sahara , Morocco
Norway and Sweden ) or even on Helgoland Island in the North Sea
Norway and the North Sea
Lake Victoria ) . To the north east , it borders the Republic of Kenya
Lake Victoria , within which it shares borders with Kenya
Italian Peninsula , Sicily
Italian Peninsula , Sicily
Persian Gulf in the Arabian Peninsula
Persian Gulf and Arabian Peninsula
Black Sea in northern Dobruja
Tanzania and Nairobi , Kenya
Tanzania , Kenya
Italy , France
Italy , northern France
Niger , and Benin
Niger , Benin
India , Bangladesh
India , Pakistan , Bangladesh
Nevada , and Idaho
Nevada , and Idaho
Syria , Lebanon
Syria , Lebanon
Emilia-Romagna to the north , Liguria
Emilia-Romagna to the north , Liguria
Saudi Arabia , and developed operations in Bahrain
Saudi Arabia and Bahrain
Montenegro , Macedonia , Croatia
Montenegro , Croatia
San Luis Obispo County , Santa Barbara County
Delhi , Faridabad , Gurgaon , Noida
Delhi , Baba Ramdev wanted to continue his fast from Noida
Tanzania , Burundi
Tanzania to the east , and Burundi
Ealing , Hammersmith and Fulham , Harrow
India , Pakistan , Afghanistan
India from Afghanistan
Port Moody , Coquitlam
Detroit and then on a ferry to Canada
Detroit and then on a ferry to Canada
Colony of Vancouver Island and Colony of British Columbia
Vancouver Island ( 1849 ) and in British Columbia ( 1858 )
Mexico and California
Mexico and California
Florida , Alabama
Florida , Alabama
Mali , Senegal
Mali , Senegal
South Sudan to the west , and Kenya
South Sudan , which was signed in Naivasha , Kenya
Colombia , Honduras , El Salvador , Panama
Colombia , Panama
Rockland County and Westchester County
Dublin , Wexford , Wicklow
Dublin to Wicklow
United Arab Emirates , Saudi Arabia
United Arab Emirates and also Saudi Arabia
Canada , Australia and the Pacific
Canada to the north . To the west is the Pacific Ocean
Russia and Norway
Russia to the east , and Norway
British Columbia , Montana , Idaho
British Columbia and Idaho
Senegal , Guinea
Senegal , Guinea
Asia and Europe
Asia , Australia , Europe
New Mexico and Arizona
New Mexico and Arizona
South Dakota , Minnesota
South Dakota , Minnesota
Russia , Crimea in Ukraine , Kazakhstan
Russia , Kazakhstan
Italian Peninsula , Sicily , Sardinia
Vanuatu and New Caledonia to the north-east ; and New Zealand
Vanuatu , New Zealand
Barbican Housing Estate and the nearby Golden Lane Estate
Barbican and Golden Lane Estate
Massachusetts and Connecticut
Massachusetts , Rhode Island , and Connecticut
Russia , Belarus
Russia , Belarus
Solingen ; Wuppertal
Collin and Denton County
Collin , Dallas and Denton
Jordan and the West Bank
Jordan and the West Bank
Noida of the national capital of Delhi
Noida , a posh suburb of Delhi
Netherlands and Germany
Netherlands and Germany
Kerry , Cork
Kerry , Cork
Gelderland , and Overijssel
Gelderland and Overijssel
Washington , Idaho
Washington ( 12.9 ) , California ( 12.2 ) and Idaho
Republic of Ireland or in Northern Ireland
Republic of Ireland , Northern Ireland
Kilkenny and Wexford
Paarl , Stellenbosch and Franschhoek
Mediterranean region , ranging from Spain
Mediterranean Sea belonging to Spain
Republic of Macedonia and Bulgaria
Republic of Macedonia , Bulgaria
Afghanistan , Turkmenistan , China
Afghanistan , Vietnam , and China
Minnesota or Wisconsin ; others place them in Winnipeg
Broward County , and Palm Beach County
Broward County , and the southern part of Palm Beach County
Burundi and western Kenya and Tanzania
Burundi , and Western Kenya and Tanzania
Western Asia , South Asia
Western Asia , South Asia
London Borough of Southwark with parts in the London Borough of Lambeth
London Borough of Southwark with parts in the London Borough of Lambeth
Spain , by the Mediterranean Sea
Spain , and even Italy in the Mediterranean Sea
Kansas , Missouri , and Oklahoma
Kansas and Oklahoma
Mauritania and part of Mali
Mauritania to the north , Mali
Vancouver , West Vancouver
Vancouver Village of Belcarra West Vancouver
New Jersey , Connecticut , and Pennsylvania
New Jersey , New York , Pennsylvania
France , Hungary , Italy
France and Italy
Maastricht , which is about 40 kilometers away from Aachen
Maastricht ) , N3 ( E to Aachen
Atlantic Ocean to the east and Canada
Atlantic radio signal in Newfoundland , Canada
Pinellas - Pasco County
Rhode Island , Massachusetts
Rhode Island , Massachusetts
Petronas Twin Towers of the KLCC
Guinea-Bissau , and Senegal
Guinea-Bissau , Guinea , Senegal
Vanderburgh , and Warrick
South Australia , Western Australia and the Northern Territory
South Australia , Western Australia and Northern Territory
Djibouti , Somalia , Somaliland
Delhi , Gurgaon , Faridabad
Delhi and Faridabad
Porto is the Douro River
Gujarat , close to the Pakistan
Gujarat , [ 3 ] and Gujarat did not end up a part of Pakistan
Indonesia and Papua New Guinea , northern and eastern Australia
Indonesia , Fiji and Australia
California and Baja California
California . In Mexico its distribution includes Baja California
Monaco and France
Monaco , San Marino , France
Panchkula - Haryana , U.T . of Chandigarh
Russia , China
Russia , the Caucasus , China
France and Germany , and what is now western Switzerland
France and Switzerland
India , Tibet , Bhutan
India to the south and west ; it is separated from Bhutan
Israel , Jordan , Lebanon
Israel and Lebanon
Oaxaca , Guerrero and Puebla
Mannheim and Heidelberg
Mannheim via Heidelberg
Lake Michigan in western Michigan
Jordan , Lebanon , Syria
Jordan , Lebanon and Syria
East River in Brooklyn
East River and Brooklyn
Heidelberg . The distance between Mannheim
Heidelberg , Mannheim
Czech Republic and Slovakia
Czech Republic and Slovakia
Bangladesh , Bhutan , India
Bangladesh , India
Pacific Ocean from the Gulf of California
Ramat HaSharon and Herzliya
Thailand , Indonesia , Malaysia
Thailand , Malaysia
Somaliland , Puntland
Somaliland and Puntland
Guatemala and Belize in 1994 , El Salvador
Guatemala , and El Salvador
Slovakia , Poland
Slovakia , Poland
Nevada , Utah
Nevada , Utah
Switzerland . There are significant minorities in France
Switzerland , France
Ontario , it is Family Day ; in Manitoba
Ontario in 1967 , Manitoba
Corsica , Sardinia
Corsica ( France ) and Sardinia
Honduras , Guatemala
Honduras , and Guatemala
Tivoli and prepared to starve Rome
France along the North Sea
France ( French Flanders ) and the North Sea
Hebei , which later passed to Beijing
Hebei , Beijing
Haggerston to Hoxton
Switzerland , Austria and Germany
Switzerland , Italy , Germany
Taiwan and as far as China
Taiwan , that former province of China
Russia and part of today 's Lithuania
Russia , Denmark , Lithuania
Uganda plus Lake Victoria
Uganda plus Lake Victoria
Australia , [ 12 ] Indonesia
Australia , New Guinea , parts of Indonesia
Israel and a coalition of Arab states backing Egypt and Syria
Israel in its conflict with Syria
West Virginia , and Pennsylvania
West Virginia and Pennsylvania
Lewisham , with part in The London Borough of Southwark
Egypt to the north , the Red Sea
Egypt to the north , the Red Sea
Slovakia , Ukraine
Slovakia , and Ukraine
Central and South Asia
Central Asia and South Asia
Northern Virginia and Suburban Maryland
Northern Virginia , and Montgomery and Prince George 's Counties in Maryland
Saskatchewan , Manitoba
Saskatchewan , but raised in Winnipeg , Manitoba
Irish Sea is to the north west , the Celtic Sea
Irish Sea is to the north west , the Celtic Sea
Pennsylvania , reaching Wheeling , Virginia ( now West Virginia
Pennsylvania near Chester and New Cumberland , West Virginia
Staten Island , Perth Amboy
East River Waterfront of Queens across the United Nations Headquarters
East River at the United Nations Headquarters
English Channel , the Celtic Sea
English Channel , the Celtic Sea
Wyoming . ( Montana
Wyoming and Montana
Northeastern United States and the Southern United States
Northeastern United States . In the winters , they migrated to the Southern United States
Catamarca , Tucumán and Salta
Catamarca , Tucumán , Salta
University of Pittsburgh ( Pitt ) and Carnegie Mellon University
University of Pittsburgh , Carnegie Mellon University
Libya and Algeria
Libya and Algeria
Pennsylvania , Ohio
Pennsylvania and Ohio
Canada , eastern Alaska and the northeastern
Canada to the Northeastern United States
Pennsylvania , Maryland
Pennsylvania , and is referenced in the official state song of Maryland
East River waterfront to Front St and from the Brooklyn Navy Yard
Ethiopia , Eritrea , and Djibouti
Ethiopia in the south , and Djibouti
Madhya Pradesh , Orissa and Rajasthan
Madhya Pradesh , Rajasthan
Northeastern and Midwestern United States
Northeastern and Midwestern United States
Morocco to the north , Algeria
Morocco , the Mount Atakor massif in southern Algeria
Gujarat , Madhya Pradesh
Gujarat , Madhya Pradesh
Beijing and Shijiazhuang in Hebei
Beijing , it passes through Tianjin and the provinces of Hebei
Lake Superior and Michigan
River Thames in central London , England . It lies within the London Borough of Tower Hamlets
River Thames in central London , England . It lies within the London Borough of Tower Hamlets
Indiana , Illinois
Indiana , Illinois
Czech Republic and has attained top-ten positions in Austria
Czech Republic , with offices in Linz , Austria
Coquitlam , Vancouver and Burnaby
Croatia , Serbia , Hungary
Croatia , Slovakia , Hungary
Democratic Republic of the Congo and Zambia
Democratic Republic of the Congo , Tanzania and Zambia
St George 's Channel and the Celtic Sea
St George 's Channel and the Celtic Sea
Mali to the east , and Guinea
Mali and Guinea
Shiga , Gifu
Lake Huron shoreline on the southeastern tip of the Upper Peninsula of Michigan
Lake Huron shoreline on the southeastern tip of the Upper Peninsula of Michigan
East River between Manhattan and Queens
Portugal , Spain
Portugal and Spain
England and spread into what was to become south-east Scotland
England , and Scotland
Gulf of Finland and comprising the provinces of Karelia , Ingria , Estonia
South Australia , Victoria , and Queensland
South Australia and Queensland
West Bank and Israel
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
###Output
_____no_output_____
###Markdown
The `glove_lookup` object is a dictionary mapping each word in the GloVe vocabulary to a 300-dimensional vector. Below, the GloVe middle featurizer takes a KB triple, finds all of the corresponding examples in the corpus, splits each middle span into words, looks each word up in `glove_lookup`, and collects the resulting vectors. For a given entity pair this yields a matrix with one row per word, and the `np_func=np.sum` option collapses that matrix into a single vector.
###Code
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
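# Tiny illustration of what `np_func` does above: summing a (num_words x dim)
# matrix of word vectors along axis=0 collapses it into a single dim-sized
# vector (np.mean would average instead). Hypothetical 2-word, 3-d example:
_np_func_demo = np.sum([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], axis=0)  # array([5., 7., 9.])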
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.825 0.471 0.717 340 5716
author 0.871 0.413 0.713 509 5885
capital 0.553 0.221 0.425 95 5471
contains 0.658 0.406 0.585 3904 9280
film_performance 0.774 0.317 0.601 766 6142
founders 0.754 0.226 0.514 380 5756
genre 0.458 0.065 0.207 170 5546
has_sibling 0.837 0.246 0.566 499 5875
has_spouse 0.878 0.338 0.666 594 5970
is_a 0.658 0.151 0.393 497 5873
nationality 0.608 0.196 0.428 301 5677
parents 0.849 0.413 0.701 312 5688
place_of_birth 0.598 0.210 0.437 233 5609
place_of_death 0.404 0.119 0.274 159 5535
profession 0.571 0.146 0.361 247 5623
worked_at 0.703 0.264 0.528 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.687 0.263 0.507 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
    from sklearn.svm import SVC
    model_factory = lambda: SVC(kernel='linear')
    svm_results = rel_ext.experiment(
        splits,
        train_split='train',
        test_split='dev',
        featurizers=[simple_bag_of_words_featurizer],
        model_factory=model_factory,
        vectorize=True,  # The featurizer returns count dicts, so `experiment` must vectorize them.
        verbose=True)
    return svm_results
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.779 0.353 0.628 340 5716
author 0.756 0.603 0.720 509 5885
capital 0.634 0.274 0.502 95 5471
contains 0.769 0.602 0.729 3904 9280
film_performance 0.755 0.616 0.723 766 6142
founders 0.775 0.426 0.666 380 5756
genre 0.518 0.259 0.431 170 5546
has_sibling 0.799 0.255 0.559 499 5875
has_spouse 0.887 0.343 0.674 594 5970
is_a 0.624 0.288 0.506 497 5873
nationality 0.586 0.193 0.416 301 5677
parents 0.796 0.599 0.747 312 5688
place_of_birth 0.591 0.223 0.444 233 5609
place_of_death 0.354 0.107 0.242 159 5535
profession 0.660 0.267 0.510 247 5623
worked_at 0.632 0.306 0.521 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.682 0.357 0.564 9248 95264
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer],
model_factory=model_factory,
verbose=True)
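# Question 3 above asks how many feature names the vectorizer has. A minimal
# check, assuming (as in the directional-features run later in this document)
# that `rel_ext.experiment` returns the fitted DictVectorizer under the
# 'vectorizer' key; `get_feature_names` is the older sklearn API (newer
# releases use `get_feature_names_out`):
print("Number of feature names in the vectorizer:",
      len(baseline_results['vectorizer'].get_feature_names()))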
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for POS in get_tag_bigrams(ex):
feature_counter[POS] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for POS in get_tag_bigrams(ex):
feature_counter[POS] += 1
return feature_counter
def get_tag_bigrams(s):
    """Suggested helper method for `middle_bigram_pos_tag_featurizer`.
    Here `s` is a corpus example; its `middle_POS` field is parsed into POS
    tags, and the result is a list of str, where each element is a POS
    bigram."""
    # The values of `start_symbol` and `end_symbol` are defined
    # here so that you can use `test_middle_bigram_pos_tag_featurizer`.
    start_symbol = "<s>"
    end_symbol = "</s>"
    # Pad the tag sequence with the start/end symbols and pair adjacent tags;
    # `get_tags` relies on `parse_lem`, which splits only on the rightmost "/".
    tags = [start_symbol] + get_tags(s.middle_POS) + [end_symbol]
    return ["{} {}".format(tags[i], tags[i + 1]) for i in range(len(tags) - 1)]
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
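# Quick sanity check on the example from the homework prompt: the tagged
# middle "The/DT dog/N napped/V" yields the tags ['DT', 'N', 'V'], which,
# once padded with the start/end symbols, give the bigrams
# ['<s> DT', 'DT N', 'N V', 'V </s>'].
assert get_tags("The/DT dog/N napped/V") == ['DT', 'N', 'V']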
# Call to `rel_ext.experiment`:
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
model_factory=model_factory,
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for synset in get_synsets(ex.middle_POS):
feature_counter[synset] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for synset in get_synsets(ex.middle_POS):
feature_counter[synset] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
new = []
for element in wt:
synsets = wn.synsets(element[0], pos=convert_tag(element[1]))
for synset in synsets:
new.append(str(synset))
return new
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
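# Quick illustration, mirroring the `dog` example in the markdown above and
# assuming the NLTK WordNet data has been downloaded: "dog/N" converts to
# WordNet pos 'n', so its stringified synsets include "Synset('dog.n.01')".
print(get_synsets('dog/N')[:3])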
# Call to `rel_ext.experiment`:
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
model_factory=model_factory,
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
# The system is a simple one but incorporates many of the features that were tested before or suggested by the question
# prompt. Specifically, it combines a directional bag of words with two other features: POS-tag bigrams and
# the length of the middle section.
def custom_featurizer(kbt, corpus, feature_counter):
    subject_object_suffix = "_SO"
    object_subject_suffix = "_OS"
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word + subject_object_suffix] += 1
        for POS in get_tag_bigrams(ex):
            feature_counter[POS] += 1
        # Length of the middle span, measured in tokens:
        feature_counter['length'] = len(ex.middle.split(' '))
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word + object_subject_suffix] += 1
        for POS in get_tag_bigrams(ex):
            feature_counter[POS] += 1
        feature_counter['length'] = len(ex.middle.split(' '))
    return feature_counter
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
bakeoff_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[custom_featurizer],
model_factory=model_factory,
verbose=True)
# Please do not remove this comment.
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.882 0.594 0.804 340 5716
author 0.881 0.804 0.865 509 5885
capital 0.741 0.421 0.643 95 5471
contains 0.856 0.734 0.829 3904 9280
film_performance 0.854 0.697 0.818 766 6142
founders 0.843 0.566 0.768 380 5756
genre 0.707 0.341 0.582 170 5546
has_sibling 0.923 0.529 0.803 499 5875
has_spouse 0.935 0.631 0.853 594 5970
is_a 0.839 0.535 0.754 497 5873
nationality 0.785 0.618 0.745 301 5677
parents 0.907 0.747 0.869 312 5688
place_of_birth 0.797 0.455 0.693 233 5609
place_of_death 0.726 0.434 0.640 159 5535
profession 0.819 0.530 0.738 247 5623
worked_at 0.788 0.430 0.675 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.830 0.567 0.755 9248 95264
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
    # Please enter your code in the scope of the above conditional.
    rel_ext_data_home_test = os.path.join(
        rel_ext_data_home, 'bakeoff-rel_ext-test-data')
    rel_ext.bake_off_experiment(bakeoff_results, rel_ext_data_home_test)
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
    0.76
# Please enter your score in the scope of the above conditional.
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
# Quick sanity check: featurize a single KB triple and inspect the resulting
# word counts.
print(kb.kb_triples[89])
from collections import Counter
simple_bag_of_words_featurizer(kb.kb_triples[89], corpus, Counter())
# Scratch exploration of class vs. instance attributes (unrelated to the
# relation-extraction task): `c` is a class attribute, `a` is set on the
# instance, and `b` is only a local variable inside `__init__`, so
# `print(k.__dict__)` shows just {'a': 6}.
class foo:
    c = 7
    def __init__(self):
        self.a = 6
        b = 9
k = foo()
print(k.__dict__)
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.853 0.391 0.690 340 5716
author 0.780 0.536 0.715 509 5885
capital 0.724 0.221 0.498 95 5471
contains 0.790 0.608 0.745 3904 9280
film_performance 0.777 0.560 0.721 766 6142
founders 0.777 0.384 0.645 380 5756
genre 0.609 0.165 0.395 170 5546
has_sibling 0.854 0.246 0.572 499 5875
has_spouse 0.845 0.338 0.650 594 5970
is_a 0.688 0.195 0.457 497 5873
nationality 0.670 0.203 0.459 301 5677
parents 0.852 0.535 0.762 312 5688
place_of_birth 0.658 0.206 0.457 233 5609
place_of_death 0.444 0.101 0.264 159 5535
profession 0.722 0.158 0.421 247 5623
worked_at 0.592 0.252 0.466 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.727 0.319 0.557 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.538 Córdoba
2.428 Taluks
2.414 Valais
..... .....
-1.070 capital
-1.158 Afghanistan
-1.349 America
Highest and lowest feature weights for relation author:
2.527 books
2.495 author
2.434 wrote
..... .....
-2.283 directed
-2.748 Alice
-6.999 1865
Highest and lowest feature weights for relation capital:
3.428 capital
1.821 km
1.770 city
..... .....
-1.088 or
-1.194 and
-1.267 also
Highest and lowest feature weights for relation contains:
2.192 bordered
2.074 third-largest
2.073 southwestern
..... .....
-2.691 Mile
-3.440 Midlands
-3.630 Ceylon
Highest and lowest feature weights for relation film_performance:
4.262 starring
3.781 co-starring
3.298 alongside
..... .....
-2.093 historical
-2.097 Iruvar
-3.987 double
Highest and lowest feature weights for relation founders:
4.086 founder
3.777 founded
2.553 co-founded
..... .....
-2.139 novel
-2.382 William
-2.717 Griffith
Highest and lowest feature weights for relation genre:
3.144 series
2.430 movie
2.366 game
..... .....
-1.489 and
-1.519 his
-1.813 at
Highest and lowest feature weights for relation has_sibling:
5.334 brother
3.941 sister
2.883 nephew
..... .....
-1.320 II
-1.378 from
-1.726 Her
Highest and lowest feature weights for relation has_spouse:
5.497 wife
4.586 married
4.411 husband
..... .....
-1.486 American
-2.062 Straus
-2.062 Isidor
Highest and lowest feature weights for relation is_a:
2.241 Genus
2.235 Family
2.229
..... .....
-1.791 birds
-3.442 Talpidae
-5.660 characin
Highest and lowest feature weights for relation nationality:
2.592 born
1.937 becomes
1.866 caliph
..... .....
-1.388 and
-1.740 2010
-1.768 American
Highest and lowest feature weights for relation parents:
4.913 son
4.472 daughter
4.184 father
..... .....
-1.871 Gamal
-2.094 away
-2.471 passes
Highest and lowest feature weights for relation place_of_birth:
3.747 born
2.947 birthplace
2.606 mayor
..... .....
-1.351 or
-1.471 and
-1.477 American
Highest and lowest feature weights for relation place_of_death:
2.103 died
1.879 rebuilt
1.791 prominent
..... .....
-1.129 that
-1.131 as
-1.316 and
Highest and lowest feature weights for relation profession:
2.686
2.279 American
2.212 English
..... .....
-1.227 at
-1.288 about
-1.919 on
Highest and lowest feature weights for relation worked_at:
3.050 professor
2.780 CEO
2.707 founder
..... .....
-1.365 William
-1.421 novel
-1.636 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
import itertools
#print(dict(itertools.islice(glove_lookup.items(), 8)))
def glove_middle_featurizer(kbt, corpus, np_func=np.mean):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.340 0.050 0.157 340 5716
author 0.883 0.356 0.681 509 5885
capital 0.462 0.063 0.204 95 5471
contains 0.578 0.512 0.564 3904 9280
film_performance 0.743 0.268 0.548 766 6142
founders 0.717 0.113 0.347 380 5756
genre 0.625 0.059 0.214 170 5546
has_sibling 0.788 0.082 0.290 499 5875
has_spouse 0.745 0.192 0.473 594 5970
is_a 0.600 0.042 0.165 497 5873
nationality 0.729 0.143 0.400 301 5677
parents 0.835 0.324 0.634 312 5688
place_of_birth 0.733 0.142 0.400 233 5609
place_of_death 0.500 0.019 0.082 159 5535
profession 0.591 0.053 0.194 247 5623
worked_at 0.532 0.136 0.337 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.650 0.160 0.356 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
from sklearn.svm import SVC
svc_model_factory = lambda: SVC(kernel='linear')
glove_svc_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
model_factory=svc_model_factory,
vectorize=False, # Crucial for this featurizer!
verbose=True)
return glove_svc_results
#run_svm_model_factory()
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
# if 'IS_GRADESCOPE_ENV' not in os.environ:
#     test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word + subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word + object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
directional_unigram_results = rel_ext.experiment(
splits,
featurizers=[directional_bag_of_words_featurizer])
print(f"Number of features names that the vectorizer has: {len(directional_unigram_results['vectorizer'].get_feature_names())}")
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in get_tag_bigrams(ex.middle_POS):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in get_tag_bigrams(ex.middle_POS):
feature_counter[word] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
tags = [start_symbol] + get_tags(s) + [end_symbol]
tags = [tags[i-1] + ' ' + tags[i] for i in range(1, len(tags))]
return tags
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
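# Quick sanity check of the helpers above (illustrative; the tagged string is
# made up, not drawn from the corpus):
assert get_tags('The/DT dog/NN napped/VBD') == ['DT', 'NN', 'VBD']
assert get_tag_bigrams('The/DT dog/NN napped/VBD') == \
    ['<s> DT', 'DT NN', 'NN VBD', 'VBD </s>']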
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
part_of_speech_results = rel_ext.experiment(
splits,
featurizers=[middle_bigram_pos_tag_featurizer])
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in get_synsets(ex.middle_POS):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in get_synsets(ex.middle_POS):
feature_counter[word] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
synsets = [wn.synsets(w.lower(), convert_tag(t)) for w, t in wt]
str_synsets = [str(s) for syn in synsets for s in syn]
return str_synsets
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
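# Quick sanity check of the helpers above (assumes the NLTK WordNet data has
# been downloaded, e.g. via nltk.download('wordnet')). Note that words whose
# tag maps to None are passed to wn.synsets with pos=None, so synsets of any
# POS are included for them.
assert convert_tag('NN') == 'n' and convert_tag('JJ') == 'a' and convert_tag('DT') is None
print(get_synsets('dog/NN')[:2])  # e.g. ["Synset('dog.n.01')", "Synset('frump.n.01')"]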
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
synset_results = rel_ext.experiment(
splits,
featurizers=[synset_featurizer], verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# IMPORT ANY MODULES BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: 0.572
# My system creates a very simple bag of words feature dictionary using the context of the full sentences where the
# two entities are jointly mentioned. As a reweighting mechanism it uses a scaled weight that is proportional to the distance
# between the word and its closest entity mention. This is only applicable for the left & right half of the context. For the middle
# it simply uses a scaling of 1, in other words all middle words have the full and equal weight. Additionally it also adds the two
# entity mentions themselves in the feature dictionary. Finally this scaled context window featurizer is used in both forward & reverse
# for the entity mentions. I played around with a few variants of scaling etc, but this configuration worked best.
# I briefly played around with using Adaboost classifier but that doesn't seem to work as well so I just used
# the default LogisticRegression classifier.
if 'IS_GRADESCOPE_ENV' not in os.environ:
print('start')
def scaled_featurizer(kbt, corpus, feature_counter):
def add_scaled_words(ex, feature_counter):
left_tokens = ex.left.split()
for idx, word in enumerate(left_tokens):
feature_counter[word] += (idx/len(left_tokens))
mid_tokens = ex.middle.split()
for idx, word in enumerate(mid_tokens):
feature_counter[word] += 1
right_tokens = ex.right.split()
for idx, word in enumerate(right_tokens):
feature_counter[word] += (len(right_tokens) - idx)/len(right_tokens)
for mention in ex.mention_1.split():
feature_counter[mention] += 1
for mention in ex.mention_2.split():
feature_counter[mention] += 1
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
add_scaled_words(ex, feature_counter)
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
add_scaled_words(ex, feature_counter)
return feature_counter
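    # Illustration of the weighting above (assuming a 4-token left context):
    # left-context weights are [0.0, 0.25, 0.5, 0.75], rising toward the first
    # mention; right-context weights are [1.0, 0.75, 0.5, 0.25], falling away
    # from the second mention; middle words always get weight 1.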
# Original System # 1
orig_sys_1_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[scaled_featurizer],
verbose=True)
# STOP COMMENT: Please do not remove this comment.
# Original system # 2
# This system was a different one I built that did not work that well.
# The idea here was to simply take trigrams of the full sentence with the 2 mentions,
# look up the GloVe vectors for each word and average them over the trigram
def get_trigrams(s):
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
s = [start_symbol] + s.split() + [end_symbol]
trigrams = [(s[i-2], s[i-1], s[i]) for i in range(2, len(s))]
return trigrams
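# Illustrative behaviour of the helper above:
# get_trigrams('a b c') -> [('<s>', 'a', 'b'), ('a', 'b', 'c'), ('b', 'c', '</s>')]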
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def trigram_glove_featurizer(kbt, corpus):
reps = []
dim = len(next(iter(glove_lookup.values())))
def featurize(ex, reps):
sent = ex.left + ex.mention_1 + ex.middle + ex.mention_2 + ex.right
trigrams = get_trigrams(sent)
for t in trigrams:
tws = [np.zeros(dim), np.zeros(dim), np.zeros(dim)]
for i, word in enumerate(t):
tw = glove_lookup.get(word)
if tw is not None:
tws[i] = tw
reps.append(np.sum(tws, axis=0))
return reps
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
reps += featurize(ex, reps)
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
reps += featurize(ex, reps)
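    # Note: `featurize` appends to `reps` in place and also returns the same
    # list, so `reps += featurize(ex, reps)` additionally extends the list with
    # itself, duplicating earlier entries before the mean is taken below;
    # calling `featurize(ex, reps)` on its own would avoid that.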
# A random representation of the right dimensionality if
# nothing was found in GloVe or the corpus.
if len(reps) == 0:
return utils.randvec(n=dim)
else:
return np.mean(reps, axis=0)
# glove_results_1 = rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[glove_middle_featurizer],
# vectorize=False, # Crucial for this featurizer!
# verbose=True)
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
rel_ext_data_home_test = os.path.join(
rel_ext_data_home, 'bakeoff-rel_ext-test-data')
orig_sys_bakeoff_results = rel_ext.experiment(
splits,
train_split='all',
test_split='dev',
featurizers=[scaled_featurizer],
verbose=True)
print("-----------------------")
rel_ext.bake_off_experiment(orig_sys_bakeoff_results, rel_ext_data_home_test)
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
0.587
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to the developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
corpus.get_examples_for_entities('Randall_Munroe','xkcd')
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
len(baseline_results['vectorizer'].get_feature_names())
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.537 Córdoba
2.452 Valais
2.421 Taluks
..... .....
-1.113 his
-1.156 had
-2.264 Earth
Highest and lowest feature weights for relation author:
3.251 author
2.911 book
2.800 books
..... .....
-1.998 chapter
-2.068 or
-2.889 Booker
Highest and lowest feature weights for relation capital:
3.269 capital
1.849 posted
1.684 km
..... .....
-1.513 Westminster
-1.558 includes
-1.679 borough
Highest and lowest feature weights for relation contains:
2.580 bordered
2.162 southwestern
2.146 third-largest
..... .....
-2.542 film
-3.076 Midlands
-3.891 Ceylon
Highest and lowest feature weights for relation film_performance:
3.991 starring
3.967 alongside
3.810 co-starring
..... .....
-2.004 She
-2.047 spy
-2.097 then
Highest and lowest feature weights for relation founders:
4.254 founder
3.822 founded
3.782 co-founder
..... .....
-1.383 eventually
-1.820 top
-1.841 band
Highest and lowest feature weights for relation genre:
2.676 series
2.478 movie
2.428 album
..... .....
-1.467 ;
-2.076 at
-2.202 follows
Highest and lowest feature weights for relation has_sibling:
5.261 brother
4.136 sister
2.851 nephew
..... .....
-1.376 bass
-1.409 James
-1.586 President
Highest and lowest feature weights for relation has_spouse:
5.236 wife
4.576 husband
4.357 widow
..... .....
-1.480 owner
-1.901 Straus
-1.901 Isidor
Highest and lowest feature weights for relation is_a:
2.889
2.582 genus
2.519 Genus
..... .....
-1.550 nightshade
-1.658 at
-5.789 characin
Highest and lowest feature weights for relation nationality:
2.518 born
1.913 Pinky
1.909 ruler
..... .....
-1.405 and
-1.706 American
-1.954 1961
Highest and lowest feature weights for relation parents:
5.165 son
4.999 daughter
4.400 father
..... .....
-1.620 played
-1.997 geologist
-2.328 Gamal
Highest and lowest feature weights for relation place_of_birth:
3.732 born
2.948 birthplace
2.923 mayor
..... .....
-1.344 or
-1.495 and
-1.613 Westminster
Highest and lowest feature weights for relation place_of_death:
2.395 died
1.879 under
1.820 rebuilt
..... .....
-1.099 ”
-1.288 and
-1.987 Westminster
Highest and lowest feature weights for relation profession:
3.405
2.555 American
2.539 philosopher
..... .....
-1.277 in
-1.290 at
-1.991 on
Highest and lowest feature weights for relation worked_at:
3.332 professor
2.999 founder
2.950 CEO
..... .....
-1.204 ”
-1.435 1961
-1.650 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
#print("inside function")
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print("inside for loop")
for word in ex.middle.split():
#print("Word: ", word)
rep = glove_lookup.get(word)
#print("Rep: ", rep)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
#print("inside if condition")
dim = len(next(iter(glove_lookup.values())))
#print("dim" ,dim)
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
print(feature_counter)
# Note: `glove_middle_featurizer` ignores the counter above and returns a
# single summed GloVe vector for this KB triple:
feature_counter = glove_middle_featurizer(kbt, corpus)
feature_counter
###Output
defaultdict(<class 'int'>, {})
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
from sklearn.svm import LinearSVC
model_factory_svm = lambda: LinearSVC(C=1)
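    # Note: LinearSVC (liblinear-based) is used here as a faster stand-in for
    # SVC(kernel='linear'); the check in `test_run_svm_model_factory` below
    # only requires that the model's class name contain 'SVC'.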
svm_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory_svm,
verbose=True)
return svm_results
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.725 0.356 0.600 340 5716
author 0.700 0.582 0.672 509 5885
capital 0.431 0.263 0.382 95 5471
contains 0.754 0.620 0.722 3904 9280
film_performance 0.725 0.607 0.698 766 6142
founders 0.643 0.432 0.586 380 5756
genre 0.393 0.259 0.356 170 5546
has_sibling 0.718 0.244 0.517 499 5875
has_spouse 0.819 0.350 0.646 594 5970
is_a 0.520 0.294 0.450 497 5873
nationality 0.400 0.199 0.333 301 5677
parents 0.843 0.567 0.768 312 5688
place_of_birth 0.495 0.236 0.406 233 5609
place_of_death 0.327 0.107 0.232 159 5535
profession 0.489 0.271 0.421 247 5623
worked_at 0.544 0.331 0.482 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.595 0.357 0.517 9248 95264
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
featurizers = [directional_bag_of_words_featurizer]
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
len(baseline_results['vectorizer'].get_feature_names())
baseline_results['vectorizer'].get_feature_names()
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
pos_list = get_tags(ex.middle_POS)
#print(pos_list)
bigram_list = get_tag_bigrams(pos_list)
#print(bigram_list)
for word in bigram_list:
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
pos_list = get_tags(ex.middle_POS)
bigram_list = get_tag_bigrams(pos_list)
for word in bigram_list:
feature_counter[word] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
seq = [start_symbol] + s + [end_symbol]
#print(seq)
bigrams = [i+" "+j for i,j in zip(seq[:-1],seq[1:])]
return bigrams
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
featurizers = [middle_bigram_pos_tag_featurizer]
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
#print(feature_counter)
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
pos_list = get_synsets(ex.middle_POS)
#print(pos_list)
#bigram_list = get_tag_bigrams(pos_list)
#print(bigram_list)
for word in pos_list:
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
pos_list = get_synsets(ex.middle_POS)
#print(pos_list)
#bigram_list = get_tag_bigrams(pos_list)
for word in pos_list:
feature_counter[word] += 1
    return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
synset_list = []
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
#print("pairs:" , wt)
for pair in wt:
pos_wn_format = convert_tag(pair[1])
#print("new pos:" , pos_wn_format)
if pos_wn_format != None:
#print("inside if condition")
for element in wn.synsets(pair[0],pos = pos_wn_format):
synset_list.append(str(element))
#print("fetched synsets: " , synset_list)
return synset_list
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
    elif t[0].lower() == 'j':
return 'a'
else:
return None
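# Minimal illustrative check of `convert_tag`:
assert convert_tag('NN') == 'n' and convert_tag('JJ') == 'a' and convert_tag('DT') is None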
featurizers = [synset_featurizer]
# Call to `rel_ext.experiment`:
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
print("ss: ", ss)
print("Expected:" ,expected)
result = feature_counter[ss]
print("results: ",result)
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
abc = [['is', 'VBZ'], ['a', 'DT'], ['webcomic', 'JJ'], ['created', 'VBN'], ['by', 'IN']]
for a in abc:
print(a[1])
convert_tag(a[1])
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
defaultdict(<class 'int'>, {"Synset('be.v.01')": 5})
pairs: [['is', 'VBZ'], ['a', 'DT'], ['webcomic', 'JJ'], ['created', 'VBN'], ['by', 'IN']]
new pos: v
inside if condition
fetched synsets: ["Synset('be.v.01')", "Synset('be.v.02')", "Synset('be.v.03')", "Synset('exist.v.01')", "Synset('be.v.05')", "Synset('equal.v.01')", "Synset('constitute.v.01')", "Synset('be.v.08')", "Synset('embody.v.02')", "Synset('be.v.10')", "Synset('be.v.11')", "Synset('be.v.12')", "Synset('cost.v.01')"]
new pos: None
fetched synsets: ["Synset('be.v.01')", "Synset('be.v.02')", "Synset('be.v.03')", "Synset('exist.v.01')", "Synset('be.v.05')", "Synset('equal.v.01')", "Synset('constitute.v.01')", "Synset('be.v.08')", "Synset('embody.v.02')", "Synset('be.v.10')", "Synset('be.v.11')", "Synset('be.v.12')", "Synset('cost.v.01')"]
new pos: None
fetched synsets: ["Synset('be.v.01')", "Synset('be.v.02')", "Synset('be.v.03')", "Synset('exist.v.01')", "Synset('be.v.05')", "Synset('equal.v.01')", "Synset('constitute.v.01')", "Synset('be.v.08')", "Synset('embody.v.02')", "Synset('be.v.10')", "Synset('be.v.11')", "Synset('be.v.12')", "Synset('cost.v.01')"]
new pos: v
inside if condition
fetched synsets: ["Synset('be.v.01')", "Synset('be.v.02')", "Synset('be.v.03')", "Synset('exist.v.01')", "Synset('be.v.05')", "Synset('equal.v.01')", "Synset('constitute.v.01')", "Synset('be.v.08')", "Synset('embody.v.02')", "Synset('be.v.10')", "Synset('be.v.11')", "Synset('be.v.12')", "Synset('cost.v.01')", "Synset('make.v.03')", "Synset('create.v.02')", "Synset('create.v.03')", "Synset('create.v.04')", "Synset('create.v.05')", "Synset('produce.v.02')"]
new pos: None
fetched synsets: ["Synset('be.v.01')", "Synset('be.v.02')", "Synset('be.v.03')", "Synset('exist.v.01')", "Synset('be.v.05')", "Synset('equal.v.01')", "Synset('constitute.v.01')", "Synset('be.v.08')", "Synset('embody.v.02')", "Synset('be.v.10')", "Synset('be.v.11')", "Synset('be.v.12')", "Synset('cost.v.01')", "Synset('make.v.03')", "Synset('create.v.02')", "Synset('create.v.03')", "Synset('create.v.04')", "Synset('create.v.05')", "Synset('produce.v.02')"]
["Synset('be.v.01')", "Synset('be.v.02')", "Synset('be.v.03')", "Synset('exist.v.01')", "Synset('be.v.05')", "Synset('equal.v.01')", "Synset('constitute.v.01')", "Synset('be.v.08')", "Synset('embody.v.02')", "Synset('be.v.10')", "Synset('be.v.11')", "Synset('be.v.12')", "Synset('cost.v.01')", "Synset('make.v.03')", "Synset('create.v.02')", "Synset('create.v.03')", "Synset('create.v.04')", "Synset('create.v.05')", "Synset('produce.v.02')"]
ss: Synset('be.v.01')
Expected: 6
results: 6
ss: Synset('embody.v.02')
Expected: 1
results: 1
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
#if 'IS_GRADESCOPE_ENV' not in os.environ:
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
def directional_bag_of_words_featurizer_middle_left_right(kbt, corpus, feature_counter):
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for word in ex.left.split(' '):
feature_counter[word+subject_object_suffix] += 1
for word in ex.right.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
for word in ex.left.split(' '):
feature_counter[word+object_subject_suffix] += 1
for word in ex.right.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
# Directional bag-of-words over the middle span only (no left/right context):
def directional_bag_of_words_featurizer_middle_only(kbt, corpus, feature_counter):
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
def middle_bigram_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
word_list = get_middle_bigrams(ex.middle)
#print(word_list)
for word in word_list:
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
word_list = get_middle_bigrams(ex.middle)
for word in word_list:
feature_counter[word] += 1
return feature_counter
def get_middle_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s> "
end_symbol = " </s>"
seq = start_symbol + s + end_symbol
#print("middle sendtence: ", seq)
seq_list = seq.split(' ')
#print("middle sentence list", seq_list)
    bigrams = [i + " " + j for i, j in zip(seq_list[:-1], seq_list[1:])]
return bigrams
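# Example: for a middle span 'a b', the helper above pads with <s>/</s> and
# returns its bigrams: <s> a, a b, b </s>.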
def middle_length_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(kbt.sbj,kbt.obj,kbt.rel,ex)
feature_counter['LENGTH_S_O'] = len(ex.middle.split(' '))
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
feature_counter['LENGTH_O_S'] = len(ex.middle.split(' '))
return feature_counter
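# Note: this featurizer records the middle length of the last example seen in
# each direction (simple assignment rather than accumulation).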
def left_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.left.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.left.split(' '):
feature_counter[word] += 1
return feature_counter
def right_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.right.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.right.split(' '):
feature_counter[word] += 1
return feature_counter
def dir_left_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.left.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.left.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
def dir_right_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.right.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.right.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[simple_bag_of_words_featurizer,
directional_bag_of_words_featurizer,
middle_bigram_pos_tag_featurizer,
left_bag_of_words_featurizer,
right_bag_of_words_featurizer,
middle_length_featurizer,
#dir_left_bag_of_words_featurizer,
#dir_right_bag_of_words_featurizer
],
model_factory=lambda: LogisticRegression(fit_intercept=True, solver='liblinear'),
verbose=True)
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
print(feature_counter)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus,feature_counter)
feature_counter
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
#if 'IS_GRADESCOPE_ENV' not in os.environ:
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baseline](Baseline)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to the developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baseline
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.830 0.374 0.667 340 5716
author 0.756 0.554 0.705 509 5885
capital 0.567 0.179 0.395 95 5471
contains 0.802 0.598 0.751 3904 9280
film_performance 0.765 0.569 0.716 766 6142
founders 0.750 0.379 0.627 380 5756
genre 0.580 0.171 0.392 170 5546
has_sibling 0.898 0.230 0.569 499 5875
has_spouse 0.910 0.322 0.666 594 5970
is_a 0.622 0.215 0.451 497 5873
nationality 0.619 0.173 0.408 301 5677
parents 0.860 0.532 0.766 312 5688
place_of_birth 0.633 0.215 0.455 233 5609
place_of_death 0.421 0.101 0.257 159 5535
profession 0.540 0.190 0.395 247 5623
worked_at 0.714 0.248 0.519 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.704 0.316 0.546 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.539 Córdoba
2.496 Taluks
2.400 Valais
..... .....
-1.385 India
-1.403 he
-1.499 who
Highest and lowest feature weights for relation author:
3.086 author
2.698 book
2.456 books
..... .....
-2.024 or
-2.123 directed
-2.998 1852
Highest and lowest feature weights for relation capital:
2.474 capital
1.710 capitals
1.703 km
..... .....
-1.763 million
-1.942 Province
-1.967 Isfahan
Highest and lowest feature weights for relation contains:
2.792 third-largest
2.090 attended
2.086 notably
..... .....
-2.372 film
-2.408 who
-2.694 band
Highest and lowest feature weights for relation film_performance:
4.289 starring
3.554 movie
3.253 co-starring
..... .....
-1.751 Roman
-2.664 Keystone
-3.950 double
Highest and lowest feature weights for relation founders:
3.955 founded
3.908 founder
3.401 co-founder
..... .....
-1.576 novel
-1.792 philosopher
-1.941 music
Highest and lowest feature weights for relation genre:
3.181 series
2.966 album
2.754 movie
..... .....
-1.267 well
-1.389 and
-1.943 at
Highest and lowest feature weights for relation has_sibling:
5.035 brother
4.116 sister
2.847 Marlon
..... .....
-1.453 alongside
-1.501 city
-1.946 Her
Highest and lowest feature weights for relation has_spouse:
5.034 wife
4.389 husband
4.354 widow
..... .....
-1.471 children
-1.812 44
-2.193 friend
Highest and lowest feature weights for relation is_a:
3.330
2.555 Genus
2.532 vocalist
..... .....
-1.480 on
-1.572 at
-5.921 characin
Highest and lowest feature weights for relation nationality:
2.738 born
1.871 caliph
1.819 Pinky
..... .....
-1.404 part
-1.477 American
-1.704 2010
Highest and lowest feature weights for relation parents:
5.114 son
4.899 daughter
4.201 father
..... .....
-1.650 when
-1.748 played
-1.966 Jahangir
Highest and lowest feature weights for relation place_of_birth:
3.921 born
3.021 birthplace
2.527 mayor
..... .....
-1.377 or
-1.482 and
-2.214 Oldham
Highest and lowest feature weights for relation place_of_death:
2.598 died
1.982 rebuilt
1.945 son
..... .....
-1.225 destroyed
-1.287 and
-1.460 Siege
Highest and lowest feature weights for relation profession:
3.843
2.770 vocalist
2.494 American
..... .....
-1.281 from
-1.492 are
-1.975 on
Highest and lowest feature weights for relation worked_at:
3.050 professor
2.874 CEO
2.853 president
..... .....
-1.235 first
-1.283 critique
-1.658 or
###Markdown
Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A call to `rel_ext.experiment` training on the 'train' part of `splits` and assessing on its `dev` part, with `featurizers` as defined above in this notebook and the `model_factory` set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values.
###Code
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
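    # One possible completion sketch: count stringified WordNet synsets derived
    # from `middle_POS`, in both entity orders (see `get_synsets` below):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for synset in get_synsets(ex.middle_POS):
            feature_counter[synset] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for synset in get_synsets(ex.middle_POS):
            feature_counter[synset] += 1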
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
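    # A minimal completion sketch: query WordNet for each (word, tag) pair,
    # mapping the Treebank tag with `convert_tag` (below), and stringify the results.
    synsets = []
    for word, tag in wt:
        synsets += wn.synsets(word, pos=convert_tag(tag))
    return [str(ss) for ss in synsets]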
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
    elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
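As one concrete illustration of the ideas above, here is a minimal sketch of a featurizer for the length of the middle span; it assumes the same `(kbt, corpus, feature_counter)` interface and `corpus` API as `simple_bag_of_words_featurizer`, and is not part of the required submission:
```
def middle_length_featurizer(kbt, corpus, feature_counter):
    # Bucket the whitespace-tokenized length of each middle span (capped at 10).
    for sbj, obj in ((kbt.sbj, kbt.obj), (kbt.obj, kbt.sbj)):
        for ex in corpus.get_examples_for_entities(sbj, obj):
            feature_counter['middle_len={}'.format(min(len(ex.middle.split()), 10))] += 1
    return feature_counter
```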
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
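As a small illustration of the bigram idea above, a hedged sketch of a word-bigram featurizer; it assumes the same `(kbt, corpus, feature_counter)` interface and `corpus` API as `simple_bag_of_words_featurizer`, and is only one of many possible starting points:
```
def middle_word_bigram_featurizer(kbt, corpus, feature_counter):
    # Count adjacent word pairs in the middle span, for both entity orders.
    for sbj, obj in ((kbt.sbj, kbt.obj), (kbt.obj, kbt.sbj)):
        for ex in corpus.get_examples_for_entities(sbj, obj):
            words = ex.middle.split()
            for left, right in zip(words, words[1:]):
                feature_counter[left + ' ' + right] += 1
    return feature_counter
```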
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# STOP COMMENT: Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.875 0.391 0.701 340 5716
author 0.781 0.527 0.712 509 5885
capital 0.625 0.211 0.448 95 5471
contains 0.799 0.597 0.749 3904 9280
film_performance 0.792 0.560 0.731 766 6142
founders 0.799 0.387 0.659 380 5756
genre 0.667 0.188 0.442 170 5546
has_sibling 0.907 0.234 0.576 499 5875
has_spouse 0.892 0.318 0.655 594 5970
is_a 0.693 0.209 0.474 497 5873
nationality 0.584 0.196 0.418 301 5677
parents 0.891 0.526 0.782 312 5688
place_of_birth 0.610 0.202 0.434 233 5609
place_of_death 0.483 0.088 0.255 159 5535
profession 0.636 0.198 0.441 247 5623
worked_at 0.719 0.264 0.535 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.735 0.319 0.563 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.474 Córdoba
2.445 Taluks
2.417 Valais
..... .....
-1.479 America
-1.503 who
-2.338 Earth
Highest and lowest feature weights for relation author:
2.744 books
2.270 writer
2.249 by
..... .....
-2.886 1945
-2.998 1818
-6.984 1865
Highest and lowest feature weights for relation capital:
3.482 capital
1.890 city
1.728 especially
..... .....
-1.295 new
-1.507 Antrim
-1.515 Westminster
Highest and lowest feature weights for relation contains:
2.218 notably
2.131 third-largest
1.934 districts
..... .....
-2.776 Mile
-3.305 6th
-4.029 Antrim
Highest and lowest feature weights for relation film_performance:
4.080 starring
3.854 alongside
3.626 co-starring
..... .....
-1.901 Malice
-1.929 Wonderland
-1.990 Westminster
Highest and lowest feature weights for relation founders:
3.817 founder
3.687 founded
2.770 formed
..... .....
-1.951 novel
-2.391 William
-2.733 Griffith
Highest and lowest feature weights for relation genre:
2.955 series
2.747 album
2.650
..... .....
-1.423 and
-1.648 ;
-1.838 at
Highest and lowest feature weights for relation has_sibling:
5.093 brother
4.251 sister
2.965 nephew
..... .....
-1.326 starring
-1.504 alongside
-1.605 singer-songwriter
Highest and lowest feature weights for relation has_spouse:
4.956 wife
4.781 husband
4.550 married
..... .....
-1.934 Straus
-1.934 Isidor
-2.537 friend
Highest and lowest feature weights for relation is_a:
3.047
2.453 Genus
2.351 genus
..... .....
-1.644 on
-1.745 emperor
-5.010 characin
Highest and lowest feature weights for relation nationality:
2.814 born
2.015 ruler
1.892 Pinky
..... .....
-1.623 part
-1.625 ;
-1.652 American
Highest and lowest feature weights for relation parents:
4.939 son
4.543 daughter
4.271 father
..... .....
-1.565 half
-2.188 Kelly
-2.654 Indian
Highest and lowest feature weights for relation place_of_birth:
3.969 born
2.615 mayor
2.218 b
..... .....
-1.396 or
-1.439 and
-1.528 Indian
Highest and lowest feature weights for relation place_of_death:
2.374 died
2.003 assassinated
1.929 where
..... .....
-1.169 ;
-1.239 and
-1.924 Westminster
Highest and lowest feature weights for relation profession:
3.656
2.600 American
2.228 replaced
..... .....
-1.279 novel
-1.327 newcomer
-2.278 on
Highest and lowest feature weights for relation worked_at:
3.345 CEO
3.123 professor
2.860 head
..... .....
-1.305 part
-1.362 novel
-1.607 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.852 0.456 0.726 340 5716
author 0.869 0.442 0.728 509 5885
capital 0.556 0.211 0.418 95 5471
contains 0.655 0.409 0.585 3904 9280
film_performance 0.845 0.321 0.637 766 6142
founders 0.822 0.232 0.545 380 5756
genre 0.450 0.053 0.180 170 5546
has_sibling 0.878 0.230 0.562 499 5875
has_spouse 0.916 0.350 0.692 594 5970
is_a 0.730 0.147 0.407 497 5873
nationality 0.670 0.216 0.472 301 5677
parents 0.894 0.404 0.719 312 5688
place_of_birth 0.607 0.219 0.448 233 5609
place_of_death 0.462 0.113 0.286 159 5535
profession 0.717 0.154 0.414 247 5623
worked_at 0.708 0.260 0.527 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.727 0.264 0.522 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
from sklearn.svm import SVC
return rel_ext.experiment(
splits,
featurizers=featurizers,
model_factory = (lambda: SVC(
kernel='linear'
))
)
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.830 0.359 0.657 340 5716
author 0.731 0.591 0.698 509 5885
capital 0.643 0.284 0.513 95 5471
contains 0.784 0.607 0.741 3904 9280
film_performance 0.750 0.612 0.718 766 6142
founders 0.734 0.429 0.643 380 5756
genre 0.603 0.241 0.464 170 5546
has_sibling 0.816 0.240 0.552 499 5875
has_spouse 0.851 0.347 0.659 594 5970
is_a 0.630 0.274 0.500 497 5873
nationality 0.528 0.186 0.386 301 5677
parents 0.843 0.587 0.775 312 5688
place_of_birth 0.526 0.215 0.408 233 5609
place_of_death 0.378 0.088 0.228 159 5535
profession 0.618 0.275 0.495 247 5623
worked_at 0.626 0.298 0.513 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.681 0.352 0.559 9248 95264
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word + subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word + object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
results = rel_ext.experiment(
splits,
featurizers=[directional_bag_of_words_featurizer]
)
##### YOUR CODE HERE
len(results['vectorizer'].feature_names_)
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for tag_bigrams in get_tag_bigrams(ex.middle_POS):
feature_counter[tag_bigrams] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for tag_bigrams in get_tag_bigrams(ex.middle_POS):
feature_counter[tag_bigrams] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
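    # Pad the tag sequence with the boundary symbols, then pair adjacent tags: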
ret = []
prev_tag = start_symbol
for tag in get_tags(s) + [end_symbol]:
ret.append(prev_tag + ' ' + tag)
prev_tag = tag
return ret
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
results = rel_ext.experiment(
splits,
featurizers=[middle_bigram_pos_tag_featurizer]
)
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for synset in get_synsets(ex.middle_POS):
feature_counter[synset] += 1
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for synset in get_synsets(ex.middle_POS):
feature_counter[synset] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
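    # Look up WordNet synsets for each (word, tag) pair, converting the tag, and stringify them: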
ret = []
for word, tag in wt:
ret += wn.synsets(word, pos=convert_tag(tag))
return [str(synset) for synset in ret]
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
results = rel_ext.experiment(
splits,
featurizers=[synset_featurizer]
)
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
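Before the specific system below, note that `rel_ext.experiment` accepts a list of featurizers, so the hand-built features from the earlier questions can be combined; a hedged sketch, assuming the featurizers defined above in this notebook:
```
combined_results = rel_ext.experiment(
    splits,
    featurizers=[directional_bag_of_words_featurizer,
                 middle_bigram_pos_tag_featurizer])
```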
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
# In this system, we just use a RandomForestClassifier and the glove featurizer
def experiment():
from sklearn.ensemble import RandomForestClassifier
return rel_ext.experiment(
splits,
featurizers=[glove_middle_featurizer],
model_factory = (lambda: RandomForestClassifier(
n_estimators=100
)),
vectorize=False,
verbose=True)
results = experiment()
print(results)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.845 0.450 0.719 340 5716
author 0.916 0.409 0.734 509 5885
capital 0.824 0.147 0.429 95 5471
contains 0.606 0.508 0.583 3904 9280
film_performance 0.816 0.354 0.647 766 6142
founders 0.823 0.171 0.467 380 5756
genre 0.611 0.065 0.227 170 5546
has_sibling 0.789 0.202 0.500 499 5875
has_spouse 0.870 0.316 0.645 594 5970
is_a 0.766 0.145 0.412 497 5873
nationality 0.771 0.213 0.506 301 5677
parents 0.918 0.359 0.700 312 5688
place_of_birth 0.870 0.172 0.480 233 5609
place_of_death 0.471 0.050 0.176 159 5535
profession 0.800 0.146 0.422 247 5623
worked_at 0.764 0.227 0.519 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.779 0.246 0.510 9248 95264
{'featurizers': [<function glove_middle_featurizer at 0x0000024C8E87F168>], 'vectorizer': None, 'models': {'adjoins': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'author': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'capital': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'contains': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'film_performance': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'founders': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'genre': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'has_sibling': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'has_spouse': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'is_a': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'nationality': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'parents': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'place_of_birth': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'place_of_death': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'profession': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False), 'worked_at': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=100,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False)}, 'all_relations': ['adjoins', 'author', 'capital', 'contains', 'film_performance', 'founders', 'genre', 'has_sibling', 'has_spouse', 'is_a', 'nationality', 'parents', 'place_of_birth', 'place_of_death', 'profession', 'worked_at'], 'vectorize': False}
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
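# Optional sketch (not part of the original featurizer above): because the
# pooling operation is just `np_func`, a mean-pooled variant can reuse the
# featurizer by passing `np_func=np.mean`.
def glove_middle_mean_featurizer(kbt, corpus):
    return glove_middle_featurizer(kbt, corpus, np_func=np.mean)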
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
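A minimal sketch of one possible featurizer (mirroring a completed version later in this file): suffix each word feature with a marker for the direction of the example, then read the feature count off the objects returned by `rel_ext.experiment`.
```
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word + '_SO'] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word + '_OS'] += 1
    return feature_counter

results = rel_ext.experiment(
    splits,
    train_split='train',
    test_split='dev',
    featurizers=[directional_bag_of_words_featurizer],
    verbose=True)

# One way to count the feature names (the method name is an assumption
# about the scikit-learn version in use):
# len(results['vectorizer'].get_feature_names())
```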
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
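A minimal sketch of one possible approach (a completed version appears later in this file): pad the tag sequence from `get_tags` with the boundary symbols and emit space-joined adjacent pairs, then count those bigrams over `middle_POS` in both directions.
```
def get_tag_bigrams(s):
    tags = ["<s>"] + get_tags(s) + ["</s>"]
    return [tags[i] + ' ' + tags[i + 1] for i in range(len(tags) - 1)]

def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for bigram in get_tag_bigrams(ex.middle_POS):
            feature_counter[bigram] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for bigram in get_tag_bigrams(ex.middle_POS):
            feature_counter[bigram] += 1
    return feature_counter
```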
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
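A minimal sketch of one possible helper (mirroring a completed version later in this file): parse the word/POS pairs with `parse_lem`, convert each tag with `convert_tag`, and stringify every synset WordNet returns.
```
def get_synsets(s):
    wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
    return [str(ss) for word, tag in wt
            for ss in wn.synsets(word, pos=convert_tag(tag))]
```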
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
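As one concrete illustration of the "length of the middle" idea above, a hedged sketch of a featurizer that adds one count feature per observed middle length, in both orders (the `len_` prefix is just an illustrative naming choice to keep these keys apart from bag-of-words keys):
```
def middle_length_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        feature_counter['len_' + str(len(ex.middle.split()))] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        feature_counter['len_' + str(len(ex.middle.split()))] += 1
    return feature_counter
```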
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# STOP COMMENT: Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.841 0.388 0.682 340 5716
author 0.784 0.534 0.717 509 5885
capital 0.607 0.179 0.411 95 5471
contains 0.800 0.597 0.749 3904 9280
film_performance 0.790 0.569 0.733 766 6142
founders 0.801 0.392 0.663 380 5756
genre 0.595 0.147 0.370 170 5546
has_sibling 0.842 0.246 0.568 499 5875
has_spouse 0.865 0.313 0.640 594 5970
is_a 0.669 0.227 0.482 497 5873
nationality 0.625 0.166 0.403 301 5677
parents 0.862 0.519 0.761 312 5688
place_of_birth 0.681 0.210 0.470 233 5609
place_of_death 0.586 0.107 0.309 159 5535
profession 0.613 0.198 0.432 247 5623
worked_at 0.718 0.252 0.524 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.730 0.315 0.557 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.544 Córdoba
2.430 Taluks
2.262 Valais
..... .....
-1.145 who
-1.550 Europe
-1.664 America
Highest and lowest feature weights for relation author:
2.934 author
2.613 wrote
2.374 books
..... .....
-2.620 Alice
-6.069 dystopian
-6.974 1865
Highest and lowest feature weights for relation capital:
2.720 capital
1.737 km
1.658 capitals
..... .....
-1.219 and
-1.241 North
-1.566 ’
Highest and lowest feature weights for relation contains:
2.889 third-largest
2.431 bordered
2.279 County
..... .....
-2.264 Malaysian
-2.355 band
-2.402 who
Highest and lowest feature weights for relation film_performance:
4.170 starring
3.729 co-starring
3.475 alongside
..... .....
-1.965 Malice
-1.966 Wonderland
-2.165 Iruvar
Highest and lowest feature weights for relation founders:
4.432 founder
3.623 founded
3.509 co-founder
..... .....
-1.729 band
-1.837 novel
-2.540 Bauhaus
Highest and lowest feature weights for relation genre:
2.964 album
2.721
2.633 series
..... .....
-1.395 during
-1.547 and
-1.903 at
Highest and lowest feature weights for relation has_sibling:
5.186 brother
4.249 sister
2.928 nephew
..... .....
-1.183 including
-1.227 Great
-1.291 starring
Highest and lowest feature weights for relation has_spouse:
4.990 wife
4.464 husband
4.456 widow
..... .....
-1.299 assassinated
-1.403 on
-1.633 grandson
Highest and lowest feature weights for relation is_a:
3.394 family
3.185
2.354 order
..... .....
-1.541 at
-1.622 emperor
-3.865 widespread
Highest and lowest feature weights for relation nationality:
2.746 born
2.077 president
1.901 Pinky
..... .....
-1.626 2010
-1.646 report
-1.781 American
Highest and lowest feature weights for relation parents:
5.076 son
4.605 daughter
4.270 father
..... .....
-2.100 filmmaker
-2.500 Gamal
-2.840 Indian
Highest and lowest feature weights for relation place_of_birth:
3.805 born
3.262 birthplace
2.779 mayor
..... .....
-1.433 or
-1.456 and
-1.932 Indian
Highest and lowest feature weights for relation place_of_death:
2.174 died
1.988 where
1.890 rebuilt
..... .....
-1.050 as
-1.149 or
-1.250 and
Highest and lowest feature weights for relation profession:
3.752
2.594 American
2.199 philosopher
..... .....
-1.223 at
-1.408 York
-2.017 on
Highest and lowest feature weights for relation worked_at:
3.321 president
3.042 professor
3.029 CEO
..... .....
-1.554 Bauhaus
-1.674 report
-1.679 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.833 0.471 0.722 340 5716
author 0.832 0.446 0.709 509 5885
capital 0.519 0.147 0.345 95 5471
contains 0.650 0.410 0.582 3904 9280
film_performance 0.804 0.321 0.618 766 6142
founders 0.722 0.239 0.515 380 5756
genre 0.500 0.059 0.200 170 5546
has_sibling 0.821 0.238 0.551 499 5875
has_spouse 0.855 0.357 0.668 594 5970
is_a 0.720 0.135 0.386 497 5873
nationality 0.648 0.189 0.436 301 5677
parents 0.874 0.401 0.707 312 5688
place_of_birth 0.653 0.210 0.460 233 5609
place_of_death 0.485 0.101 0.275 159 5535
profession 0.667 0.146 0.389 247 5623
worked_at 0.762 0.264 0.554 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.709 0.258 0.507 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
from sklearn.svm import SVC
def run_svm_model_factory():
##### YOUR CODE HERE
# return rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[glove_middle_featurizer],
# model_factory=lambda: SVC(kernel='linear', max_iter=3000),
# vectorize=False,
# verbose=True)
return rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[simple_bag_of_words_featurizer],
model_factory=lambda: SVC(kernel='linear', max_iter=3000),
verbose=True)
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/svm/_base.py:231: ConvergenceWarning: Solver terminated early (max_iter=3000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
% self.max_iter, ConvergenceWarning)
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
directional_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer],
verbose=True)
print('Number of model features:', directional_results['models']['adjoins'].coef_.shape[1])
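# A hedged alternative for the same count: read it off the returned
# DictVectorizer, e.g. len(directional_results['vectorizer'].get_feature_names())
# (or .get_feature_names_out() on newer scikit-learn releases).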
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for bg in get_tag_bigrams(ex.middle_POS):
feature_counter[bg] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for bg in get_tag_bigrams(ex.middle_POS):
feature_counter[bg] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
tags = [start_symbol]+get_tags(s)+[end_symbol]
bigrams = []
for i in range(len(tags)-1):
bigrams.append(tags[i]+' '+tags[i+1])
return bigrams
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
directional_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
# import nltk
# nltk.download('wordnet')
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for bg in get_synsets(ex.middle_POS):
feature_counter[bg] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for bg in get_synsets(ex.middle_POS):
feature_counter[bg] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
return [str(syn) for word, pos in wt for syn in wn.synsets(word, pos=convert_tag(pos))]
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
directional_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
"""
For our final solution we used a feature ensemble of bidirectional bag-of-words features and trigram-level POS bag-of-words features.
This netted us an F-score of .65 (P=0.798, R=0.397) using Logistic Regression.
We tried other feature combinations of n-gram-level POS strings, GloVe embeddings, and middle-length featurization (as an average or as a bag of lengths).
We tried an SVM classifier with an RBF kernel; however, it only reached an F-score of .53 (due to poor recall) even with 15000 max iterations.
"""
def middle_trigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for bg in get_tag_trigrams(ex.middle_POS):
feature_counter[bg] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for bg in get_tag_trigrams(ex.middle_POS):
feature_counter[bg] += 1
return feature_counter
def get_tag_trigrams(s):
"""Suggested helper method for `middle_trigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS trigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
tags = [start_symbol]+get_tags(s)+[end_symbol]
trigrams = []
for i in range(len(tags)-2):
trigrams.append(tags[i]+' '+tags[i+1]+' '+tags[i+2])
return trigrams
directional_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_trigram_pos_tag_featurizer, directional_bag_of_words_featurizer],
verbose=True)
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# My peak score was: 0.65
# Please do not remove this comment.
# SVM attempt
# directional_results = rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[middle_trigram_pos_tag_featurizer, directional_bag_of_words_featurizer],
# model_factory=lambda: SVC(max_iter=15000),
# verbose=True)
# Only Trigram features
# directional_results = rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[middle_trigram_pos_tag_featurizer],
# verbose=True)
# 4 gram features
# def middle_4gram_pos_tag_featurizer(kbt, corpus, feature_counter):
# ##### YOUR CODE HERE
# for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
# for bg in get_tag_4grams(ex.middle_POS):
# feature_counter[bg] += 1
# for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
# for bg in get_tag_4grams(ex.middle_POS):
# feature_counter[bg] += 1
# return feature_counter
# def get_tag_4grams(s):
# """Suggested helper method for `middle_trigram_pos_tag_featurizer`.
# This should be defined so that it returns a list of str, where each
# element is a POS trigram."""
# # The values of `start_symbol` and `end_symbol` are defined
# # here so that you can use `test_middle_bigram_pos_tag_featurizer`.
# start_symbol = "<s>"
# end_symbol = "</s>"
# ##### YOUR CODE HERE
# tags = [start_symbol]+get_tags(s)+[end_symbol]
# ngrams = []
# for i in range(len(tags)-3):
# ngrams.append(tags[i]+' '+tags[i+1]+' '+tags[i+2]+' '+tags[i+3])
# return ngrams
# directional_results = rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[middle_4gram_pos_tag_featurizer],
# verbose=True)
# # bigram and directional_bag_of_words
# directional_results = rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[middle_bigram_pos_tag_featurizer, directional_bag_of_words_featurizer],
# verbose=True)
# bag of middle length featurizer
# def middle_length_featurizer(kbt, corpus, feature_counter):
# ##### YOUR CODE HERE
# total = 0
# count = 0
# for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
# feature_counter[str(len(ex.middle.split()))] += 1
# # total += len(ex.middle.split())
# # count += 1
# for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
# feature_counter[str(len(ex.middle.split()))] += 1
# # total += len(ex.middle.split())
# # count += 1
# # if count == 0:
# # div = 0
# # else:
# # div = total/count
# # feature_counter['average_length'] += div
# return feature_counter
# directional_results = rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[middle_trigram_pos_tag_featurizer, directional_bag_of_words_featurizer, middle_length_featurizer],
# verbose=True)
#Unvectorized GLoVE featurizer combined with middle trigram and directional bag of words
# def glove_slow_featurizer(kbt, corpus, feature_counter, np_func=np.sum):
# reps = []
# for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
# for word in ex.middle.split():
# rep = glove_lookup.get(word)
# if rep is not None:
# reps.append(rep)
# # A random representation of the right dimensionality if the
# # example happens not to overlap with GloVe's vocabulary:
# if len(reps) == 0:
# dim = len(next(iter(glove_lookup.values())))
# vec = utils.randvec(n=dim)
# else:
# vec = np_func(reps, axis=0)
# feature_counter.update({'glove_feat_'+str(i): v for i, v in enumerate(vec.flatten().tolist())})
# return feature_counter
# directional_results = rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[glove_slow_featurizer, middle_trigram_pos_tag_featurizer, directional_bag_of_words_featurizer],
# verbose=True)
#GLoVE featurizers trying to capture directionality
# def glove_middle_forward_reverse_featurizer(kbt, corpus, np_func=np.sum):
# reps = []
# for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
# for word in ex.middle.split():
# rep = glove_lookup.get(word)
# if rep is not None:
# reps.append(rep)
# reps2 = []
# for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
# for word in ex.middle.split():
# rep = glove_lookup.get(word)
# if rep is not None:
# reps2.append(rep)
# # A random representation of the right dimensionality if the
# # example happens not to overlap with GloVe's vocabulary:
# if len(reps) == 0:
# dim = len(next(iter(glove_lookup.values())))
# v1 = utils.randvec(n=dim)
# else:
# v1 = np_func(reps, axis=0)
# if len(reps2) == 0:
# dim = len(next(iter(glove_lookup.values())))
# v2 = utils.randvec(n=dim)
# else:
# v2 = np_func(reps2, axis=0)
# return np.concatenate((v1,v2), axis=-1)
# def glove_middle_both_featurizer(kbt, corpus, np_func=np.sum):
# reps = []
# for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
# for word in ex.middle.split():
# rep = glove_lookup.get(word)
# if rep is not None:
# reps.append(rep)
# for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
# for word in ex.middle.split():
# rep = glove_lookup.get(word)
# if rep is not None:
# reps.append(rep)
# # A random representation of the right dimensionality if the
# # example happens not to overlap with GloVe's vocabulary:
# if len(reps) == 0:
# dim = len(next(iter(glove_lookup.values())))
# return utils.randvec(n=dim)
# else:
# return np_func(reps, axis=0)
# glove_results = rel_ext.experiment(
# splits,
# train_split='train',
# test_split='dev',
# featurizers=[glove_middle_reverse_featurizer],
# vectorize=False, # Crucial for this featurizer!
# verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.773 0.462 0.681 340 5716
author 0.807 0.583 0.750 509 5885
capital 0.548 0.358 0.496 95 5471
contains 0.800 0.662 0.768 3904 9280
film_performance 0.812 0.614 0.762 766 6142
founders 0.697 0.437 0.623 380 5756
genre 0.440 0.235 0.375 170 5546
has_sibling 0.760 0.255 0.544 499 5875
has_spouse 0.767 0.387 0.641 594 5970
is_a 0.634 0.219 0.460 497 5873
nationality 0.512 0.216 0.402 301 5677
parents 0.783 0.532 0.716 312 5688
place_of_birth 0.527 0.249 0.431 233 5609
place_of_death 0.310 0.164 0.263 159 5535
profession 0.494 0.166 0.354 247 5623
worked_at 0.661 0.306 0.536 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.645 0.365 0.550 9248 95264
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
rel_ext_data_home_test = os.path.join(
rel_ext_data_home, 'bakeoff-rel_ext-test-data')
bakeoff_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_trigram_pos_tag_featurizer, directional_bag_of_words_featurizer],
verbose=True)
rel_ext.bake_off_experiment(bakeoff_results, rel_ext_data_home_test)
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
0.636
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Julio Amador from Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.0, 0.8, 0.2],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.912 0.365 0.701 340 5716
author 0.807 0.550 0.738 509 5885
capital 0.485 0.168 0.352 95 5471
contains 0.795 0.599 0.747 3904 9280
film_performance 0.778 0.570 0.725 766 6142
founders 0.756 0.400 0.642 380 5756
genre 0.538 0.165 0.370 170 5546
has_sibling 0.852 0.242 0.567 499 5875
has_spouse 0.822 0.318 0.624 594 5970
is_a 0.692 0.235 0.499 497 5873
nationality 0.565 0.173 0.389 301 5677
parents 0.868 0.548 0.777 312 5688
place_of_birth 0.623 0.206 0.444 233 5609
place_of_death 0.488 0.132 0.317 159 5535
profession 0.600 0.194 0.423 247 5623
worked_at 0.716 0.260 0.530 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.706 0.320 0.553 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
return rel_ext.experiment(
splits,
train_split='train',
model_factory=(lambda : SVC(
kernel='linear')),
test_split='dev',
featurizers=[simple_bag_of_words_featurizer],
verbose=False
)
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word + subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word + object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer],
verbose=True)
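# Question 3: how many feature names does the `vectorizer` have?
# A minimal sketch, assuming (as the question suggests) that the dict returned by
# `rel_ext.experiment` exposes the fitted vectorizer under the 'vectorizer' key;
# in practice you would store the return value of the call above rather than
# re-running the experiment as done here.
directional_results = rel_ext.experiment(
    splits,
    train_split='train',
    test_split='dev',
    featurizers=[directional_bag_of_words_featurizer],
    verbose=False)
# `get_feature_names()` is the API in the sklearn versions used for this course;
# newer releases use `get_feature_names_out()`.
print(len(directional_results['vectorizer'].get_feature_names()))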
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
'The/DT dog/N napped/V'.split('/')
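# Counts POS-tag bigrams over the middle spans (with <s>/</s> boundary symbols),
# pooling over both entity orders, on the model of the featurizers above.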
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
pos_list = get_tag_bigrams(ex.middle_POS)
for tag in pos_list:
feature_counter[tag] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
pos_list = get_tag_bigrams(ex.middle_POS)
for tag in pos_list:
feature_counter[tag] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
pos_tags = get_tags(s)
pos_tags.insert(0, start_symbol)
pos_tags.append(end_symbol)
return [' '.join(pos_tags[i:i+2]) for i in range(len(pos_tags) - 1)]
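# e.g. get_tag_bigrams('The/DT dog/N napped/V')
#   == ['<s> DT', 'DT N', 'N V', 'V </s>']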
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
synset_list = get_synsets(ex.middle_POS)
for syn in synset_list:
feature_counter[syn] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
synset_list = get_synsets(ex.middle_POS)
for syn in synset_list:
feature_counter[syn] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
synset_list = []
for i in wt:
word = i[0]
tag = convert_tag(i[1])
synset = wn.synsets(word, pos=tag)
synset = [str(s) for s in synset]
synset_list.extend(synset)
return synset_list
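# e.g. get_synsets('dog/N') yields stringified synsets beginning with
# "Synset('dog.n.01')" and "Synset('frump.n.01')", mirroring the
# wn.synsets('dog', pos='n') example quoted in the question above.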
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
from sklearn.ensemble import GradientBoostingClassifier
rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer, middle_bigram_pos_tag_featurizer],
model_factory=lambda: GradientBoostingClassifier(),
verbose=True)
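# (Illustrative sketch only, not part of the submitted system.) One of the ideas
# listed above is a feature for the length of the "middle" span. A minimal
# featurizer for that idea, reusing the (kbt, corpus, feature_counter) interface
# of the featurizers above, could look like this:
def middle_length_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        feature_counter['middle_len={}'.format(len(ex.middle.split()))] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        feature_counter['middle_len={}'.format(len(ex.middle.split()))] += 1
    return feature_counter
# It could then be appended to the `featurizers` list passed to `rel_ext.experiment`.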
# Enter your system description in this cell.
# Please do not remove this comment.
'''
The selected model uses sklearn's GradientBoostingClassifier as the model_factory, since it may be
more robust to noisy data, which helps in the distant-supervision setting.
Moreover, I decided to include both the directional BoW features and the POS-tag bigram features.
The first because directionality seems to help the most: given that relations aren't symmetric,
adding those suffixes helps to disambiguate relations. The second because it marginally improves
performance, very likely because it also helps with disambiguation.
'''
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
from sklearn.ensemble import GradientBoostingClassifier
bakeoff_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer, middle_bigram_pos_tag_featurizer],
model_factory=lambda: GradientBoostingClassifier(),
verbose=False)
rel_ext_data_home_test = os.path.join(
rel_ext_data_home, 'bakeoff-rel_ext-test-data')
rel_ext.bake_off_experiment(bakeoff_results, rel_ext_data_home_test)
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
0.608
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baseline](Baseline)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baseline
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A call to `rel_ext.experiment` training on the 'train' part of `splits` and assessing on its `dev` part, with `featurizers` as defined above in this notebook and the `model_factory` set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values.
###Code
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is a big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.2],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.886 0.388 0.705 340 5716
author 0.747 0.544 0.695 509 5885
capital 0.680 0.179 0.436 95 5471
contains 0.805 0.598 0.753 3904 9280
film_performance 0.784 0.568 0.728 766 6142
founders 0.792 0.382 0.652 380 5756
genre 0.544 0.182 0.389 170 5546
has_sibling 0.820 0.246 0.560 499 5875
has_spouse 0.917 0.337 0.682 594 5970
is_a 0.690 0.219 0.483 497 5873
nationality 0.651 0.179 0.427 301 5677
parents 0.850 0.526 0.756 312 5688
place_of_birth 0.653 0.210 0.460 233 5609
place_of_death 0.562 0.113 0.314 159 5535
profession 0.553 0.190 0.400 247 5623
worked_at 0.648 0.244 0.487 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.724 0.319 0.558 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.507 Córdoba
2.453 Taluks
2.453 Valais
..... .....
-1.126 Granada
-1.205 America
-2.577 Earth
Highest and lowest feature weights for relation author:
2.807 author
2.640 books
2.603 wrote
..... .....
-2.293 or
-2.812 infamous
-4.036 1945
Highest and lowest feature weights for relation capital:
3.091 capital
1.852 especially
1.692 posted
..... .....
-1.973 ~3.9
-1.982 pop
-1.982 million
Highest and lowest feature weights for relation contains:
2.866 third-largest
2.279 bordered
2.201 suburb
..... .....
-2.503 who
-2.846 Mile
-4.224 Antrim
Highest and lowest feature weights for relation film_performance:
4.019 co-starring
3.972 starring
3.693 alongside
..... .....
-1.877 Keystone
-1.948 Wonderland
-1.948 Malice
Highest and lowest feature weights for relation founders:
4.003 founded
3.892 founder
3.634 co-founder
..... .....
-1.722 Bauhaus
-1.772 band
-1.919 writing
Highest and lowest feature weights for relation genre:
3.014 series
2.650 game
2.556
..... .....
-1.453 ;
-1.641 reality
-1.882 at
Highest and lowest feature weights for relation has_sibling:
5.030 brother
4.052 sister
2.622 half-brother
..... .....
-1.494 engineer
-1.975 formed
-2.225 Her
Highest and lowest feature weights for relation has_spouse:
5.073 wife
4.730 married
4.433 widow
..... .....
-1.737 engineer
-2.003 Straus
-2.003 Isidor
Highest and lowest feature weights for relation is_a:
3.180
2.524 family
2.435 order
..... .....
-1.600 York
-1.660 about
-3.520 Bombus
Highest and lowest feature weights for relation nationality:
2.879 born
1.966 ruler
1.903 caliph
..... .....
-1.418 American
-1.439 Mughal
-1.522 or
Highest and lowest feature weights for relation parents:
5.139 son
4.558 daughter
4.185 father
..... .....
-1.885 Gamal
-2.057 Jolie
-2.247 VIII
Highest and lowest feature weights for relation place_of_birth:
4.008 born
3.135 birthplace
2.863 mayor
..... .....
-1.292 I
-1.386 or
-1.464 and
Highest and lowest feature weights for relation place_of_death:
2.464 died
1.896 assassinated
1.861 where
..... .....
-1.241 and
-1.376 Siege
-1.467 ”
Highest and lowest feature weights for relation profession:
3.716
2.680 American
2.303 philosopher
..... .....
-1.411 are
-1.571 York
-2.106 on
Highest and lowest feature weights for relation worked_at:
3.636 CEO
3.019 professor
2.962 president
..... .....
-1.345 ”
-1.413 father
-1.806 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.874 0.468 0.744 340 5716
author 0.820 0.420 0.689 509 5885
capital 0.636 0.221 0.463 95 5471
contains 0.665 0.415 0.593 3904 9280
film_performance 0.823 0.328 0.632 766 6142
founders 0.745 0.216 0.500 380 5756
genre 0.423 0.065 0.201 170 5546
has_sibling 0.805 0.240 0.548 499 5875
has_spouse 0.855 0.357 0.668 594 5970
is_a 0.682 0.151 0.400 497 5873
nationality 0.687 0.226 0.488 301 5677
parents 0.896 0.413 0.726 312 5688
place_of_birth 0.716 0.206 0.479 233 5609
place_of_death 0.500 0.107 0.288 159 5535
profession 0.587 0.150 0.371 247 5623
worked_at 0.742 0.285 0.562 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.716 0.267 0.522 9248 95264
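###Markdown
The pooling function is easy to vary: `glove_middle_featurizer` exposes it through its `np_func` argument, so a mean-pooled variant needs only a thin wrapper. The cell below is a sketch under the same set-up as the cell above (same `splits` and `glove_lookup`); it is not one of the graded baselines, and the experiment call is left commented out.
###Code
# Sketch: mean-pooled variant of the GloVe middle-span baseline above.
def glove_middle_mean_featurizer(kbt, corpus):
    return glove_middle_featurizer(kbt, corpus, np_func=np.mean)

# glove_mean_results = rel_ext.experiment(
#     splits,
#     train_split='train',
#     test_split='dev',
#     featurizers=[glove_middle_mean_featurizer],
#     vectorize=False,  # the featurizer already returns dense vectors
#     verbose=True)
###Output
_____no_output_____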
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
from sklearn.svm import SVC
model_svc = lambda: SVC(kernel='linear')
svc_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_svc,
verbose=True)
return svc_results
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.775 0.365 0.633 340 5716
author 0.703 0.605 0.681 509 5885
capital 0.737 0.295 0.567 95 5471
contains 0.786 0.604 0.741 3904 9280
film_performance 0.759 0.627 0.729 766 6142
founders 0.777 0.439 0.673 380 5756
genre 0.523 0.271 0.441 170 5546
has_sibling 0.791 0.234 0.536 499 5875
has_spouse 0.843 0.352 0.659 594 5970
is_a 0.618 0.270 0.491 497 5873
nationality 0.521 0.206 0.399 301 5677
parents 0.825 0.590 0.764 312 5688
place_of_birth 0.547 0.202 0.407 233 5609
place_of_death 0.400 0.113 0.265 159 5535
profession 0.544 0.251 0.441 247 5623
worked_at 0.588 0.289 0.487 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.671 0.357 0.557 9248 95264
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word + subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word + object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
result = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer],
model_factory=model_factory,
verbose=True)
nber_features = len(result['vectorizer'].get_feature_names())  # use get_feature_names_out() on newer scikit-learn
print('The vectorizer has {} feature names'.format(nber_features))
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
def get_bigram_count(example, feature_counter):
sentence = example.middle_POS.strip() + " </s>"
prev_tag = "<s>"
for word in sentence.split(' '):
if not word:
continue
tag = word if word == "</s>" else word.split('/')[1]
bigram = prev_tag + " " + tag
prev_tag = tag
feature_counter[bigram] += 1
return feature_counter
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
feature_counter = get_bigram_count(ex, feature_counter)
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
feature_counter = get_bigram_count(ex, feature_counter)
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
    ##### YOUR CODE HERE
    # Completed for reference; `middle_bigram_pos_tag_featurizer` above builds
    # its bigrams inline, so it does not depend on this helper.
    tags = [start_symbol] + get_tags(s) + [end_symbol]
    return [left + " " + right for left, right in zip(tags[:-1], tags[1:])]
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
result = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
model_factory=model_factory,
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
"""
This feature function is just like simple_bag_of_words_featurizer except that it returns a list of Wordnet synsets
derived from kbt.middle_POS. The synsets have been turned into a string so that we can use them as keys in a
dictionary.
"""
# Get all examples in corpus for kbt independently of the subject/object relative position
all_examples = corpus.get_examples_for_entities(kbt.sbj, kbt.obj).copy()
all_examples += corpus.get_examples_for_entities(kbt.obj, kbt.sbj).copy()
for ex in all_examples:
synsets = get_synsets(ex.middle_POS)
for synset in synsets:
feature_counter[synset] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
# Converts the POS tags so that they can be used by WordNet.
wt_wn = [[pair[0], convert_tag(pair[1])] for pair in wt]
# list all "stringified" synsets associated to each [word, tag] in wt_wn
synsets = [str(synset) for pair in wt_wn for synset in wn.synsets(pair[0], pos=pair[1])]
return synsets
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
result = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
model_factory=model_factory,
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
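To make the first couple of ideas above concrete before the submitted system, here is a minimal sketch of a "length of the middle" featurizer; it assumes the same `kbt`/`corpus` interface used throughout this notebook and was not used in the system below.
###Code
# Sketch only (not part of the submitted system): the "length of the middle"
# idea from the list above, written in the style of the other featurizers.
def middle_length_featurizer(kbt, corpus, feature_counter):
    # Bucket the token length of each middle span, capping at 10 so that very
    # long middles share one feature.
    for sbj, obj in ((kbt.sbj, kbt.obj), (kbt.obj, kbt.sbj)):
        for ex in corpus.get_examples_for_entities(sbj, obj):
            length = min(len(ex.middle.split()), 10)
            feature_counter['middle_len={}'.format(length)] += 1
    return feature_counter
###Output
_____no_output_____
###Markdown
The system that was actually developed and scored for this question is described and implemented in the next cell.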
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
#
# The split used to train and test the classifier is the default split proposed in the notebook:
# split_fracs=[0.01, 0.79, 0.20].
#
# The classifier used is the LogisticRegression with the following parameters:
# fit_intercept=True,
# solver='liblinear',
# multi_class='auto',
# penalty='l1',
#   C=1.2.
#
# Featurizer:
# The featurizer used is a modified version of simple_bag_of_words_featurizer,
# including bigrams and trigrams as well. A bigram being "word1 word2" and a trigram
# "word1 word2 word3". It also uses a directional token feature - a token being a unigram,
# a bigram or a trigram - similar to the one used in the notebook by adding a subject-object
# suffix or an object-subject suffix to all the tokens.
# The function used for that purpose is ngrams_featurizer with parameters type='trigrams'
# and directional=True.
# My peak score was: 0.66
if 'IS_GRADESCOPE_ENV' not in os.environ:
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
utils.fix_random_seeds()
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
def get_bigrams_count(sentence, feature_counter, suffix):
new_sentence = sentence + " </s>"
prev_word = "<s>"
for word in new_sentence.strip().split(' '):
if not word:
continue
bigram = prev_word + " " + word
prev_word = word
if word != "</s>":
feature_counter[word + suffix] += 1
feature_counter[bigram + suffix] += 1
return feature_counter
def get_trigrams_count(sentence, feature_counter, suffix):
new_sentence = sentence + " </s> </s>"
prev_word = "<s>"
prev_bigram = "<s> <s>"
for word in new_sentence.strip().split(' '):
if not word:
continue
trigram = prev_bigram + " " + word
bigram = prev_word + " " + word
prev_bigram = bigram
prev_word = word
if word not in ["<s>", "</s>"]:
feature_counter[word + suffix] += 1
if bigram not in ["<s> <s>", "</s> </s>"]:
feature_counter[bigram + suffix] += 1
feature_counter[trigram + suffix] += 1
return feature_counter
def ngrams_featurizer(kbt, corpus, feature_counter, type, directional=False):
"""
    Modified version of simple_bag_of_words_featurizer that also includes bigrams or trigrams, depending on 'type'.
:param kbt:
:param corpus:
:param feature_counter:
    :param type: 'bigrams' or 'trigrams'
:param directional: indicates if we take into account the order of subject and object.
:return:
"""
subject_object_suffix = "_SO" if directional else ""
object_subject_suffix = "_OS" if directional else ""
if type == 'bigrams':
ngrams = get_bigrams_count
elif type == 'trigrams':
ngrams = get_trigrams_count
else:
raise Exception('type needs to be bigrams or trigrams.')
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
feature_counter = ngrams(ex.middle, feature_counter, subject_object_suffix)
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
feature_counter = ngrams(ex.middle, feature_counter, object_subject_suffix)
return feature_counter
def rel_ext_model(dataset):
"""
:param dataset: rel_ext.Dataset
:return:
dictionary: {
'featurizers': [list of feature functions],
'vectorizer': DictVectorizer used to convert feature dicts in matrices
'models': {relation: fitted model}
'all_relations': [list of relations considered]
'vectorize': bool # indicates if DictVectorizer is needed.
}
"""
splits = dataset.build_splits(split_fracs=[0.01, 0.79, 0.20])
lr = lambda: LogisticRegression(fit_intercept=True,
solver='liblinear',
multi_class='auto',
penalty='l1',
C=1.2, max_iter=150)
grams3_direct_featurizer = lambda *args: ngrams_featurizer(*args, type='trigrams', directional=True)
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[grams3_direct_featurizer],
model_factory=lr,
verbose=True)
return baseline_results
model_result = rel_ext_model(dataset)
# STOP COMMENT: Please do not remove this comment.
###Output
/Users/gdegournay/opt/anaconda3/envs/nlu/lib/python3.7/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
bakeoff_results = model_result
rel_ext_data_home_test = os.path.join(rel_ext_data_home, 'bakeoff-rel_ext-test-data')
rel_ext.bake_off_experiment(bakeoff_results, rel_ext_data_home_test)
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
0.656
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to the developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# STOP COMMENT: Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.857 0.388 0.690 340 5716
author 0.786 0.521 0.714 509 5885
capital 0.655 0.200 0.450 95 5471
contains 0.798 0.593 0.747 3904 9280
film_performance 0.784 0.569 0.729 766 6142
founders 0.785 0.403 0.659 380 5756
genre 0.543 0.147 0.353 170 5546
has_sibling 0.867 0.248 0.579 499 5875
has_spouse 0.846 0.323 0.639 594 5970
is_a 0.722 0.219 0.495 497 5873
nationality 0.637 0.169 0.411 301 5677
parents 0.862 0.538 0.769 312 5688
place_of_birth 0.613 0.197 0.432 233 5609
place_of_death 0.654 0.107 0.323 159 5535
profession 0.618 0.170 0.405 247 5623
worked_at 0.667 0.240 0.492 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.731 0.315 0.555 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.536 Córdoba
2.411 Valais
2.341 Taluks
..... .....
-1.272 India
-1.332 6th
-1.468 century
Highest and lowest feature weights for relation author:
2.590 wrote
2.447 author
2.333 by
..... .....
-2.866 1852
-3.908 17th
-5.685 dystopian
Highest and lowest feature weights for relation capital:
3.575 capital
1.717 posted
1.660 especially
..... .....
-1.280 state
-1.539 includes
-2.131 Madras
Highest and lowest feature weights for relation contains:
2.501 bordered
2.159 attended
2.038 Ontario
..... .....
-2.390 who
-3.158 Midlands
-3.456 6th
Highest and lowest feature weights for relation film_performance:
4.272 starring
3.677 co-starring
3.566 opposite
..... .....
-1.711 compatible
-1.729 fully
-1.777 Gandolfini
Highest and lowest feature weights for relation founders:
3.942 founder
3.859 founded
3.737 co-founder
..... .....
-1.692 Ramakrishna
-1.692 Math
-1.758 philosopher
Highest and lowest feature weights for relation genre:
2.805 series
2.653 album
2.515 movie
..... .....
-1.505 ;
-1.770 reality
-2.061 at
Highest and lowest feature weights for relation has_sibling:
5.302 brother
4.218 sister
2.765 Marlon
..... .....
-1.332 from
-1.333 James
-2.114 Her
Highest and lowest feature weights for relation has_spouse:
5.248 wife
4.580 husband
4.485 widow
..... .....
-1.305 Tyndareus
-1.356 Jeremy
-1.365 children
Highest and lowest feature weights for relation is_a:
2.771 family
2.654 order
2.537 philosopher
..... .....
-1.581 at
-1.990 Corvus
-2.133 birds
Highest and lowest feature weights for relation nationality:
2.731 born
1.946 caliph
1.881 Pinky
..... .....
-1.446 or
-1.482 coast
-1.589 American
Highest and lowest feature weights for relation parents:
5.538 son
5.057 father
4.301 daughter
..... .....
-1.572 Sonam
-1.631 announced
-1.988 Jolie
Highest and lowest feature weights for relation place_of_birth:
3.852 born
2.733 mayor
2.699 birthplace
..... .....
-1.261 coast
-1.395 or
-1.515 and
Highest and lowest feature weights for relation place_of_death:
2.656 died
2.041 son
1.940 assassinated
..... .....
-1.141 or
-1.164 coast
-1.312 and
Highest and lowest feature weights for relation profession:
2.732
2.475 philosopher
2.317 American
..... .....
-1.275 at
-1.387 ?
-1.890 on
Highest and lowest feature weights for relation worked_at:
2.976 head
2.933 professor
2.829 president
..... .....
-1.239 coast
-1.264 art
-1.708 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.885 0.429 0.730 340 5716
author 0.859 0.418 0.710 509 5885
capital 0.568 0.221 0.432 95 5471
contains 0.658 0.410 0.587 3904 9280
film_performance 0.808 0.330 0.627 766 6142
founders 0.833 0.237 0.554 380 5756
genre 0.750 0.053 0.206 170 5546
has_sibling 0.848 0.246 0.570 499 5875
has_spouse 0.839 0.352 0.657 594 5970
is_a 0.712 0.159 0.420 497 5873
nationality 0.667 0.193 0.447 301 5677
parents 0.878 0.417 0.719 312 5688
place_of_birth 0.658 0.215 0.466 233 5609
place_of_death 0.529 0.113 0.305 159 5535
profession 0.654 0.138 0.374 247 5623
worked_at 0.721 0.256 0.529 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.742 0.262 0.521 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
from sklearn.svm import LinearSVC, SVC
def run_svm_model_factory():
##### YOUR CODE HERE
model_factory = lambda: LinearSVC()
svc_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
model_factory=model_factory,
vectorize=False, # Crucial for this featurizer!
verbose=True)
return svc_results
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/svm/_base.py:977: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word + subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word + object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
directional_bag_of_words_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer],
vectorize=True,
verbose=True)
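# Part 3: how many feature names does the fitted vectorizer have? A sketch,
# assuming `rel_ext.experiment` returns the fitted DictVectorizer under the
# 'vectorizer' key (the other returned objects above are keyed the same way);
# `get_feature_names_out()` would also work on newer sklearn versions.
print(len(directional_bag_of_words_results['vectorizer'].feature_names_))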
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for tag_bigram in get_tag_bigrams(get_tags(ex.middle_POS)):
feature_counter[tag_bigram] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for tag_bigram in get_tag_bigrams(get_tags(ex.middle_POS)):
feature_counter[tag_bigram] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
bigrams = []
padded_s = [start_symbol, *s, end_symbol]
for i in range(len(padded_s) - 1):
bigrams.append(' '.join(padded_s[i:i+2]))
return bigrams
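# Quick sanity check using the tag sequence from the prompt's example
# "The/DT dog/N napped/V"; with start/end symbols added we expect
# ['<s> DT', 'DT N', 'N V', 'V </s>'].
assert get_tag_bigrams(['DT', 'N', 'V']) == ['<s> DT', 'DT N', 'N V', 'V </s>']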
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
middle_bigram_pos_tag_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
vectorize=True,
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for syn in get_synsets(ex.middle_POS):
feature_counter[syn] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for syn in get_synsets(ex.middle_POS):
feature_counter[syn] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
result = []
for (word, tag) in wt:
synsets = wn.synsets(word, pos=convert_tag(tag))
for syn in synsets:
result.append(str(syn))
return result
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
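# Small usage sketch: for the lemma "is/VBZ", `convert_tag` maps 'VBZ' to 'v',
# and WordNet's verb synsets for "is" include Synset('be.v.01') and
# Synset('embody.v.02'), the two keys checked by `test_synset_featurizer` below.
# Printed here only as a sanity check.
print(get_synsets("is/VBZ")[:3])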
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
synset_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
vectorize=True,
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# 1) My approach combines the directional bag-of-words featurizer with the synset featurizer and applies an MLP classifier.
# 2) The code
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
def custom_model_factory():
clf = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(64, 32), random_state=1, verbose=True, max_iter=100)
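    # Note: with solver='lbfgs' and max_iter=100 the fit may stop before
    # converging (see the ConvergenceWarnings in the output below); raising
    # max_iter, or the sparse-safe StandardScaler(with_mean=False) pipeline
    # commented out just below, are standard remedies.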
#pipe = make_pipeline(StandardScaler(with_mean=False), clf)
#return pipe
return clf
def custom_system():
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer, synset_featurizer],
model_factory=custom_model_factory,
vectorize=True,
verbose=True)
return results
# My peak score was: 0.625
if 'IS_GRADESCOPE_ENV' not in os.environ:
custom_system()
# STOP COMMENT: Please do not remove this comment.
###Output
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
/Users/david/anaconda3/lib/python3.6/site-packages/sklearn/neural_network/_multilayer_perceptron.py:471: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter)
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
bakeoff_splits = dataset.build_splits(
split_names=['train', 'dev'],
split_fracs=[0.95, 0.05],
seed=1)
bakeoff_results = rel_ext.experiment(
bakeoff_splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer, synset_featurizer],
model_factory=custom_model_factory,
verbose=False) # We don't care about this eval, so skip its summary.
rel_ext_data_home_test = os.path.join(rel_ext_data_home, 'bakeoff-rel_ext-test-data')
rel_ext.bake_off_experiment(bakeoff_results, rel_ext_data_home_test)
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
0.629
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
import utils, itertools
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
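# `rel_ext.examine_model_weights(baseline_results)` in the next cell expects a
# fitted baseline, but no baseline experiment is run in this notebook. A minimal
# sketch of that run, mirroring the default setup used elsewhere in the assignment:
baseline_results = rel_ext.experiment(
    splits,
    train_split='train',
    test_split='dev',
    featurizers=featurizers,
    model_factory=model_factory,
    verbose=True)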
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.840 0.462 0.722 340 5716
author 0.858 0.440 0.721 509 5885
capital 0.586 0.179 0.403 95 5471
contains 0.655 0.416 0.588 3904 9280
film_performance 0.836 0.319 0.631 766 6142
founders 0.818 0.237 0.549 380 5756
genre 0.379 0.065 0.192 170 5546
has_sibling 0.788 0.246 0.548 499 5875
has_spouse 0.871 0.354 0.674 594 5970
is_a 0.678 0.161 0.413 497 5873
nationality 0.595 0.219 0.443 301 5677
parents 0.850 0.417 0.703 312 5688
place_of_birth 0.602 0.215 0.442 233 5609
place_of_death 0.400 0.126 0.279 159 5535
profession 0.529 0.146 0.347 247 5623
worked_at 0.700 0.260 0.523 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.687 0.266 0.511 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
model_factory = lambda: SVC(kernel='linear')
svm_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
return svm_results
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.821 0.338 0.639 340 5716
author 0.762 0.611 0.726 509 5885
capital 0.630 0.305 0.520 95 5471
contains 0.779 0.599 0.734 3904 9280
film_performance 0.747 0.612 0.715 766 6142
founders 0.722 0.445 0.642 380 5756
genre 0.537 0.212 0.411 170 5546
has_sibling 0.745 0.234 0.519 499 5875
has_spouse 0.858 0.345 0.661 594 5970
is_a 0.595 0.284 0.488 497 5873
nationality 0.557 0.196 0.407 301 5677
parents 0.842 0.596 0.778 312 5688
place_of_birth 0.510 0.215 0.400 233 5609
place_of_death 0.459 0.107 0.277 159 5535
profession 0.593 0.283 0.487 247 5623
worked_at 0.654 0.289 0.522 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.676 0.354 0.558 9248 95264
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    # Append these to the end of the keys you add/access in
    # `feature_counter` to distinguish the two orders. You'll
    # need to use exactly these strings in order to pass
    # `test_directional_bag_of_words_featurizer`.
    subject_object_suffix = "_SO"
    object_subject_suffix = "_OS"
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word + subject_object_suffix] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word + object_subject_suffix] += 1
    return feature_counter
directional_bag_of_words_results = rel_ext.experiment(
    splits,
    train_split='train',
    test_split='dev',
    featurizers=[directional_bag_of_words_featurizer],
    model_factory=model_factory,
    verbose=True)
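# Part 3 of this question asks how many feature names the vectorizer has.
# Assuming the experiment dict exposes the fitted DictVectorizer under
# 'vectorizer', its `vocabulary_` mapping has one entry per feature name:
print(len(directional_bag_of_words_results['vectorizer'].vocabulary_))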
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for bigram in get_tag_bigrams(ex.middle_POS):
            feature_counter[bigram] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for bigram in get_tag_bigrams(ex.middle_POS):
            feature_counter[bigram] += 1
    return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
tag_list = get_tags(s)
tags = [start_symbol] + tag_list + [end_symbol]
tag_bigrams = []
for idx in range(len(tags)-1):
bigrams = " ".join(tags[idx:idx+2])
tag_bigrams.append(bigrams)
return tag_bigrams
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
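# Sanity check against the example in the prompt: "The/DT dog/N napped/V"
# should yield ['<s> DT', 'DT N', 'N V', 'V </s>'] once the start and end
# symbols are added.
assert get_tag_bigrams("The/DT dog/N napped/V") == ['<s> DT', 'DT N', 'N V', 'V </s>']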
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
bigram_pos_bag_of_words_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
model_factory=model_factory,
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        # One count per (word, synset) pair in the forward middles:
        for fwd_ss in itertools.chain(*get_synsets(ex.middle_POS)):
            feature_counter[str(fwd_ss)] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        # And likewise for the reverse middles:
        for rev_ss in itertools.chain(*get_synsets(ex.middle_POS)):
            feature_counter[str(rev_ss)] += 1
    return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
synset = []
for word in wt:
synpos = convert_tag(word[1])
synset.append(wn.synsets(word[0], pos=synpos))
return synset
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
synset_featurizer_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
model_factory=model_factory,
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# IMPORT ANY MODULES BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# 1) The system combines the bigram POS-tag featurizer with the directional
#    bag-of-words featurizer under the default LogisticRegression model
#    factory (code below).
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
    my_model = rel_ext.experiment(
        splits,
        train_split='train',
        test_split='dev',
        featurizers=[middle_bigram_pos_tag_featurizer, directional_bag_of_words_featurizer],
        model_factory=model_factory,
        verbose=True)
# STOP COMMENT: Please do not remove this comment.
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.876 0.394 0.704 340 5716
author 0.858 0.639 0.802 509 5885
capital 0.750 0.253 0.538 95 5471
contains 0.842 0.672 0.801 3904 9280
film_performance 0.850 0.691 0.813 766 6142
founders 0.835 0.426 0.701 380 5756
genre 0.783 0.318 0.605 170 5546
has_sibling 0.830 0.255 0.572 499 5875
has_spouse 0.884 0.359 0.684 594 5970
is_a 0.743 0.326 0.592 497 5873
nationality 0.645 0.236 0.479 301 5677
parents 0.881 0.548 0.786 312 5688
place_of_birth 0.721 0.266 0.537 233 5609
place_of_death 0.574 0.170 0.389 159 5535
profession 0.750 0.340 0.604 247 5623
worked_at 0.818 0.298 0.606 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.790 0.387 0.638 9248 95264
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
from rel_ext import bake_off_experiment
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
from google.colab import drive
drive.mount('/content/drive')
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Fall 2020"
%cd /content/drive/MyDrive/StanfordCS224U/Code/cs224u-github
###Output
/content/drive/MyDrive/StanfordCS224U/Code/cs224u-github
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.863 0.388 0.693 340 5716
author 0.779 0.507 0.704 509 5885
capital 0.719 0.242 0.516 95 5471
contains 0.797 0.593 0.746 3904 9280
film_performance 0.765 0.561 0.713 766 6142
founders 0.788 0.411 0.666 380 5756
genre 0.569 0.171 0.388 170 5546
has_sibling 0.845 0.240 0.562 499 5875
has_spouse 0.863 0.318 0.643 594 5970
is_a 0.659 0.217 0.468 497 5873
nationality 0.592 0.203 0.428 301 5677
parents 0.841 0.526 0.751 312 5688
place_of_birth 0.653 0.202 0.451 233 5609
place_of_death 0.600 0.113 0.323 159 5535
profession 0.576 0.198 0.417 247 5623
worked_at 0.693 0.252 0.513 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.725 0.321 0.561 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.516 Córdoba
2.502 Taluks
2.488 Valais
..... .....
-1.260 joined
-1.503 who
-1.528 he
Highest and lowest feature weights for relation author:
2.415 argued
2.358 books
2.270 writer
..... .....
-3.038 1945
-3.703 17th
-4.758 dystopian
Highest and lowest feature weights for relation capital:
3.736 capital
1.893 especially
1.768 km
..... .....
-1.172 South
-1.195 Its
-2.042 Madras
Highest and lowest feature weights for relation contains:
2.733 third-largest
2.522 bordered
2.031 transferred
..... .....
-2.710 13
-2.775 Uzbekistan
-6.039 Bronx
Highest and lowest feature weights for relation film_performance:
4.273 co-starring
3.915 starring
3.841 opposite
..... .....
-1.773 comedian
-1.872 degree
-2.388 then
Highest and lowest feature weights for relation founders:
4.090 founder
4.025 founded
3.515 co-founder
..... .....
-1.316 author
-1.607 band
-1.805 novel
Highest and lowest feature weights for relation genre:
3.157 album
2.808 series
2.619 movie
..... .....
-1.684 reality
-1.736 starring
-1.863 at
Highest and lowest feature weights for relation has_sibling:
5.427 brother
4.171 sister
2.916 nephew
..... .....
-1.439 alongside
-1.623 Her
-1.703 singer-songwriter
Highest and lowest feature weights for relation has_spouse:
5.106 wife
4.529 husband
4.415 married
..... .....
-1.351 reported
-1.358 which
-1.453 philanthropist
Highest and lowest feature weights for relation is_a:
2.954 family
2.896 philosopher
2.641
..... .....
-1.466 section
-1.704 at
-3.897 widespread
Highest and lowest feature weights for relation nationality:
2.707 born
2.085 ruler
1.871 caliph
..... .....
-1.387 and
-1.544 or
-1.964 American
Highest and lowest feature weights for relation parents:
5.045 son
4.383 daughter
4.147 father
..... .....
-1.508 in
-1.548 played
-2.373 VIII
Highest and lowest feature weights for relation place_of_birth:
3.882 born
2.836 birthplace
2.431 mayor
..... .....
-1.414 or
-1.455 and
-1.587 American
Highest and lowest feature weights for relation place_of_death:
2.227 died
1.932 rebuilt
1.842 where
..... .....
-1.035 anarchist
-1.193 or
-1.249 and
Highest and lowest feature weights for relation profession:
3.178
2.607 philosopher
2.461 American
..... .....
-1.244 series
-1.369 at
-2.135 on
Highest and lowest feature weights for relation worked_at:
3.203 CEO
2.841 professor
2.838 president
..... .....
-1.294 novel
-1.545 Thiruvananthapuram
-1.836 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.841 0.435 0.709 340 5716
author 0.849 0.418 0.704 509 5885
capital 0.759 0.232 0.521 95 5471
contains 0.656 0.422 0.590 3904 9280
film_performance 0.834 0.329 0.638 766 6142
founders 0.728 0.239 0.517 380 5756
genre 0.407 0.065 0.198 170 5546
has_sibling 0.808 0.244 0.553 499 5875
has_spouse 0.859 0.348 0.664 594 5970
is_a 0.688 0.155 0.407 497 5873
nationality 0.615 0.213 0.446 301 5677
parents 0.849 0.413 0.701 312 5688
place_of_birth 0.600 0.219 0.445 233 5609
place_of_death 0.477 0.132 0.313 159 5535
profession 0.559 0.154 0.366 247 5623
worked_at 0.660 0.264 0.508 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.699 0.268 0.518 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
from sklearn.svm import SVC
def run_svm_model_factory():
##### YOUR CODE HERE
res = rel_ext.experiment(
splits,
featurizers=[glove_middle_featurizer],
vectorize=False,
verbose=True,
model_factory=(lambda: SVC(kernel='linear'))
)
return res
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
'''
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
'''
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
res = rel_ext.experiment(
splits,
featurizers=[directional_bag_of_words_featurizer],
)
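# Part 3 of this question asks for the number of feature names in the
# fitted vectorizer. A minimal sketch of one way to read it off -- this
# assumes, as the question implies, that `rel_ext.experiment` returns the
# fitted sklearn DictVectorizer under the 'vectorizer' key (its
# `feature_names_` attribute would work equally well):
print("Number of feature names:", len(res['vectorizer'].get_feature_names()))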
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
corpus.get_examples_for_entities(kbt.obj, kbt.sbj)
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for bg in get_tag_bigrams(ex.middle_POS):
feature_counter[bg] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for bg in get_tag_bigrams(ex.middle_POS):
feature_counter[bg] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
prev_tag = start_symbol
tags = get_tags(s)
res = []
for tag in tags:
res.append(prev_tag + " " + tag)
prev_tag = tag
res.append(prev_tag + " " + end_symbol)
return res
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
res = rel_ext.experiment(
splits,
featurizers=[middle_bigram_pos_tag_featurizer],
)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
import nltk
nltk.download('wordnet')
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for synset in get_synsets(ex.middle_POS):
feature_counter[str(synset)] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for synset in get_synsets(ex.middle_POS):
feature_counter[str(synset)] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
res = []
for pair in wt:
word = pair[0]
pos = convert_tag(pair[1])
res.extend(wn.synsets(word, pos=pos))
return res
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
res = rel_ext.experiment(
splits,
featurizers=[synset_featurizer],
)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
[Synset('be.v.01'), Synset('be.v.02'), Synset('be.v.03'), Synset('exist.v.01'), Synset('be.v.05'), Synset('equal.v.01'), Synset('constitute.v.01'), Synset('be.v.08'), Synset('embody.v.02'), Synset('be.v.10'), Synset('be.v.11'), Synset('be.v.12'), Synset('cost.v.01'), Synset('angstrom.n.01'), Synset('vitamin_a.n.01'), Synset('deoxyadenosine_monophosphate.n.01'), Synset('adenine.n.01'), Synset('ampere.n.02'), Synset('a.n.06'), Synset('a.n.07'), Synset('make.v.03'), Synset('create.v.02'), Synset('create.v.03'), Synset('create.v.04'), Synset('create.v.05'), Synset('produce.v.02'), Synset('by.r.01'), Synset('aside.r.06')]
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. We also ask that you report the best score your system got during development, just to help us understand how systems performed overall.
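One listed idea that the system below does not try is features based on the entity mentions themselves. A minimal sketch, purely illustrative and not part of the original submission (the name `entity_name_featurizer` is hypothetical), using only the `kbt.sbj`/`kbt.obj` strings that appear elsewhere in this notebook (e.g. `'Randall_Munroe'`):
```
def entity_name_featurizer(kbt, corpus, feature_counter):
    # Use the underscore-separated tokens of the KB entity names,
    # marked for whether they come from the subject or the object.
    for tok in kbt.sbj.split('_'):
        feature_counter['SUBJ_TOK:' + tok] += 1
    for tok in kbt.obj.split('_'):
        feature_counter['OBJ_TOK:' + tok] += 1
    return feature_counter
```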
###Code
# PLEASE MAKE SURE TO INCLUDE THE FOLLOWING BETWEEN THE START AND STOP COMMENTS:
# 1) Textual description of your system.
# 2) The code for your original system.
# 3) The score achieved by your system in place of MY_NUMBER.
# With no other changes to that line.
# You should report your score as a decimal value <=1.0
# PLEASE MAKE SURE NOT TO DELETE OR EDIT THE START AND STOP COMMENTS
# NOTE: MODULES, CODE AND DATASETS REQUIRED FOR YOUR ORIGINAL SYSTEM
# SHOULD BE ADDED BELOW THE 'IS_GRADESCOPE_ENV' CHECK CONDITION. DOING
# SO ABOVE THE CHECK MAY CAUSE THE AUTOGRADER TO FAIL.
# START COMMENT: Enter your system description in this cell.
# My peak score was: MY_NUMBER
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# STOP COMMENT: Please do not remove this comment.
!pip install transformers
import torch
from transformers import BertModel, BertTokenizer
import vsm
# Instantiate a Bert model and tokenizer based on `bert_weights_name`:
bert_weights_name = 'bert-base-uncased'
device = "cuda:0" if torch.cuda.is_available() else "cpu"
print(device)
##### YOUR CODE HERE
bert_model = BertModel.from_pretrained(bert_weights_name).to(device)
bert_tokenizer = BertTokenizer.from_pretrained(bert_weights_name)
def bert_phi(text):
# Tokenize with special tokens and return the final-layer BERT
# representations for every token, as a (seq_len, hidden_size) array.
input_ids = bert_tokenizer.encode(text, add_special_tokens=True)
X = torch.tensor([input_ids]).to(device)
with torch.no_grad():
reps = bert_model(X)
return reps.last_hidden_state.squeeze(0).detach().cpu().numpy()
def bert_classifier_phi(text):
# Use the final-layer representation of the [CLS] token (position 0):
reps = bert_phi(text)[0]
#return reps.mean(axis=0) # Another good, easy option: mean pooling.
return reps
def bert_middle_featurizer(kbt, corpus, np_func=np.mean):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
rep = bert_classifier_phi(ex.middle)
reps.append(rep)
# Fall back to a random representation of the right dimensionality if
# there are no corpus examples for this entity pair:
if len(reps) == 0:
dim = 768  # hidden size of bert-base-uncased
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
bert_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[bert_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
'''
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
'''
def directional_bigram_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys added to `feature_counter`
# so that bigrams from subject-object examples are kept distinct
# from bigrams from object-subject examples.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
prev_word = "<s>"
for word in ex.middle.split(' ') + ["</s>"]:
feature_counter[prev_word+" " +word+subject_object_suffix] += 1
prev_word = word
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
prev_word = "<s>"
for word in ex.middle.split(' ') + ["</s>"]:
feature_counter[prev_word+" " +word+object_subject_suffix] += 1
prev_word = word
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
res = rel_ext.experiment(
splits,
featurizers=[directional_bigram_featurizer],
)
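# Note: this run emits a liblinear ConvergenceWarning (visible in the
# output below). A minimal sketch of one possible remedy -- my own
# suggestion, not part of the original submission -- is a model factory
# that gives the default LogisticRegression a larger iteration budget:
higher_max_iter_factory = lambda: LogisticRegression(
    fit_intercept=True, solver='liblinear', max_iter=1000)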
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/svm/_base.py:947: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
_____no_output_____
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
_____no_output_____
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
_____no_output_____
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
_____no_output_____
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
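As an illustration of the idea of using context outside the two mentions, a minimal sketch might look like the following. This is only a sketch: it assumes the corpus examples expose the text before the first mention and after the second as `ex.left` and `ex.right` (an assumption on my part, since only `ex.middle` and `ex.middle_POS` are used elsewhere in this document):
```
def outside_context_featurizer(kbt, corpus, feature_counter):
    # Words before the first mention and after the second, marked by side.
    # NOTE: ex.left and ex.right are assumed field names here.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.left.split():
            feature_counter['LEFT:' + word] += 1
        for word in ex.right.split():
            feature_counter['RIGHT:' + word] += 1
    return feature_counter
```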
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.879 0.385 0.700 340 5716
author 0.828 0.540 0.749 509 5885
capital 0.471 0.168 0.346 95 5471
contains 0.795 0.601 0.747 3904 9280
film_performance 0.764 0.565 0.714 766 6142
founders 0.788 0.382 0.650 380 5756
genre 0.581 0.147 0.365 170 5546
has_sibling 0.855 0.248 0.575 499 5875
has_spouse 0.839 0.325 0.637 594 5970
is_a 0.699 0.219 0.486 497 5873
nationality 0.609 0.186 0.419 301 5677
parents 0.845 0.522 0.752 312 5688
place_of_birth 0.694 0.215 0.480 233 5609
place_of_death 0.472 0.107 0.281 159 5535
profession 0.610 0.190 0.423 247 5623
worked_at 0.714 0.227 0.500 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.715 0.314 0.551 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.539 Valais
2.534 Córdoba
2.325 Taluks
..... .....
-1.258 towers
-1.299 India
-1.343 Europe
Highest and lowest feature weights for relation author:
2.347 wrote
2.325 writer
2.269 books
..... .....
-2.429 1997
-3.011 controversial
-4.496 infamous
Highest and lowest feature weights for relation capital:
3.182 capital
1.765 especially
1.668 city
..... .....
-1.177 also
-1.472 ’
-1.615 includes
Highest and lowest feature weights for relation contains:
2.661 third-largest
2.341 bordered
2.177 districts
..... .....
-2.551 Henley-on-Thames
-3.147 Lancashire
-3.371 Midlands
Highest and lowest feature weights for relation film_performance:
3.836 starring
3.813 co-starring
3.334 alongside
..... .....
-1.570 tragedy
-1.648 or
-1.792 comedian
Highest and lowest feature weights for relation founders:
4.008 founded
3.804 founder
2.571 head
..... .....
-1.765 William
-1.795 writing
-1.983 Griffith
Highest and lowest feature weights for relation genre:
2.841 series
2.708 album
2.473 such
..... .....
-1.358 ;
-1.442 and
-1.874 at
Highest and lowest feature weights for relation has_sibling:
5.400 brother
3.799 sister
2.786 nephew
..... .....
-1.351 starring
-1.361 stars
-1.797 Her
Highest and lowest feature weights for relation has_spouse:
5.227 wife
4.458 husband
4.426 widow
..... .....
-1.626 Sir
-1.983 Straus
-1.983 Isidor
Highest and lowest feature weights for relation is_a:
2.725 genus
2.602 philosopher
2.579 family
..... .....
-2.781 hibiscus
-3.378 Talpidae
-3.953 cat
Highest and lowest feature weights for relation nationality:
2.612 born
1.992 ruler
1.872 caliph
..... .....
-1.552 foreign
-1.700 American
-2.082 state
Highest and lowest feature weights for relation parents:
4.630 daughter
4.446 son
4.278 father
..... .....
-1.993 succeeded
-2.277 passes
-2.797 Indian
Highest and lowest feature weights for relation place_of_birth:
3.758 born
2.886 mayor
2.352 birthplace
..... .....
-1.446 Indian
-1.467 province
-1.722 state
Highest and lowest feature weights for relation place_of_death:
2.320 died
1.894 rebuilt
1.840 assassinated
..... .....
-1.270 ”
-1.278 and
-1.361 state
Highest and lowest feature weights for relation profession:
3.083
2.606 American
2.530 philosopher
..... .....
-1.210 at
-1.249 in
-2.291 on
Highest and lowest feature weights for relation worked_at:
3.028 professor
2.936 head
2.798 CEO
..... .....
-1.691 state
-1.727 or
-1.777 family
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.863 0.462 0.735 340 5716
author 0.846 0.422 0.705 509 5885
capital 0.704 0.200 0.468 95 5471
contains 0.655 0.407 0.584 3904 9280
film_performance 0.821 0.330 0.633 766 6142
founders 0.760 0.242 0.532 380 5756
genre 0.458 0.065 0.207 170 5546
has_sibling 0.861 0.236 0.564 499 5875
has_spouse 0.863 0.350 0.668 594 5970
is_a 0.694 0.155 0.409 497 5873
nationality 0.652 0.199 0.448 301 5677
parents 0.879 0.394 0.705 312 5688
place_of_birth 0.685 0.215 0.476 233 5609
place_of_death 0.410 0.101 0.254 159 5535
profession 0.655 0.154 0.397 247 5623
worked_at 0.720 0.244 0.518 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.720 0.261 0.519 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
from sklearn.svm import SVC
def run_svm_model_factory():
##### YOUR CODE HERE
return rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=lambda : SVC(kernel='linear'))
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.748 0.332 0.599 340 5716
author 0.770 0.599 0.729 509 5885
capital 0.609 0.295 0.502 95 5471
contains 0.783 0.602 0.738 3904 9280
film_performance 0.736 0.619 0.709 766 6142
founders 0.718 0.429 0.633 380 5756
genre 0.606 0.253 0.474 170 5546
has_sibling 0.756 0.248 0.537 499 5875
has_spouse 0.800 0.350 0.636 594 5970
is_a 0.595 0.278 0.484 497 5873
nationality 0.538 0.186 0.391 301 5677
parents 0.797 0.580 0.742 312 5688
place_of_birth 0.650 0.223 0.470 233 5609
place_of_death 0.475 0.119 0.298 159 5535
profession 0.521 0.255 0.431 247 5623
worked_at 0.667 0.298 0.534 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.673 0.354 0.557 9248 95264
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
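For question 3 above, the feature count can be read directly off the fitted vectorizer returned by the experiment (an sklearn `DictVectorizer` here). A tiny self-contained illustration with made-up feature dictionaries, not the actual rel_ext vectorizer:
```
# Toy illustration of counting the features of a fitted sklearn DictVectorizer;
# the feature dicts below are invented and unrelated to the corpus.
from sklearn.feature_extraction import DictVectorizer

vec = DictVectorizer()
vec.fit([{'son_SO': 2, 'wife_OS': 1}, {'born_SO': 1}])
print(len(vec.vocabulary_))  # 3 feature names
# On recent scikit-learn versions, equivalently:
# print(len(vec.get_feature_names_out()))
```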
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
directional_bow_results = rel_ext.experiment(
splits,
featurizers=[directional_bag_of_words_featurizer],
)
print("Number of features: ", len(directional_bow_results['vectorizer'].vocabulary_))
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for pos_bigram in get_tag_bigrams(ex.middle_POS):
feature_counter[pos_bigram] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for pos_bigram in get_tag_bigrams(ex.middle_POS):
feature_counter[pos_bigram] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
tags = get_tags(s)
return list(map(lambda x: x[0] + " " + x[1], zip([start_symbol]+tags, tags+[end_symbol])))
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
pos_tag_results = rel_ext.experiment(
splits,
featurizers=[middle_bigram_pos_tag_featurizer]
)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
import nltk
nltk.download('wordnet')
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for synset in get_synsets(ex.middle_POS):
feature_counter[synset] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for synset in get_synsets(ex.middle_POS):
feature_counter[synset] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
synsets = []
for word, tag in wt:
for synset in wn.synsets(word, pos=convert_tag(tag)):
synsets.append(str(synset))
return synsets
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
synsets_results = rel_ext.experiment(
splits,
featurizers=[synset_featurizer],
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies. Enter your system description in this cell.This is my code description, attached here as requested. I have used the following features for the model:1. Directional BOW features as defined above2. Bigram of POS tags of the middle text as defined above3. Synsets of the middle words and pos tags as defined above4. Average length of the middle for both forward and backward relation (defined below)5. Directional bag of POS tags for both entity mentionsWe also tried different classifiers: SVM (linear and RBF), logistic regression, logistic regression with L1 regularization (because the features required for a particular relation type should be very sparse within the whole feature set), AdaBoost (with logistic regression as base estimator), and Random Forest, and found that the best performing model is the Random Forest classifier, giving 78.9. The logistic regression model with L2 regularization achieves a score of 67.0 and the one with L1 regularization achieves a score of 67.8. We also tried the calibrated version of RF so that the class probabilities are better estimated, and found that the model still performs well, with only a 0.9 point drop in performance.Note: We also experimented with bigram features but did not find enough improvement to justify the added complexity and extra features.My peak score was: 0.789
###Code
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV
def middle_len_featurizer(kbt, corpus, feature_counter):
middle_len = 0
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
middle_len += len(ex.middle)
feature_counter["middle_len_forward"] = 0
if(len(corpus.get_examples_for_entities(kbt.sbj, kbt.obj)) > 0):
feature_counter["middle_len_forward"] = middle_len/len(corpus.get_examples_for_entities(kbt.sbj, kbt.obj))
middle_len = 0
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
middle_len += len(ex.middle)
feature_counter["middle_len_backward"] = 0
if(len(corpus.get_examples_for_entities(kbt.obj, kbt.sbj)) > 0):
feature_counter["middle_len_backward"] = middle_len/len(corpus.get_examples_for_entities(kbt.obj, kbt.sbj))
return feature_counter
def entity_pos_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for pos in get_tags(ex.mention_1_POS):
feature_counter[pos+"_1_SO"] += 1
for pos in get_tags(ex.mention_2_POS):
feature_counter[pos+"_2_SO"] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for pos in get_tags(ex.mention_1_POS):
feature_counter[pos+"_1_OS"] += 1
for pos in get_tags(ex.mention_2_POS):
feature_counter[pos+"_2_OS"] += 1
return feature_counter
def directional_bag_of_words_bigrams_featurizer(kbt, corpus, feature_counter):
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
words = list(map(lambda x: x+subject_object_suffix, ex.middle.split(' ')))
START_TOKEN = "<START>"
END_TOKEN = "<END>"
for bigram in zip([START_TOKEN]+words, words + [END_TOKEN]):
feature_counter[str(bigram)] += 1
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
words = list(map(lambda x: x+object_subject_suffix, ex.middle.split(' ')))
START_TOKEN = "<START>"
END_TOKEN = "<END>"
for bigram in zip([START_TOKEN]+words, words + [END_TOKEN]):
feature_counter[str(bigram)] += 1
return feature_counter
def find_best_model_factory():
logistic = lambda: LogisticRegression(fit_intercept=True, solver='liblinear', random_state=42, max_iter=200)
logistic_l1 = lambda: LogisticRegression(fit_intercept=True, solver='liblinear', random_state=42, max_iter=200, penalty='l1')
rf = lambda: RandomForestClassifier(n_jobs=-1, random_state=42)
rf_calibrated = lambda: CalibratedClassifierCV(base_estimator=RandomForestClassifier(n_jobs=-1, random_state=42), method='isotonic', cv=5)
adaboost_decision = lambda: AdaBoostClassifier(random_state=42)
adaboost_linear = lambda: AdaBoostClassifier(base_estimator=LogisticRegression(fit_intercept=True, solver='liblinear', random_state=42, max_iter=200), random_state=42)
svm_linear = lambda: SVC(kernel='linear')
svm = lambda: SVC()
models = {}
featurizers = [synset_featurizer, middle_bigram_pos_tag_featurizer,
directional_bag_of_words_featurizer, entity_pos_featurizer, middle_len_featurizer]
best_original_system = None
best_score = 0
best_model = None
for model_factory in [logistic, logistic_l1, rf, rf_calibrated, adaboost_decision, adaboost_linear, svm_linear, svm]:
print(model_factory())
original_system_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
models[model_factory()] = original_system_results
score = original_system_results['score']
if(score > best_score):
best_score = score
best_original_system = original_system_results
best_model = model_factory()
print(best_score, best_model)
return best_score, best_model, best_original_system, models
if 'IS_GRADESCOPE_ENV' not in os.environ:
best_score, best_model, best_original_system, models = find_best_model_factory()
# Please do not remove this comment.
# Examine feature importances for RF classifier
def examine_model_weights(train_result, k=3, verbose=True):
vectorizer = train_result['vectorizer']
if vectorizer is None:
print("Model weights can be examined only if the featurizers "
"are based in dicts (i.e., if `vectorize=True`).")
return
feature_names = vectorizer.get_feature_names()
for rel, model in train_result['models'].items():
print('Highest and lowest feature weights for relation {}:\n'.format(rel))
try:
coefs = model.feature_importances_.toarray()
except AttributeError:
coefs = model.feature_importances_
sorted_weights = sorted([(wgt, idx) for idx, wgt in enumerate(coefs)], reverse=True)
for wgt, idx in sorted_weights[:k]:
print('{:10.3f} {}'.format(wgt, feature_names[idx]))
print('{:>10s} {}'.format('.....', '.....'))
for wgt, idx in sorted_weights[-k:]:
print('{:10.3f} {}'.format(wgt, feature_names[idx]))
print()
if 'IS_GRADESCOPE_ENV' not in os.environ:
examine_model_weights(best_original_system)
def find_new_relation_instances(
splits,
trained_model,
train_split='train',
test_split='dev',
k=10,
vectorize=True,
verbose=True):
train_result = trained_model
test_split = splits[test_split]
neg_o, neg_y = test_split.build_dataset(
include_positive=False,
sampling_rate=0.1)
neg_X, _ = test_split.featurize(
neg_o,
featurizers=train_result['featurizers'],
vectorizer=train_result['vectorizer'],
vectorize=vectorize)
# Report highest confidence predictions:
for rel, model in train_result['models'].items():
print('Highest probability examples for relation {}:\n'.format(rel))
probs = model.predict_proba(neg_X[rel])
probs = [prob[1] for prob in probs] # probability for class True
sorted_probs = sorted([(p, idx) for idx, p in enumerate(probs)], reverse=True)
for p, idx in sorted_probs[:k]:
print('{:10.3f} {}'.format(p, neg_o[rel][idx]))
print()
if 'IS_GRADESCOPE_ENV' not in os.environ:
find_new_relation_instances(splits, best_original_system, k=10)
###Output
Highest probability examples for relation adjoins:
0.912 KBTriple(rel='adjoins', sbj='Kolhapur', obj='Sangli')
0.740 KBTriple(rel='adjoins', sbj='Bandung', obj='Indonesia')
0.720 KBTriple(rel='adjoins', sbj='Caribbean', obj='South_America')
0.690 KBTriple(rel='adjoins', sbj='El_Salvador', obj='Central_America')
0.675 KBTriple(rel='adjoins', sbj='Shanghai', obj='Tianjin')
0.675 KBTriple(rel='adjoins', sbj='Cambridge', obj='Oxford')
0.675 KBTriple(rel='adjoins', sbj='Lahore', obj='Karachi')
0.660 KBTriple(rel='adjoins', sbj='Microsoft_Windows', obj='Linux')
0.650 KBTriple(rel='adjoins', sbj='Homer', obj='Iliad')
0.650 KBTriple(rel='adjoins', sbj='Caribbean', obj='North_America')
Highest probability examples for relation author:
1.000 KBTriple(rel='author', sbj='The_Man_with_the_Golden_Arm', obj='Otto_Preminger')
0.992 KBTriple(rel='author', sbj='Francis_of_Assisi', obj='Nikos_Kazantzakis')
0.990 KBTriple(rel='author', sbj='Anatomy_of_a_Murder', obj='Otto_Preminger')
0.940 KBTriple(rel='author', sbj='The_Last_Wave', obj='Judy_Davis')
0.900 KBTriple(rel='author', sbj='The_Playboy_Club', obj='Boardwalk_Empire')
0.900 KBTriple(rel='author', sbj='The_Rivals', obj='Richard_Brinsley_Sheridan')
0.875 KBTriple(rel='author', sbj='The_Drew_Carey_Show', obj='Drew_Carey')
0.873 KBTriple(rel='author', sbj='The_Brothers_Bloom', obj='Rian_Johnson')
0.850 KBTriple(rel='author', sbj='The_Quest_of_the_Historical_Jesus', obj='Albert_Schweitzer')
0.850 KBTriple(rel='author', sbj="A_People's_History_of_the_United_States", obj='Howard_Zinn')
Highest probability examples for relation capital:
0.850 KBTriple(rel='capital', sbj='Germany', obj='Kiel')
0.800 KBTriple(rel='capital', sbj='Brandenburg', obj='Potsdam')
0.800 KBTriple(rel='capital', sbj='Massachusetts', obj='Cambridge')
0.770 KBTriple(rel='capital', sbj='India', obj='Lucknow')
0.700 KBTriple(rel='capital', sbj='Edmonton', obj='Alberta')
0.664 KBTriple(rel='capital', sbj='East_Germany', obj='Dresden')
0.660 KBTriple(rel='capital', sbj='Germany', obj='Dresden')
0.640 KBTriple(rel='capital', sbj='Adelaide', obj='South_Australia')
0.636 KBTriple(rel='capital', sbj='Lolei', obj='Preah_Ko')
0.630 KBTriple(rel='capital', sbj='Haiti', obj='Folklore')
Highest probability examples for relation contains:
1.000 KBTriple(rel='contains', sbj='Hari_River,_Afghanistan', obj='Minaret_of_Jam')
1.000 KBTriple(rel='contains', sbj='Germany', obj='Kiel')
1.000 KBTriple(rel='contains', sbj='Denmark', obj='Roskilde')
1.000 KBTriple(rel='contains', sbj='Haiti', obj='Folklore')
1.000 KBTriple(rel='contains', sbj='Punjab_region', obj='Multan')
1.000 KBTriple(rel='contains', sbj='Brazil', obj='Jesus_Christ')
1.000 KBTriple(rel='contains', sbj='East_Germany', obj='Dresden')
1.000 KBTriple(rel='contains', sbj='New_York_City', obj='Triangle_Shirtwaist_Factory_fire')
1.000 KBTriple(rel='contains', sbj='California', obj='Santa_Monica_Mountains')
1.000 KBTriple(rel='contains', sbj='Monaco', obj='Wallonia')
Highest probability examples for relation film_performance:
0.998 KBTriple(rel='film_performance', sbj='Museum_Mile,_New_York_City', obj='New_York_City')
0.930 KBTriple(rel='film_performance', sbj='Andrzej_Bartkowiak', obj='Cradle_2_the_Grave')
0.930 KBTriple(rel='film_performance', sbj='Jane_Fonda', obj='Georgia_Rule')
0.870 KBTriple(rel='film_performance', sbj='John_Lithgow', obj='How_I_Met_Your_Mother')
0.870 KBTriple(rel='film_performance', sbj='Michel_Gondry', obj='The_Science_of_Sleep')
0.860 KBTriple(rel='film_performance', sbj='Anthony_Perkins', obj='Drama')
0.856 KBTriple(rel='film_performance', sbj='Drew_Carey', obj='The_Drew_Carey_Show')
0.830 KBTriple(rel='film_performance', sbj='Otto_Preminger', obj='River_of_No_Return')
0.820 KBTriple(rel='film_performance', sbj='Florence_Henderson', obj='The_Brady_Bunch')
0.810 KBTriple(rel='film_performance', sbj='John_Travolta', obj='Saturday_Night_Fever')
Highest probability examples for relation founders:
0.802 KBTriple(rel='founders', sbj='Vienna_General_Hospital', obj='Hans_Asperger')
0.780 KBTriple(rel='founders', sbj='Gary_Becker', obj='University_of_Chicago')
0.753 KBTriple(rel='founders', sbj='Nazi_Party', obj='Joseph_Goebbels')
0.730 KBTriple(rel='founders', sbj='Seymour_Stein', obj='Sire_Records')
0.710 KBTriple(rel='founders', sbj='Debre_Berhan', obj='Shewa')
0.690 KBTriple(rel='founders', sbj='New_York_City', obj='Hilly_Kristal')
0.690 KBTriple(rel='founders', sbj='Stan_Lee', obj='Marvel_Comics')
0.671 KBTriple(rel='founders', sbj='Grover_Cleveland', obj='New_York')
0.671 KBTriple(rel='founders', sbj='Strega_Nona', obj='Tomie_dePaola')
0.640 KBTriple(rel='founders', sbj='Philadelphia_Phillies', obj='Roy_Halladay')
Highest probability examples for relation genre:
1.000 KBTriple(rel='genre', sbj='Sea_Hunt', obj='Television')
0.760 KBTriple(rel='genre', sbj='Westwood_Studios', obj='Real-time_strategy')
0.750 KBTriple(rel='genre', sbj='Manimal', obj='NBC')
0.720 KBTriple(rel='genre', sbj='Deadliest_Warrior', obj='Television')
0.710 KBTriple(rel='genre', sbj='Peter_Potamus', obj='Hanna-Barbera')
0.680 KBTriple(rel='genre', sbj='Video_game_industry', obj='Television')
0.670 KBTriple(rel='genre', sbj='Golf_course', obj='Swimming_pool')
0.660 KBTriple(rel='genre', sbj='Cheers', obj='Television')
0.660 KBTriple(rel='genre', sbj='Bubblegum_Crisis', obj='Anime')
0.650 KBTriple(rel='genre', sbj='Veronica_Mars', obj='UPN')
Highest probability examples for relation has_sibling:
0.940 KBTriple(rel='has_sibling', sbj='Zarina_Wahab', obj='Jimmy_Shergill')
0.890 KBTriple(rel='has_sibling', sbj='Thomas_Bangalter', obj='Guy-Manuel_de_Homem-Christo')
0.830 KBTriple(rel='has_sibling', sbj='County_Clare', obj='County_Tyrone')
0.820 KBTriple(rel='has_sibling', sbj='Bob_Crosby', obj='Humphrey_Bogart')
0.820 KBTriple(rel='has_sibling', sbj='Alfred_Iverson,_Jr.', obj='Lafayette_McLaws')
0.780 KBTriple(rel='has_sibling', sbj='New_York', obj='San_Francisco')
0.779 KBTriple(rel='has_sibling', sbj='Shaanxi', obj='Gansu')
0.756 KBTriple(rel='has_sibling', sbj='Ivan_Boesky', obj='Michael_Milken')
0.750 KBTriple(rel='has_sibling', sbj='San_Diego_County', obj='California')
0.710 KBTriple(rel='has_sibling', sbj='Alexandra_Richards', obj='Keith_Richards')
Highest probability examples for relation has_spouse:
0.975 KBTriple(rel='has_spouse', sbj='Ivan_Boesky', obj='Michael_Milken')
0.928 KBTriple(rel='has_spouse', sbj='Paulo_Coelho', obj='Mark_Cuban')
0.920 KBTriple(rel='has_spouse', sbj='Tom_Baker', obj='Jon_Pertwee')
0.912 KBTriple(rel='has_spouse', sbj='Barbara_Jordan', obj='Hillary_Rodham_Clinton')
0.868 KBTriple(rel='has_spouse', sbj='Robert_Rodriguez', obj='Quentin_Tarantino')
0.822 KBTriple(rel='has_spouse', sbj='Sean_Penn', obj='Billy_Bob_Thornton')
0.800 KBTriple(rel='has_spouse', sbj='Freddie_Mac', obj='Fannie_Mae')
0.740 KBTriple(rel='has_spouse', sbj='Michael_Ontkean', obj='Lindsay_Crouse')
0.734 KBTriple(rel='has_spouse', sbj='Emily_Osment', obj='Mitchel_Musso')
0.701 KBTriple(rel='has_spouse', sbj='Theodore_Roosevelt', obj='William_Howard_Taft')
Highest probability examples for relation is_a:
0.990 KBTriple(rel='is_a', sbj='Terence_Winter', obj='Screenwriter')
0.980 KBTriple(rel='is_a', sbj='Serinus', obj='Bird')
0.870 KBTriple(rel='is_a', sbj='Golf_course', obj='Swimming_pool')
0.858 KBTriple(rel='is_a', sbj='Gary', obj='Snail')
0.850 KBTriple(rel='is_a', sbj='Bird', obj='Rallidae')
0.815 KBTriple(rel='is_a', sbj='Rise_and_Fall_of_the_City_of_Mahagonny', obj='Operetta')
0.801 KBTriple(rel='is_a', sbj='Application_software', obj='Web_development')
0.773 KBTriple(rel='is_a', sbj='Science_fiction', obj='Cyberpunk')
0.750 KBTriple(rel='is_a', sbj='Homs', obj='Daraa')
0.750 KBTriple(rel='is_a', sbj='Mass_media', obj='Entertainment')
Highest probability examples for relation nationality:
0.965 KBTriple(rel='nationality', sbj='Richard_Wagner', obj='Lohengrin')
0.923 KBTriple(rel='nationality', sbj='The_Championship_Course', obj='Putney')
0.918 KBTriple(rel='nationality', sbj='Dwight_Clark', obj='San_Francisco_49ers')
0.918 KBTriple(rel='nationality', sbj='Ed_Belfour', obj='Toronto_Maple_Leafs')
0.918 KBTriple(rel='nationality', sbj='Pau_Gasol', obj='Los_Angeles_Lakers')
0.914 KBTriple(rel='nationality', sbj='Jim_McDermott', obj='Washington')
0.840 KBTriple(rel='nationality', sbj='Family_Radio', obj='California')
0.840 KBTriple(rel='nationality', sbj='Hans_Asperger', obj='Vienna_General_Hospital')
0.820 KBTriple(rel='nationality', sbj='Ellen_DeGeneres', obj='American_Idol')
0.785 KBTriple(rel='nationality', sbj='William_Pitt_the_Younger', obj='Kingdom_of_Great_Britain')
Highest probability examples for relation parents:
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baselines](Baselines) 1. [Hand-build feature functions](Hand-build-feature-functions) 1. [Distributed representations](Distributed-representations)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import numpy as np
import os
import rel_ext
from sklearn.linear_model import LogisticRegression
import utils
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baselines Hand-build feature functions
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.876 0.374 0.690 340 5716
author 0.763 0.532 0.702 509 5885
capital 0.596 0.295 0.495 95 5471
contains 0.798 0.594 0.747 3904 9280
film_performance 0.777 0.563 0.722 766 6142
founders 0.791 0.397 0.660 380 5756
genre 0.588 0.176 0.401 170 5546
has_sibling 0.867 0.234 0.563 499 5875
has_spouse 0.870 0.327 0.653 594 5970
is_a 0.679 0.217 0.477 497 5873
nationality 0.600 0.189 0.419 301 5677
parents 0.852 0.535 0.762 312 5688
place_of_birth 0.635 0.202 0.444 233 5609
place_of_death 0.500 0.107 0.288 159 5535
profession 0.560 0.190 0.403 247 5623
worked_at 0.685 0.252 0.510 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.715 0.324 0.558 9248 95264
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.555 Taluks
2.533 Córdoba
2.496 Valais
..... .....
-1.397 Lancashire
-1.423 he
-1.446 who
Highest and lowest feature weights for relation author:
2.966 books
2.775 author
2.392 by
..... .....
-2.868 Daisy
-3.534 infamous
-3.546 17th
Highest and lowest feature weights for relation capital:
3.529 capital
1.905 city
1.620 posted
..... .....
-1.097 Roman
-1.179 and
-1.778 includes
Highest and lowest feature weights for relation contains:
2.313 bordered
2.057 Turks
2.017 third-largest
..... .....
-2.688 Mile
-3.169 Lancashire
-3.814 Ceylon
Highest and lowest feature weights for relation film_performance:
4.157 starring
3.528 opposite
3.435 co-starring
..... .....
-1.863 Gandolfini
-2.008 fully
-2.025 compatible
Highest and lowest feature weights for relation founders:
3.892 founded
3.780 founder
3.588 co-founder
..... .....
-1.691 Bauhaus
-1.822 design
-2.155 band
Highest and lowest feature weights for relation genre:
2.850 series
2.573
2.566 game
..... .....
-1.444 and
-1.923 at
-2.305 follows
Highest and lowest feature weights for relation has_sibling:
5.104 brother
4.177 sister
3.049 nephew
..... .....
-1.430 alongside
-1.519 Her
-1.723 singer-songwriter
Highest and lowest feature weights for relation has_spouse:
5.276 wife
4.418 widow
4.200 husband
..... .....
-1.355 on
-1.728 grandson
-1.864 44
Highest and lowest feature weights for relation is_a:
3.274
2.625 genus
2.556 family
..... .....
-1.698 at
-1.800 emperor
-3.967 cat
Highest and lowest feature weights for relation nationality:
2.654 born
1.885 caliph
1.866 Pinky
..... .....
-1.443 or
-1.710 American
-1.831 2010
Highest and lowest feature weights for relation parents:
5.084 son
4.746 daughter
4.271 father
..... .....
-1.924 congenitally
-1.974 Jahangir
-2.904 Indian
Highest and lowest feature weights for relation place_of_birth:
3.712 born
2.803 birthplace
2.721 mayor
..... .....
-1.341 or
-1.486 and
-1.783 Indian
Highest and lowest feature weights for relation place_of_death:
2.388 died
1.972 rebuilt
1.864 executed
..... .....
-1.278 and
-1.347 Siege
-1.419 ”
Highest and lowest feature weights for relation profession:
3.803
2.402 American
2.274 feminist
..... .....
-1.358 emperor
-1.436 at
-2.145 on
Highest and lowest feature weights for relation worked_at:
3.232 professor
3.231 CEO
2.985 head
..... .....
-1.480 family
-1.567 father
-1.667 or
###Markdown
Distributed representationsThis simple baseline sums the GloVe vector representations for all of the words in the "middle" span and feeds those representations into the standard `LogisticRegression`-based `model_factory`. The crucial parameter that enables this is `vectorize=False`. This essentially says to `rel_ext.experiment` that your featurizer or your model will do the work of turning examples into vectors; in that case, `rel_ext.experiment` just organizes these representations by relation type.
###Code
GLOVE_HOME = os.path.join('data', 'glove.6B')
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def glove_middle_featurizer(kbt, corpus, np_func=np.sum):
reps = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split():
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
# A random representation of the right dimensionality if the
# example happens not to overlap with GloVe's vocabulary:
if len(reps) == 0:
dim = len(next(iter(glove_lookup.values())))
return utils.randvec(n=dim)
else:
return np_func(reps, axis=0)
glove_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[glove_middle_featurizer],
vectorize=False, # Crucial for this featurizer!
verbose=True)
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.831 0.462 0.716 340 5716
author 0.839 0.440 0.710 509 5885
capital 0.594 0.200 0.426 95 5471
contains 0.656 0.402 0.582 3904 9280
film_performance 0.806 0.326 0.623 766 6142
founders 0.798 0.229 0.533 380 5756
genre 0.412 0.041 0.147 170 5546
has_sibling 0.866 0.246 0.576 499 5875
has_spouse 0.879 0.343 0.670 594 5970
is_a 0.740 0.149 0.412 497 5873
nationality 0.750 0.179 0.458 301 5677
parents 0.915 0.413 0.736 312 5688
place_of_birth 0.676 0.215 0.473 233 5609
place_of_death 0.395 0.107 0.257 159 5535
profession 0.686 0.142 0.388 247 5623
worked_at 0.721 0.256 0.529 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.723 0.259 0.515 9248 95264
###Markdown
With the same basic code design, one can also use the PyTorch models included in the course repo, or write new ones that are better aligned with the task. For those models, it's likely that the featurizer will just return a list of tokens (or perhaps a list of lists of tokens), and the model will map those into vectors using an embedding. Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A wrapper function `run_svm_model_factory` that does the following: 1. Uses `rel_ext.experiment` with the model factory set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values. 1. Trains on the 'train' part of `splits`.1. Assesses on the `dev` part of `splits`.1. Uses `featurizers` as defined above. 1. Returns the return value of `rel_ext.experiment` for this set-up.The function `test_run_svm_model_factory` will check that your function conforms to these general specifications.
###Code
def run_svm_model_factory():
##### YOUR CODE HERE
from sklearn.svm import SVC
model_factory = lambda: SVC(kernel='linear')
print(splits)
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
return results
def test_run_svm_model_factory(run_svm_model_factory):
results = run_svm_model_factory()
assert 'featurizers' in results, \
"The return value of `run_svm_model_factory` seems not to be correct"
# Check one of the models to make sure it's an SVC:
assert 'SVC' in results['models']['adjoins'].__class__.__name__, \
"It looks like the model factor wasn't set to use an SVC."
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_run_svm_model_factory(run_svm_model_factory)
###Output
{'tiny': Corpus with 3,474 examples; KB with 445 triples, 'train': Corpus with 263,285 examples; KB with 36,191 triples, 'dev': Corpus with 64,937 examples; KB with 9,248 triples, 'all': Corpus with 331,696 examples; KB with 45,884 triples}
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.722 0.359 0.600 340 5716
author 0.745 0.613 0.714 509 5885
capital 0.667 0.295 0.532 95 5471
contains 0.785 0.602 0.740 3904 9280
film_performance 0.760 0.625 0.729 766 6142
founders 0.730 0.434 0.643 380 5756
genre 0.595 0.294 0.494 170 5546
has_sibling 0.815 0.238 0.549 499 5875
has_spouse 0.835 0.342 0.648 594 5970
is_a 0.597 0.278 0.486 497 5873
nationality 0.573 0.196 0.414 301 5677
parents 0.826 0.580 0.762 312 5688
place_of_birth 0.617 0.215 0.449 233 5609
place_of_death 0.404 0.119 0.274 159 5535
profession 0.529 0.255 0.436 247 5623
worked_at 0.590 0.298 0.493 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.675 0.359 0.560 9248 95264
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word+subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word+object_subject_suffix] += 1
#print(feature_counter)
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer],
model_factory=lambda: LogisticRegression(fit_intercept=True, solver='liblinear'),
verbose=True)
print("done")
vectorizer = results['vectorizer']
feature_names = vectorizer.get_feature_names()
feature_counts = len(feature_names)
print(feature_counts)
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_directional_bag_of_words_featurizer(corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = ['<s> DT', 'DT N', 'N V', 'V </s>']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
#print(kbt)
#print(kbt.sbj+" "+ kbt.obj)
s = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(ex.middle_POS)
s.append(ex.middle_POS)
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
#print(ex.middle_POS)
s.append(ex.middle_POS)
#print(kbt.middle())
#s = "The/DT dog/N napped/V"
if len(s)==0:
return feature_counter
for item in s:
tag_bigrams = get_tag_bigrams(item)
for pair in tag_bigrams:
feature_counter[pair] +=1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
tags = get_tags(s)
tags.insert(0, start_symbol)
tags.append(end_symbol)
tag_bigrams = []
for i in range(len(tags)-1):
pair = tags[i]+" "+tags[i+1]
tag_bigrams.append(pair)
return tag_bigrams
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
model_factory=lambda: LogisticRegression(fit_intercept=True, solver='liblinear'),
verbose=True)
print("done")
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_middle_bigram_pos_tag_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:
```
from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')
dog
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
```
This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
from nltk.corpus import wordnet as wn
def synset_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
s = []
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
#print(ex.middle_POS)
s.append(ex.middle_POS)
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
#print(ex.middle_POS)
s.append(ex.middle_POS)
if len(s)==0:
return feature_counter
for item in s:
res = get_synsets(item)
for pair in res:
feature_counter[pair] +=1
#print(feature_counter)
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
#print(s)
#print(wt)
synsets = []
for pair in wt:
pos = convert_tag(pair[1])
synlist = wn.synsets(pair[0], pos=pos)
for item in synlist:
res = str(item)
synsets.append(res)
#print(res)
return synsets
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
model_factory=lambda: LogisticRegression(fit_intercept=True, solver='liblinear'),
verbose=True)
print("done")
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_synset_featurizer(corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
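As a minimal sketch of the "length of the middle" idea from the list above (it is not part of the system described in the next cell), a featurizer with the same interface used throughout this notebook could count the tokens between the two mentions; the name `middle_length_featurizer` is invented for the illustration:
```
# A minimal sketch of a middle-length feature, assuming the featurizer
# interface used elsewhere in this notebook; not part of the submitted system.
def middle_length_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        feature_counter['middle_len_SO'] += len(ex.middle.split())
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        feature_counter['middle_len_OS'] += len(ex.middle.split())
    return feature_counter
```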
###Code
# Enter your system description in this cell.
"""
The lines below contain my thought process in building the system.
If you search for "Final system does the following:" it will lead you
to the final pipeline description
Some observations about f-score from previous cells:
baseline bow featurizer = 0.558
GloVe = 0.515
svm = 0.560
directional_uni = 0.605
middle_bigram_pos_tag_featurizer = 0.434
synset_featurizer = 0.526
I ran an experiment with directional GloVe logistic:
Glove_directional=0.547
From directional_bag_of_words & glove directional with logistic
It seems that directional features improved performance.
This makes sense because not all relations are symmetric.
So I evolved my system with GloVe to a directional_glove logistic with (sum,min,max) features (feature count 300*6=1800) concatenated
the score was: precision 0.503 recall 0.429 f-score 0.484
With GloVe unidirectional logistic (sum,min,max) features (feature count 300*3=900)
the score was: precision 0.611 recall 0.299 f-score 0.497
With GloVe directional logistic (min,max) features (feature count 300*2*2=1200)
the score was: precision 0.507 recall 0.359 f-score 0.465
Therefore after a certain point the model wasn't doing well with glove
Boosting with glove directional
the score was: precision 0.599 recall 0.339 f-score 0.511
Boosting with directional sum, min, max features resulted in
macro-average precision 0.612 recall 0.357 f-score 0.525
0.525
Boosting on terse
non-directional 0.708 0.182 0.437
directional 0.685 0.199 0.441
Boosting gave really poor results on many different combinations
Word POS as features
SVC terse non-directional macro-average 0.664 0.221 0.467
SVC terse directional macro-average 0.665 0.221 0.467
Uni Directional logistic 10K features 0.442 ? 0.726 0.199 0.453
Directional logistic macro-average 0.726 0.199 0.453
feature_counts 10713
From this it was evident that expanding features beyond a certain point is not resulting in great improvements
To make the learning more robust we need to cut down features.
In an ideal situation you expect familiar words to give the information given unseen
data with new proper nouns. So we could essentially cut out proper nouns, prevent overfitting, learn more
meaningful features, and achieve faster convergence.
Directional middle
SVC
macro-average 0.666 0.224 0.470
Logistic
macro-average 0.726 0.198 0.451
## Adding features from left & right decreases performance and results in feature explosion
Directional all i.e. left, middle, POS:
logistic
macro-average 0.621 0.211 0.425
feature_counts 77176
SVC macro-average 0.520 0.215 0.395
So I decided to stick to middle features.
I used GloVe to replace the information lost by removing the proper nouns, and used a hybrid BOW-POS-Embedding architecture
Mid Directional terse+glove features
macro-average Logistic 0.662 0.251 0.483
macro-average LinearSVC 0.616 0.297 0.500
### Final system does the following:
1) Parse in a direction-aware manner
2) Replace proper nouns and numbers with their POS tags
3) Replace proper nouns with embeddings (GloVe); this adds better semantic information
4) Other words are added as is
5) The GloVe vectors from all the proper nouns are summed and are directional
6) Use a support-vector-based classifier (LinearSVC) to get more robust separation
My peak score was: 0.500
"""
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import LinearSVC
from collections import defaultdict
import pdb
uuid = "cf444e85-cf8b-4379-8f03-915c3293f9b8" #used as feature prefix when using glove with BOW
#def glove_featurizer(kbt, corpus, np_func=[np.sum, np.min, np.max]):
def glove_featurizer(kbt, corpus, np_func=[np.sum, np.min, np.max]):
directional = True
final_reps = []
for func in np_func:
#reps_so = []
#reps_os = []
so_feature = glove_middle_featurizer(kbt, corpus, func)
final_reps.append(so_feature)
if directional:
kbt_os = rel_ext.KBTriple(rel=kbt.rel, sbj=kbt.obj, obj=kbt.sbj)
os_feature = glove_middle_featurizer(kbt_os, corpus, func)
#final_reps.append(np.concatenate((so_feature, os_feature), axis=None))
final_reps.append( os_feature)
final_rep = np.concatenate(final_reps, axis=None)
#print(final_rep.shape)
return final_rep
print("Executing experiment")
###############################################
def get_words(s):
"""Given a sequence of word/POS elements (lemmas), this function
    returns a list containing just the word elements, in order.
"""
return [parse_lem(lem)[0] for lem in s.strip().split(' ') if lem]
def get_terse_sentence(word_pos):
#print("middle_pos")
#print(middle_pos)
tags = get_tags(word_pos)
words = get_words(word_pos)
new_sent = []
#s = word_pos.split(' ')
reps = []
for i in range(len(tags)-1):
tag = tags[i]
# https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
# http://www.nltk.org/book/ch05.html
#
word = words[i]
if "NP" in tag or "CD" in tag:
tag = "#"+tag+"#"
new_sent.append(tag)
if "NP" in tag:
rep = glove_lookup.get(word)
if rep is not None:
reps.append(rep)
else:
new_sent.append(word)
if not reps:
dim = len(next(iter(glove_lookup.values())))
reps.append( utils.randvec(n=dim))
return {"sentence": new_sent, "reps":reps}
def terse_featurizer(kbt, corpus, feature_counter):
directional = True
subject_object_suffix = ""
object_subject_suffix = ""
if directional:
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
#subject_object_suffix = "so"
#object_subject_suffix = "os"
""""print("suffix")
print(subject_object_suffix)
print(object_subject_suffix)"""
#attributes_to_consider = ["left_POS","middle_POS","right_POS"]
attributes_to_consider = ["middle_POS"]
so_embeddings = []
os_embeddings = []
embeddings_dict = defaultdict(list)
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for attribute_name in attributes_to_consider:
value = getattr(ex, attribute_name)
terse_sentence = get_terse_sentence(value)
sentence = terse_sentence["sentence"]
embeddings = terse_sentence["reps"]
for embedding in embeddings:
embeddings_dict[subject_object_suffix].append(embedding)
for word in sentence:
feature_counter[word+subject_object_suffix] += 1
#feature_counter[word+subject_object_suffix] = 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for attribute_name in attributes_to_consider:
value = getattr(ex, attribute_name)
terse_sentence = get_terse_sentence(value)
sentence = terse_sentence["sentence"]
embeddings = terse_sentence["reps"]
for embedding in embeddings:
embeddings_dict[object_subject_suffix].append(embedding)
for word in sentence:
feature_counter[word+object_subject_suffix] += 1
#feature_counter[word+object_subject_suffix] = 1
np_funcs=[np.sum, np.min, np.max]
np_funcs = [np.sum]
uuid=""
for key, embeddings in embeddings_dict.items():
func_reps = []
dict_key = uuid+key
for np_func in np_funcs:
some_val = np_func(embeddings, axis=0)
func_key = dict_key+str(np_func)
for idx, val in enumerate(some_val):
mega_key = func_key+str(idx)
feature_counter[mega_key] = val
#pdb.set_trace()
return feature_counter
def model_factory_logistic():
print("LogisticRegression")
return LogisticRegression(fit_intercept=True, solver='liblinear')
def model_factory_LinearSVC():
print("model_factory_LinearSVC")
return LinearSVC(dual=False)
def model_factory_boosting():
print("model_factory_boosting")
return AdaBoostClassifier()
#model_factory_logistic = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
#model_factory_svc = lambda: SVC(kernel='linear')
#model_factory_boosting = lambda: AdaBoostClassifier()
#model_factory_boosting = lambda: AdaBoostClassifier()
#factories = [model_factory_logistic]
factories = [model_factory_LinearSVC]
#factories = [model_factory_boosting]
#directional_bag_of_words_featurizer
featurizers=[terse_featurizer]
#featurizers=[glove_featurizer]
#factories = [model_factory_logistic, model_factory_LinearSVC]
for model_factory in factories:
#print(model_factory)
print(featurizers)
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
vectorize=False,
verbose=True)
vectorizer = results['vectorizer']
if vectorizer:
feature_names = vectorizer.get_feature_names()
feature_counts = len(feature_names)
print("feature_counts " +str(feature_counts))
#print(feature_names)
print("feature_counts " +str(feature_counts))
print("Done!")
else:
print("this model used default vectorizer hence can't retrieve feature counts" )
print("Done all!")
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your code in the scope of the above conditional.
##### YOUR CODE HERE
# Just the dataset needs to be changed
"""for model_factory in factories:
#print(model_factory)
print(featurizers)
results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
vectorize=False,
verbose=True)
vectorizer = results['vectorizer']
if vectorizer:
feature_names = vectorizer.get_feature_names()
feature_counts = len(feature_names)
print("feature_counts " +str(feature_counts))
#print(feature_names)
print("feature_counts " +str(feature_counts))
print("Done!")
else:
print("this model used default vectorizer hence can't retrieve feature counts" )
print("Done all!") """
featurizers=[terse_featurizer]
print(featurizers)
model_factory = model_factory_LinearSVC
bakeoff_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=False) # We don't care about this eval, so skip its summary.
rel_ext_data_home_test = os.path.join(
rel_ext_data_home, 'bakeoff-rel_ext-test-data')
rel_ext.bake_off_experiment(bakeoff_results, rel_ext_data_home_test)
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
if 'IS_GRADESCOPE_ENV' not in os.environ:
pass
# Please enter your score in the scope of the above conditional.
##### YOUR CODE HERE
0.503
###Output
_____no_output_____
###Markdown
Homework and bake-off: Relation extraction using distant supervision
###Code
__author__ = "Bill MacCartney and Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [Baseline](Baseline)1. [Homework questions](Homework-questions) 1. [Different model factory [1 points]](Different-model-factory-[1-points]) 1. [Directional unigram features [1.5 points]](Directional-unigram-features-[1.5-points]) 1. [The part-of-speech tags of the "middle" words [1.5 points]](The-part-of-speech-tags-of-the-"middle"-words-[1.5-points]) 1. [Bag of Synsets [2 points]](Bag-of-Synsets-[2-points]) 1. [Your original system [3 points]](Your-original-system-[3-points])1. [Bake-off [1 point]](Bake-off-[1-point]) OverviewThis homework and associated bake-off are devoted to the developing really effective relation extraction systems using distant supervision. As with the previous assignments, this notebook first establishes a baseline system. The initial homework questions ask you to create additional baselines and suggest areas for innovation, and the final homework question asks you to develop an original system for you to enter into the bake-off. Set-upSee [the first notebook in this unit](rel_ext_01_task.ipynbSet-up) for set-up instructions.
###Code
import os
import rel_ext
from collections import Counter
import itertools
from sklearn.linear_model import LogisticRegression
from nltk.corpus import wordnet as wn
###Output
_____no_output_____
###Markdown
As usual, we unite our corpus and KB into a dataset, and create some splits for experimentation:
###Code
rel_ext_data_home = os.path.join('data', 'rel_ext_data')
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz'))
kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz'))
dataset = rel_ext.Dataset(corpus, kb)
###Output
_____no_output_____
###Markdown
You are not wedded to this set-up for splits. The bake-off will be conducted on a previously unseen test-set, so all of the data in `dataset` is fair game:
###Code
splits = dataset.build_splits(
split_names=['tiny', 'train', 'dev'],
split_fracs=[0.01, 0.79, 0.20],
seed=1)
splits
###Output
_____no_output_____
###Markdown
Baseline
###Code
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word] += 1
return feature_counter
featurizers = [simple_bag_of_words_featurizer]
model_factory = lambda: LogisticRegression(fit_intercept=True, solver='liblinear')
baseline_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=model_factory,
verbose=True)
###Output
/Users/colin/.pyenv/versions/3.7.4/envs/374/lib/python3.7/site-packages/sklearn/svm/base.py:929: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
###Markdown
Studying model weights might yield insights:
###Code
rel_ext.examine_model_weights(baseline_results)
###Output
Highest and lowest feature weights for relation adjoins:
2.533 Córdoba
2.482 Valais
2.448 Taluks
..... .....
-1.162 Russia
-1.192 Afghanistan
-1.429 Europe
Highest and lowest feature weights for relation author:
2.776 author
2.225 essay
2.218 by
..... .....
-2.921 17th
-5.600 dystopian
-5.911 1865
Highest and lowest feature weights for relation capital:
2.805 capital
1.860 km
1.749 posted
..... .....
-1.763 ’
-1.770 pop
-1.776 million
Highest and lowest feature weights for relation contains:
2.944 third-largest
2.328 bordered
2.267 district
..... .....
-2.242 film
-2.804 Mile
-4.263 Antrim
Highest and lowest feature weights for relation film_performance:
4.055 starring
3.748 opposite
3.354 alongside
..... .....
-2.072 Iruvar
-2.085 Tamil
-2.147 then
Highest and lowest feature weights for relation founders:
4.047 founder
3.834 founded
3.599 co-founder
..... .....
-1.585 series
-1.594 novel
-1.682 philosopher
Highest and lowest feature weights for relation genre:
2.829 album
2.737 game
2.712 series
..... .....
-1.390 ;
-1.440 and
-1.915 at
Highest and lowest feature weights for relation has_sibling:
4.914 brother
3.930 sister
2.988 nephew
..... .....
-1.569 singer-songwriter
-1.614 II
-1.737 Her
Highest and lowest feature weights for relation has_spouse:
5.228 wife
4.501 widow
4.282 husband
..... .....
-1.486 Tyndareus
-1.569 children
-1.750 4
Highest and lowest feature weights for relation is_a:
3.126
2.496 family
2.461 philosopher
..... .....
-1.812 emperor
-2.754 hibiscus
-3.772 widespread
Highest and lowest feature weights for relation nationality:
2.703 born
2.036 president
1.871 caliph
..... .....
-1.325 U.S.
-1.334 or
-1.484 American
Highest and lowest feature weights for relation parents:
4.990 son
4.276 father
4.269 daughter
..... .....
-1.473 Tyndareus
-1.620 defeated
-2.937 Indian
Highest and lowest feature weights for relation place_of_birth:
3.878 born
2.833 birthplace
2.118 mayor
..... .....
-1.369 or
-1.438 and
-1.771 Indian
Highest and lowest feature weights for relation place_of_death:
2.252 died
1.858 rebuilt
1.827 Germany
..... .....
-1.219 and
-1.258 Museum
-1.364 Siege
Highest and lowest feature weights for relation profession:
3.640
2.423 philosopher
2.278 feminist
..... .....
-1.382 at
-1.388 emperor
-2.108 on
Highest and lowest feature weights for relation worked_at:
3.379 professor
3.190 president
2.948 head
..... .....
-1.196 parent
-1.269 part
-1.647 or
###Markdown
Homework questionsPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.) Different model factory [1 points]The code in `rel_ext` makes it very easy to experiment with other classifier models: one need only redefine the `model_factory` argument. This question asks you to assess a [Support Vector Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).__To submit:__ A call to `rel_ext.experiment` training on the 'train' part of `splits` and assessing on its `dev` part, with `featurizers` as defined above in this notebook and the `model_factory` set to one based in an `SVC` with `kernel='linear'` and all other arguments left with default values.
###Code
##### YOUR CODE HERE
my_model_factory = lambda: LogisticRegression(fit_intercept=False, solver='liblinear')
new_model_results = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=featurizers,
model_factory=my_model_factory,
verbose=True)
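# Note: the prompt above asks specifically for an SVC with kernel='linear'. A minimal
# sketch of that variant is kept commented out so the LogisticRegression run above
# remains the one whose results are shown below:
# from sklearn.svm import SVC
# svc_results = rel_ext.experiment(
#     splits,
#     train_split='train',
#     test_split='dev',
#     featurizers=featurizers,
#     model_factory=lambda: SVC(kernel='linear'),
#     verbose=True)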
###Output
relation precision recall f-score support size
------------------ --------- --------- --------- --------- ---------
adjoins 0.133 0.315 0.150 340 5716
author 0.673 0.583 0.653 509 5885
capital 0.208 0.347 0.226 95 5471
contains 0.753 0.628 0.724 3904 9280
film_performance 0.671 0.646 0.666 766 6142
founders 0.530 0.466 0.516 380 5756
genre 0.447 0.347 0.423 170 5546
has_sibling 0.514 0.216 0.403 499 5875
has_spouse 0.349 0.456 0.366 594 5970
is_a 0.505 0.278 0.434 497 5873
nationality 0.416 0.306 0.388 301 5677
parents 0.738 0.532 0.685 312 5688
place_of_birth 0.367 0.236 0.330 233 5609
place_of_death 0.196 0.132 0.179 159 5535
profession 0.474 0.263 0.409 247 5623
worked_at 0.401 0.335 0.386 242 5618
------------------ --------- --------- --------- --------- ---------
macro-average 0.461 0.380 0.434 9248 95264
###Markdown
Directional unigram features [1.5 points]The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between _X and his son Y_ and _Y and his son X_. This question asks you to modify `simple_bag_of_words_featurizer` to capture these differences. __To submit:__1. A feature function `directional_bag_of_words_featurizer` that is just like `simple_bag_of_words_featurizer` except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The included function `test_directional_bag_of_words_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `directional_bag_of_words_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)3. `rel_ext.experiment` returns some of the core objects used in the experiment. How many feature names does the `vectorizer` have for the experiment run in the previous step? Include the code needed for getting this value. (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it!)
###Code
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
# Append these to the end of the keys you add/access in
# `feature_counter` to distinguish the two orders. You'll
# need to use exactly these strings in order to pass
# `test_directional_bag_of_words_featurizer`.
subject_object_suffix = "_SO"
object_subject_suffix = "_OS"
##### YOUR CODE HERE
for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
for word in ex.middle.split(' '):
feature_counter[word + subject_object_suffix] += 1
for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
for word in ex.middle.split(' '):
feature_counter[word + object_subject_suffix] += 1
return feature_counter
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
_ = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[directional_bag_of_words_featurizer],
model_factory=model_factory,
verbose=True)
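# Question 3 above asks for the number of feature names in the vectorizer. One way to get
# the value (a sketch: assign the experiment's return value instead of discarding it as `_`
# above, then read its 'vectorizer' entry as done in the original-system section of this file):
# results = rel_ext.experiment(
#     splits,
#     train_split='train',
#     test_split='dev',
#     featurizers=[directional_bag_of_words_featurizer],
#     model_factory=model_factory,
#     verbose=False)
# print(len(results['vectorizer'].get_feature_names()))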
def test_directional_bag_of_words_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['is_OS'] += 5
feature_counter = directional_bag_of_words_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'is_OS':6,'a_OS':1,'webcomic_OS':1,'created_OS':1,'by_OS':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
test_directional_bag_of_words_featurizer(splits['all'].corpus)
###Output
_____no_output_____
###Markdown
The part-of-speech tags of the "middle" words [1.5 points]Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on `middle_POS`.__To submit:__1. A feature function `middle_bigram_pos_tag_featurizer` that is just like `simple_bag_of_words_featurizer` except that it creates a feature for bigram POS sequences. For example, given `The/DT dog/N napped/V` we obtain the list of bigram POS sequences `b = [' DT', 'DT N', 'N V', 'V ']`. Of course, `middle_bigram_pos_tag_featurizer` should return count dictionaries defined in terms of such bigram POS lists, on the model of `simple_bag_of_words_featurizer`. Don't forget the start and end tags, to model those environments properly! The included function `test_middle_bigram_pos_tag_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `middle_bigram_pos_tag_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment` as exemplified above in this notebook.)
###Code
def get_all_examples(corpus, kbt):
return itertools.chain(
corpus.get_examples_for_entities(kbt.sbj, kbt.obj),
corpus.get_examples_for_entities(kbt.obj, kbt.sbj)
)
def middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter):
##### YOUR CODE HERE
for ex in get_all_examples(corpus, kbt):
for bigram in get_tag_bigrams(ex.middle_POS):
feature_counter[bigram] += 1
return feature_counter
def get_tag_bigrams(s):
"""Suggested helper method for `middle_bigram_pos_tag_featurizer`.
This should be defined so that it returns a list of str, where each
element is a POS bigram."""
# The values of `start_symbol` and `end_symbol` are defined
# here so that you can use `test_middle_bigram_pos_tag_featurizer`.
start_symbol = "<s>"
end_symbol = "</s>"
##### YOUR CODE HERE
pos_tags = [start_symbol] + get_tags(s) + [end_symbol]
return [' '.join(bigram) for bigram in ngrams(pos_tags, 2)]
def get_tags(s):
"""Given a sequence of word/POS elements (lemmas), this function
returns a list containing just the POS elements, in order.
"""
return [parse_lem(lem)[1] for lem in s.strip().split(' ') if lem]
def parse_lem(lem):
"""Helper method for parsing word/POS elements. It just splits
on the rightmost / and returns (word, POS) as a tuple of str."""
return lem.strip().rsplit('/', 1)
def ngrams(tokens, n):
for i in range(0, len(tokens) - n + 1):
yield tokens[i:i+n]
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
_ = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[middle_bigram_pos_tag_featurizer],
model_factory=model_factory,
verbose=True)
def test_middle_bigram_pos_tag_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter['<s> VBZ'] += 5
feature_counter = middle_bigram_pos_tag_featurizer(kbt, corpus, feature_counter)
expected = defaultdict(
int, {'<s> VBZ':6,'VBZ DT':1,'DT JJ':1,'JJ VBN':1,'VBN IN':1,'IN </s>':1})
assert feature_counter == expected, \
"Expected:\n{}\nGot:\n{}".format(expected, feature_counter)
test_middle_bigram_pos_tag_featurizer(splits['all'].corpus)
###Output
_____no_output_____
###Markdown
Bag of Synsets [2 points]The following allows you to use NLTK's WordNet API to get the synsets compatible with _dog_ as used as a noun:```from nltk.corpus import wordnet as wndog = wn.synsets('dog', pos='n')dog[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]```This question asks you to create synset-based features from the word/tag pairs in `middle_POS`.__To submit:__1. A feature function `synset_featurizer` that is just like `simple_bag_of_words_featurizer` except that it returns a list of synsets derived from `middle_POS`. Stringify these objects with `str` so that they can be `dict` keys. Use `convert_tag` (included below) to convert tags to `pos` arguments usable by `wn.synsets`. The included function `test_synset_featurizer` should help verify that you've done this correctly.2. A call to `rel_ext.experiment` with `synset_featurizer` as the only featurizer. (Aside from this, use all the default values for `rel_ext.experiment`.)
###Code
def synset_featurizer(kbt, corpus, feature_counter):
for example in get_all_examples(corpus, kbt):
for synset in get_synsets(example.middle_POS):
feature_counter[synset] += 1
return feature_counter
def get_synsets(s):
"""Suggested helper method for `synset_featurizer`. This should
be completed so that it returns a list of stringified Synsets
associated with elements of `s`.
"""
# Use `parse_lem` from the previous question to get a list of
# (word, POS) pairs. Remember to convert the POS strings.
wt = [parse_lem(lem) for lem in s.strip().split(' ') if lem]
##### YOUR CODE HERE
synset_names = []
for (token, pos_tag) in wt:
wn_pos_tag = convert_tag(pos_tag)
for synset in wn.synsets(token, wn_pos_tag):
synset_names.append(repr(synset))
return synset_names
def convert_tag(t):
"""Converts tags so that they can be used by WordNet:
| Tag begins with | WordNet tag |
|-----------------|-------------|
| `N` | `n` |
| `V` | `v` |
| `J` | `a` |
| `R` | `r` |
| Otherwise | `None` |
"""
if t[0].lower() in {'n', 'v', 'r'}:
return t[0].lower()
    elif t[0].lower() == 'j':
return 'a'
else:
return None
# Call to `rel_ext.experiment`:
##### YOUR CODE HERE
_ = rel_ext.experiment(
splits,
train_split='train',
test_split='dev',
featurizers=[synset_featurizer],
model_factory=model_factory,
verbose=True)
def test_synset_featurizer(corpus):
from collections import defaultdict
kbt = rel_ext.KBTriple(rel='worked_at', sbj='Randall_Munroe', obj='xkcd')
feature_counter = defaultdict(int)
# Make sure `feature_counter` is being updated, not reinitialized:
feature_counter["Synset('be.v.01')"] += 5
feature_counter = synset_featurizer(kbt, corpus, feature_counter)
# The full return values for this tend to be long, so we just
# test a few examples to avoid cluttering up this notebook.
test_cases = {
"Synset('be.v.01')": 6,
"Synset('embody.v.02')": 1
}
for ss, expected in test_cases.items():
result = feature_counter[ss]
assert result == expected, \
"Incorrect count for {}: Expected {}; Got {}".format(ss, expected, result)
test_synset_featurizer(splits['all'].corpus)
###Output
_____no_output_____
###Markdown
Your original system [3 points]There are many options, and this could easily grow into a project. Here are a few ideas:- Try out different classifier models, from `sklearn` and elsewhere.- Add a feature that indicates the length of the middle.- Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).- Introduce features based on the entity mentions themselves. - Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.- Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The [NLTK](https://www.nltk.org/) toolkit contains a variety of [parsing algorithms](http://www.nltk.org/api/nltk.parse.html) that may help.- The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as [GloVe](https://nlp.stanford.edu/projects/glove/)?In the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.
###Code
# Enter your system description in this cell.
# Please do not remove this comment.
###Output
_____no_output_____
###Markdown
Bake-off [1 point]For the bake-off, we will release a test set. The announcement will go out on the discussion forum. You will evaluate your custom model from the previous question on these new datasets using the function `rel_ext.bake_off_experiment`. Rules:1. Only one evaluation is permitted.1. No additional system tuning is permitted once the bake-off has started.The cells below this one constitute your bake-off entry.People who enter will receive the additional homework point, and people whose systems achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.The announcement will include the details on where to submit your entry.
###Code
# Enter your bake-off assessment code in this cell.
# Please do not remove this comment.
##### YOUR CODE HERE
# On an otherwise blank line in this cell, please enter
# your macro-average f-score (an F_0.5 score) as reported
# by the code above. Please enter only a number between
# 0 and 1 inclusive. Please do not remove this comment.
###Output
_____no_output_____ |
nlp_with_python_for_ml/Exercise Files/Ch01/01_03/Start/01_03.ipynb | ###Markdown
NLP Basics: Reading in text data & why do we need to clean the text? Read in semi-structured text data
###Code
# Read in the raw text
rawData = open("SMSSpamCollection.tsv").read()
# Print the raw data
rawData[0:500]
parsedData = rawData.replace('\t', '\n').split('\n')
parsedData[:5]
labelList = parsedData[0::2]
textList = parsedData[1::2]
labelList[:5]
textList[:5]
import pandas as pd
print(len(labelList))
print(len(textList))
print(labelList[-5:])
fullCorpus = pd.DataFrame({
'label': labelList[:-1],
'body_list': textList
})
fullCorpus.head()
dataset = pd.read_csv("SMSSpamCollection.tsv", sep='\t', header=None)
dataset
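# The TSV has no header row; naming the columns (mirroring the fullCorpus frame built above)
# would make the frame easier to work with. Left commented so the raw frame is displayed:
# dataset.columns = ['label', 'body_list']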
###Output
_____no_output_____ |
COVID FINAL.ipynb | ###Markdown
Visualizing COVID-19 data for Iceland in 2020 data source The data is from the European Centre for Disease Prevention and Control (ECDC).
###Code
%matplotlib inline
import pandas
df = pandas.read_excel('s3://ia241-bullard/covid_data.xls')
df[:10]
###Output
_____no_output_____
###Markdown
Data On Iceland
###Code
iceland_data = df.loc[ df['countriesAndTerritories']=='Iceland' ]
iceland_data[:10]
###Output
_____no_output_____
###Markdown
Case count across months
###Code
cases_per_day = iceland_data.groupby('month').sum()['cases']
print(cases_per_day)
cases_per_day.plot()
cases=iceland_data.sum()['cases']
print('Iceland had {} cases in 2020.'.format(cases))
###Output
Iceland had 5557 cases in 2020.
###Markdown
Iceland had its first case in February. Two distinct rising trends are shown: the first rise was in March, peaking at 1085 cases; the second began in September with 590 cases and peaked in October with 2102. Over the course of 2020, Iceland had 5557 total cases. Data on COVID deaths
###Code
iceland_death=iceland_data.sum()['deaths']
print('Iceland had {} deaths in 2020.'.format(iceland_death))
iceland_data.plot.scatter(x='cases',y='deaths',c='month')
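# Added check of the mortality figure quoted in the note below: deaths per confirmed case
print('Deaths per case: {:.3f}'.format(iceland_death / cases))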
###Output
_____no_output_____
###Markdown
The scatter plot looks sparse because of Iceland's relatively low deaths-to-cases ratio. With only 28 deaths across 2020, Iceland had a mortality rate of about 0.005 (28 deaths out of 5557 cases). Continent Data
###Code
europe_data= df.loc[ df['continentExp']=='Europe' ]
e_cases=europe_data.sum()['cases']
e_deaths=europe_data.sum()['deaths']
print('In 2020 Europe had {} cases and {} deaths.'.format(e_cases,e_deaths))
sum_deaths_per_continent=df.groupby('continentExp').sum()['deaths']
sum_deaths_per_continent.nlargest(10).plot.bar()
###Output
_____no_output_____ |
Using music21 corpus examples-interactiveImage.ipynb | ###Markdown
music21: A Toolkit for Computer-Aided Musicology Some examples to test basic music21 corpus functionalities This is a Jupyter notebook created by [@musicenfanthen](https://github.com/musicEnfanthen) and [@aWilsonandmore](https://github.com/aWilsonandmore) to work with some basic functionalities of music21 (http://web.mit.edu/music21/). For more information on Jupyter notebooks go to http://jupyter.org/. To execute a block of code in this notebook, click in the cell and press `Shift+Enter`. To get help on any music21 routine, click on it and press `Shift+Tab`. Imports and setup To use music21 in this notebook and Python, you have to import all (\*) routines from music21 first with the following command. You’ll probably get a few warnings that you’re missing some optional modules. That’s okay. If you get a warning that “no module named music21” then something probably went wrong above.
###Code
%matplotlib inline
# imports the matplot library to plot graphs etc.
from music21 import *
###Output
_____no_output_____
###Markdown
Probably you have to set manually the correct file path to an Application that is able to open MusicXML files (like MuseScore). To do so, you can use the `music21.environment` module where you can set an `musicxmlPath` key.Make sure to change below the string `path/to/your/musicXmlApplication` with the correct file path (keep the quotation marks):- on Mac e.g.: `/Applications/MuseScore 2.app/Contents/MacOS/mscore` - or on Windows e.g.: `C:/Program Files (x86)/MuseScore 2/bin/MuseScore.exe`and uncomment the line (remove the `` at the begin of the line).In the same way, you can also add a path to your lilypond installation, using`env['lilypondPath']`:- on Mac e.g.: `Applications/Lilypond.app`- on Windows e.g.: `C:/Program Files (x86)/LilyPond/usr/bin/lilypond.exe`
###Code
# definition of environment settings is different from the settings
# when this jupyter notebook runs locally on your machine.
# Changes are necessary because jupyter notebook is running via Binder image
env = environment.Environment()
env['lilypondPath']='/usr/bin/lilypond'
env['musescoreDirectPNGPath'] = '/usr/bin/musescore'
env['musicxmlPath'] = '/usr/bin/musescore'
environment.set('pdfPath', '/usr/bin/musescore')
environment.set('graphicsPath', '/usr/bin/musescore')
print('Environment settings:')
print('musicXML: ', env['musicxmlPath'])
print('musescore: ', env['musescoreDirectPNGPath'])
print('lilypond: ', env['lilypondPath'])
###Output
_____no_output_____
###Markdown
Using jupyter notebook inside a Binder image causes some issues with music21's ".show()"-method (see: https://github.com/cuthbertLab/music21/issues/260). Thanks to Tony Hirst (@psychemedia) there is a small workaround with a redefinition of the method:
###Code
# re-definition of the show()-method ---> "HACK" from https://github.com/psychemedia/showntell/blob/music/index_music.ipynb
# see also this music21 issue: https://github.com/cuthbertLab/music21/issues/260
%load_ext music21.ipython21
from IPython.display import Image
def render(s):
s.show('lily.png')
return Image(filename=s.write('lily.png'))
###Output
_____no_output_____
###Markdown
Starting with corpus examples List of works found in the music21 corpus: http://web.mit.edu/music21/doc/about/referenceCorpus.html#demonstration-files music21's corpus module: http://web.mit.edu/music21/doc/moduleReference/moduleCorpus.html
###Code
demoPaths = corpus.getComposer('demos')
demoPaths
demoPath = demoPaths[0]
demo = corpus.parse(demoPath)
print(demo.corpusFilepath)
#demo.show()
render(demo)
###Output
_____no_output_____
###Markdown
There is a bunch of metadata bundles in the music21 corpus. You can make use of it via the corpus search:
###Code
sbBundle = corpus.search('Bach', 'composer')
print(sbBundle)
print(sbBundle[0])
print(sbBundle[0].sourcePath)
sbBundle[0].metadata.all()
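# A matching entry could also be parsed straight into a score (via the entry's parse() method);
# left commented out so this cell's output stays the metadata listing above:
# bachScore = sbBundle[0].parse()
# render(bachScore)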
###Output
_____no_output_____
###Markdown
Loading files & formats from corpus or diskIt is also possible to load and parse various file formats directly into music21. "In general, to load a file from disk, call music21.converter.parse(), which can handle importing all supported formats. (For complete documentation on file and data formats, see http://web.mit.edu/music21/doc/moduleReference/moduleConverter.html#moduleconverter)" (UsersGuide 08). The following example takes a MusicXML file as input. A list of possible file formats can be found here: http://web.mit.edu/music21/doc/usersGuide/usersGuide_08_installingMusicXML.html
###Code
s = corpus.parse('bach/bwv65.2.xml')
# s.show()
render(s)
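# For a file on your own disk (rather than the corpus), the converter module quoted in the
# markdown above can be used instead; the path below is only a placeholder, so it stays commented:
# localScore = converter.parse('path/to/your/score.xml')
# render(localScore)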
###Output
_____no_output_____
###Markdown
Score manipulationWith music21 it is pretty straightforward to manipulate different parts/voices of a score. In the following example, we take the four voice chorale setting from above and transform it into a two-part piano setting.
###Code
fVoices = stream.Part((s.parts['Soprano'], s.parts['Alto'])).chordify()
mVoices = stream.Part((s.parts['Tenor'], s.parts['Bass'])).chordify()
chorale2p = stream.Score((fVoices, mVoices))
# chorale2p.show()
render(chorale2p)
###Output
_____no_output_____
###Markdown
Or what about a more bass-with-accompaniment-style setting?
###Code
upperVoices = stream.Part((s.parts['Soprano'], s.parts['Alto'], s.parts['Tenor'])).chordify()
bass = stream.Part((s.parts['Bass']))
chorale3p = stream.Score((upperVoices, bass))
# chorale3p.show()
render(chorale3p)
###Output
_____no_output_____
###Markdown
To see how music21 stores this stream internally, use the `.show('text')`-method:
###Code
chorale3p.show('text')
for c in chorale3p.recurse().getElementsByClass('Chord'):
print(c)
###Output
_____no_output_____
###Markdown
Roman numeral analysismusic21 makes it easy to apply roman numeral analysis to chordified music. To do so, we use the `.chordify()`-method on the chorale above, grep all chords with the `getElementsByClass()`-method, bring them into closed position and apply the roman numeral as lyrics. Additionally we highlight some seventh chords:
###Code
# chordify the chorale
choraleChords = chorale3p.chordify()
for c in choraleChords.recurse().getElementsByClass('Chord'):
# force closed position
c.closedPosition(forceOctave=4, inPlace=True)
# apply roman numerals
rn = roman.romanNumeralFromChord(c, key.Key('A'))
c.addLyric(str(rn.figure))
# highlight dimished seventh chords
if c.isDiminishedSeventh():
c.style.color = 'red'
# highlight dominant seventh chords
if c.isDominantSeventh():
c.style.color = 'blue'
# choraleChords.show()
render(choraleChords)
###Output
_____no_output_____
###Markdown
Another example (plotting)
###Code
p = corpus.parse('bach/bwv846.xml')
# p.show()
render(p)
p.analyze('key')
p.show('text')
len(p.parts)
len(p.flat.notes)
###Output
_____no_output_____
###Markdown
There are some plotting possibilities that come with `matplotlib`:
###Code
graph.findPlot.FORMATS
###Output
_____no_output_____
###Markdown
To plot a stream, just use the `.plot()`-method instead of `-show()`:
###Code
p.plot('pianoroll')
p.plot('horizontalbar')
###Output
_____no_output_____ |
GoodbAI.ipynb | ###Markdown
Creating a Kobe Twitter Bot by Richard Luo *Since: August 24th, 2020* Using gpt-2-simple, I fine-tuned OpenAI's Generative Pre-trained Transformer 2 (GPT-2) by training the model on Kobe's tweets. Using TWINT, I scraped all of Kobe's tweets from when he first came onto Twitter until his famous last tweet about LeBron James.
###Code
%tensorflow_version 1.x
!pip install -q gpt-2-simple
import gpt_2_simple as gpt2
from datetime import datetime
from google.colab import files
###Output
TensorFlow 1.x selected.
Building wheel for gpt-2-simple (setup.py) ... [?25l[?25hdone
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
###Markdown
GPUVerifying Which GPU is in use - Tesla T4 is better than the P100
###Code
!nvidia-smi
###Output
Tue Aug 25 17:10:36 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.57 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 36C P8 28W / 149W | 0MiB / 11441MiB | 0% Default |
| | | ERR! |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
Downloading GPT-2There are three released sizes of GPT-2:* `124M` (default): the "small" model, 500MB on disk.* `355M`: the "medium" model, 1.5GB on disk.* `774M`: the "large" model, cannot currently be finetuned with Colaboratory but can be used to generate text from the pretrained model (see later in Notebook)* `1558M`: the "extra large", true model. Will not work if a K80 GPU is attached to the notebook. (like `774M`, it cannot be finetuned).Larger models have more knowledge, but take longer to finetune and longer to generate text.
###Code
gpt2.download_gpt2(model_name="124M")
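# The larger "medium" model described above could be fetched instead (~1.5GB on disk,
# slower to finetune); left commented since the rest of this notebook uses 124M:
# gpt2.download_gpt2(model_name="355M")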
###Output
Fetching checkpoint: 1.05Mit [00:00, 190Mit/s]
Fetching encoder.json: 1.05Mit [00:00, 62.5Mit/s]
Fetching hparams.json: 1.05Mit [00:00, 433Mit/s]
Fetching model.ckpt.data-00000-of-00001: 498Mit [00:06, 74.1Mit/s]
Fetching model.ckpt.index: 1.05Mit [00:00, 440Mit/s]
Fetching model.ckpt.meta: 1.05Mit [00:00, 113Mit/s]
Fetching vocab.bpe: 1.05Mit [00:00, 127Mit/s]
###Markdown
Mounting Google DriveThe best way to get input text to-be-trained into the Colaboratory VM, and to get the trained model *out* of Colaboratory, is to route it through Google Drive *first*.Running this cell (which will only work in Colaboratory) will mount your personal Google Drive in the VM, which later cells can use to get data in/out. (it will ask for an auth code; that auth is not saved anywhere)
###Code
gpt2.mount_gdrive()
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
Uploading a Text File to be Trained to ColaboratoryIn the Colaboratory Notebook sidebar on the left of the screen, select *Files*. From there you can upload files:Upload **any smaller text file** (<10 MB) and update the file name in the cell below, then run the cell.
###Code
file_name = "kobe.txt"
###Output
_____no_output_____
###Markdown
If your text file is larger than 10MB, it is recommended to upload that file to Google Drive first, then copy that file from Google Drive to the Colaboratory VM.
###Code
gpt2.copy_file_from_gdrive(file_name)
###Output
_____no_output_____
###Markdown
Finetune GPT-2The next cell will start the actual finetuning of GPT-2. It creates a persistent TensorFlow session which stores the training config, then runs the training for the specified number of `steps`. (to have the finetuning run indefinitely, set `steps = -1`)The model checkpoints will be saved in `/checkpoint/run1` by default. The checkpoints are saved every 500 steps (can be changed) and when the cell is stopped.The training might time out after 4ish hours; make sure you end training and save the results so you don't lose them!**IMPORTANT NOTE:** If you want to rerun this cell, **restart the VM first** (Runtime -> Restart Runtime). You will need to rerun imports but not recopy files.Other optional-but-helpful parameters for `gpt2.finetune`:* **`restore_from`**: Set to `fresh` to start training from the base GPT-2, or set to `latest` to restart training from an existing checkpoint.* **`sample_every`**: Number of steps to print example output* **`print_every`**: Number of steps to print training progress.* **`learning_rate`**: Learning rate for the training. (default `1e-4`, can lower to `1e-5` if you have <1MB input data)* **`run_name`**: subfolder within `checkpoint` to save the model. This is useful if you want to work with multiple models (will also need to specify `run_name` when loading the model)* **`overwrite`**: Set to `True` if you want to continue finetuning an existing model (w/ `restore_from='latest'`) without creating duplicate copies.
###Code
sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
dataset=file_name,
model_name='124M',
steps=1000,
restore_from='latest',
run_name='run1',
print_every=10,
sample_every=200,
save_every=500,
overwrite=True
)
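# Variant sketch: to start over from the base 124M weights on a small (<1MB) corpus, the
# options described above suggest a fresh restore and a lower learning rate, e.g.:
# gpt2.finetune(sess,
#               dataset=file_name,
#               model_name='124M',
#               steps=1000,
#               restore_from='fresh',
#               learning_rate=1e-5,
#               run_name='run1')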
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/gpt_2_simple/src/sample.py:17: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Loading checkpoint checkpoint/run1/model-706
INFO:tensorflow:Restoring parameters from checkpoint/run1/model-706
###Markdown
After the model is trained, you can copy the checkpoint folder to your own Google Drive.If you want to download it to your personal computer, it's strongly recommended you copy it there first, then download from Google Drive. The checkpoint folder is copied as a `.rar` compressed file; you can download it and uncompress it locally.
###Code
gpt2.copy_checkpoint_to_gdrive(run_name='run1')
###Output
_____no_output_____
###Markdown
You're done! Feel free to go to the **Generate Text From The Trained Model** section to generate text based on your retrained model. Load a Trained Model CheckpointRunning the next cell will copy the `.rar` checkpoint file from your Google Drive into the Colaboratory VM.
###Code
gpt2.copy_checkpoint_from_gdrive(run_name='run1')
###Output
_____no_output_____
###Markdown
The next cell will allow you to load the retrained model checkpoint + metadata necessary to generate text.**IMPORTANT NOTE:** If you want to rerun this cell, **restart the VM first** (Runtime -> Restart Runtime). You will need to rerun imports but not recopy files.
###Code
sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name='run1')
###Output
Loading checkpoint checkpoint/run1/model-848
INFO:tensorflow:Restoring parameters from checkpoint/run1/model-848
###Markdown
Generate Text From The Trained ModelAfter you've trained the model or loaded a retrained model from checkpoint, you can now generate text. `generate` generates a single text from the loaded model.
###Code
gpt2.generate(sess, run_name='run1')
###Output
@PhilJackson11 I'm thinking of writing a novel about retiring from basketball and founding a company… http://instagram.com/p/l6yTNrUVjd/
@PhilJackson11 I've been thinking about writing a novel for a while now but didnt know I was writing it lol I'm not a bad kid hey are you ready to play yet?
Thinking about writing a novelisation of #ThePunies #GameSetTellerforyou #TeamUSA #LakerNation
A #ThankYouMLK50 march is about more than just seeing shoes fitted. It's about being represented in the highest level. It's about being a part of the change. Change is coming. Watch Episode 4 here: http://es.pn/2J1XR7H
Lesson 1: Love the hate. #HeroVillain #KOBE11
I love the trash talk. #ambien #mambaout
This is why I partner with Lenovo #LegacyandtheQueen #Kobe11
#TMT: http://www.youtube.com/watch?v=t8ZtgNuZDM ” by far the craziest thing I've done! I have no idea how I walked after that #real***t
@PhilJackson11 ist really only funny 8to9 years but it will get better. I'm thinking 9.5 to 12
#Dekel#ThisIsNOW
#HeroVillain #KOBE11
Hahaha #simpleminds
@SteveBlake5 #muse
@tolanialli good luck. Enjoy life after the game #Kobe11
#tuccinamos
@tuccinamos It's a changing of the guard is a great feeling
@tuccinamos it is. @tuccinamos is where I get my inspiration
“@tuccinamos: .@KyrieIrving you're a competitor too!” I love the look of his (Irving's) new Nike shoe. Check it here: http://i.instagram.com/p/Bo9fxN6RNq/
“@RebelWilson: I've always wanted a Rosie Odom!” I love her film and would gladly take her on the show.
Rosie is a realist her class is amazing!
I've cried foul play no more! #onepaintmore #onemoregame
@RebelWilson No prob. I will check it out tomorrow #onemoregame #differentanimalsamebeast
“@ZareenAysha: What an honor to present Mark Parker w/ the Golden Boy to such an incredible generation.” #51cagevino
Welcome to the new era of sports drinks. #Switch2Natural #alexbend ⚽� http://instagram.com/p/BoNlqxNm/
Thank you @EponymousMusic for helping bring to life the word #nikevino #GoldenBoy
@kobebryant Thanks @Gatorade, @DrinkBODYARMOR will take it from here.
#nikevino means exactly what it says on the tin. Drink BODYARMOR. #vino
Drink BODYARMOR. #vino #respect
Respect. https://twitter.com/espn/status/291005426626080768/photo/1 pic.twitter.com/raajqphs ” #nikevino
@Isaiah_Thomas not cool. Just heard good things about you both :-)
Thanks @WashMystics thank you fam! You guys are truly "the" reason my brotha come back stronger. #mysticalbody #mambasrep
Don't demand equality if you are a failure. Find a way to win back the love of a chippy game with consistently changing body types #lovemygame
#GoldenBoy inspired, but honestly, what could go wrong? IKEA? #jewellnumberboliviana #musecagevino https://twitter.com/jewellnumberbolivia/status/290856086234341377 …
@jerrycferrara hell no!
Play like a champion https://twitter.com/goghadvd/status/29100542662603936/photo/1 pic.twitter.com/raajqphs
Nah. Just see how far I've come as a competitor. Not a fan, justifications for my unwillingness to play. Different teams and leagues
Not a chance
Great thing Stephane doubles team with me! Amazing goal to win back the champs momentum
#musecage https://twitter.com/goghad
###Markdown
If you're creating an API based on your model and need to pass the generated text elsewhere, you can do `text = gpt2.generate(sess, return_as_list=True)[0]`You can also pass in a `prefix` to the generate function to force the text to start with a given character sequence and generate text from there (good if you add an indicator when the text starts).You can also generate multiple texts at a time by specifing `nsamples`. Unique to GPT-2, you can pass a `batch_size` to generate multiple samples in parallel, giving a massive speedup (in Colaboratory, set a maximum of 20 for `batch_size`).Other optional-but-helpful parameters for `gpt2.generate` and friends:* **`length`**: Number of tokens to generate (default 1023, the maximum)* **`temperature`**: The higher the temperature, the crazier the text (default 0.7, recommended to keep between 0.7 and 1.0)* **`top_k`**: Limits the generated guesses to the top *k* guesses (default 0 which disables the behavior; if the generated output is super crazy, you may want to set `top_k=40`)* **`top_p`**: Nucleus sampling: limits the generated guesses to a cumulative probability. (gets good results on a dataset with `top_p=0.9`)* **`truncate`**: Truncates the input text until a given sequence, excluding that sequence (e.g. if `truncate=''`, the returned text will include everything before the first ``). It may be useful to combine this with a smaller `length` if the input texts are short.* **`include_prefix`**: If using `truncate` and `include_prefix=False`, the specified `prefix` will not be included in the returned text.
###Code
gpt2.generate(sess,
length=250,
temperature=0.7,
prefix="@KingJames",
nsamples=5,
batch_size=5
)
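# To hand generated text to other code (e.g. a bot that posts a single sample), the options
# above note that generate() can return a list instead of printing; a commented sketch:
# tweet = gpt2.generate(sess, run_name='run1', length=60, temperature=0.7,
#                       prefix="@KingJames", return_as_list=True)[0]
# print(tweet)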
###Output
@KingJames #countonkobe
Watching @TurnerSportsEJ and the crew do this #allstarchallenge is hilarious Not a good case for nba players being the best athletes! HA
#QueenMamba ladyvb24 Celebrate the one you love #myvalentine happy valentines day to all #blessings http://instagram.com/p/kbG3a8RNqV/
Happy birthday Mr Russell. Thank you for all of your wisdom and the amount of time you have taken to… http://instagram.com/p/kWHezyRNlF/
On my Nike set with the champ rsherman_25 #differentanimalSamebeast http://instagram.com/p/kQdrA8xNiv/
Major shout to @sagekotsenburg @Jme_Anderson #snowboardGold #usa #SochiStomped
Tonight, watch @BillClinton, @AllysonFelix, @TheRealMattKemp & myself discuss why kids in sports is so important. http://es.pn/KidsAndSports
What a game! #seriously
====================
@KingJames #GOAT
Catch the short film I Just Metagy feat. Demi Lovato and Dina is dancing! It’s Disney’s masterpiece and one of my favorite videos. https://youtu.be/oVK6YReKeDM ” ( http://bit.ly/60FZrJqU
🐍😎 https://twitter.com/dwyanewade/status/600149450074513408 …
Aquaman: God No More! http://www.nike.com/kobe … pic.twitter.com/Hn8ueuIQNV
The film is directed by Paul Jenkins and is represented by Weta/Poly/Annie Wojcicki and is available Nov. 17 wherever you buy or listen to books. Click the link below to pre-order! http://go90.show/YHVOI pic.twitter.com/qx9TXpyo8
I’m honored to be in the position to inspire the next generation and honored to share my vision. Thank you @theellenshow #MagiciansWipeOut pic.twitter.com
====================
@KingJames @DrinkBODYARMOR #armorUp
@7thChamberPjacs @8thPjacs @8thRE. Check out the newest addition to the @DrinkBODYARMOR family on @ESPN+ http://www.espn.com/video/clip?id=19440196 …
🙌🏾 https://twitter.com/espnW/status/119177880612771128 …
🙏🏾 https://twitter.com/espn.com/status/119168757968948096 …
When the world stops, there’s only one option - #ChooseGo #Nike pic.twitter.com/JKJsrBpbBU
Cheryl Boone Isaacs 🙌🏾 #ICONMANN https://twitter.com/kelleylcarter/status/119009775♂s @kobebryant♂d should be in the game in the first place. I think he can really play inside or outside or anywhere in between.
@SarahRobbOh got it ;)
@SarahRobb
====================
@KingJames #CountOnIt #ComingSoon https://twitter.com/CountOnIt/status/861039463377831284 …
🙏🏾 https://twitter.com/JeanieBuss/status/86103564540 Future of the Lance and Champ Athletes of the World. Join me in learning more about this great sport. #KMU
Congrats @DianaTaurasi #MambaMentality https://twitter.com/CarliLloyd/status/8610 career women's basketball coach of the year)
Congrats @Diego_Ituarte #Decima #Mexico
The answer is a healthy balance of all 👌 https://twitter.com/Diego_Ituarte/status/847280396775842097 …
As media we should hold UNDERSTANDING the psychological and physical behavior of athletes above judgment. THIS is powerful content #museon https://twitter.com/dannyyouths/status/8462757236neostate that needs to be refuted https://twitter.com/samsheffer/status/8462578676994112
====================
@KingJames I'm thinking "better not miss this chance"
“@RebelWilson: I knew I should have got a fresh hair cut for this game #countonkobe” excuse me?
“@Diehard_D: I don't think I've ever seen this ugly on the court!^_^ #countonkobe
It was a big moment for us all at #CountOnKobe day #HoF
Clutch @rachelbanham15 clutch. See you in a few!
Congratulations to my #11 Team today #WDraft11 #GOAT #luck
“@JustRu_It: @kobebryant if you guys knew each other better, what would you do?” #getem #ladder
“@FloydMayweather: @kobebryant you want to be a GM? I think you can” https://twitter.com/justuvion/status/836437011217039360 …
Haha! #fabvino #champs
Thank you all for your bday wishes #countonfans https://twitter.com/paugasol/status/
====================
###Markdown
For bulk generation, you can generate a large amount of text to a file and sort out the samples locally on your computer. The next cell will generate a generated text file with a unique timestamp.You can rerun the cells as many times as you want for even more generated texts!
###Code
gen_file = 'gpt2_gentext_{:%Y%m%d_%H%M%S}.txt'.format(datetime.utcnow())
gpt2.generate_to_file(sess,
destination_path=gen_file,
length=500,
temperature=0.7,
nsamples=100,
batch_size=20
)
# may have to run twice to get file to download
files.download(gen_file)
###Output
_____no_output_____
###Markdown
EtcIf the notebook has errors (e.g. GPU Sync Fail), force-kill the Colaboratory virtual machine and restart it with the command below:
###Code
!kill -9 -1
###Output
_____no_output_____
###Markdown
LICENSEMIT LicenseCopyright (c) 2019 Max WoolfPermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in allcopies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THESOFTWARE.
###Code
###Output
_____no_output_____ |
code/modeling/SVM.ipynb | ###Markdown
SVM for Year 2013 STEM Class
###Code
## Create a temporary data frame for Year 2013 Term J and Term B STEM class
tempDf = df[['year','term','module_domain','code_module',
'Scotland','East Anglian Region','London Region','South Region','North Western Region','West Midlands Region',
'South West Region','East Midlands Region','South East Region','Wales','Yorkshire Region','North Region',
'gender','disability','b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','final_result','highest_education','imd_band','studied_credits']]
tempDf = tempDf.loc[(tempDf.year == 0)&(tempDf.module_domain==1)]
# Show first 5 observations of the dataset
tempDf.head(5)
X=tempDf[['term','code_module','Scotland','East Anglian Region','London Region','South Region','North Western Region','West Midlands Region',
'South West Region','East Midlands Region','South East Region','Wales','Yorkshire Region','North Region',
'gender','disability','b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','highest_education','imd_band','studied_credits']]
y=tempDf['final_result']
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
sel = SelectFromModel(LinearSVC(C=0.01, penalty="l1", dual=False,max_iter=2000))
sel.fit(X_train, y_train)
selected_feat= X_train.columns[(sel.get_support())]
len(selected_feat)
print(selected_feat)
# Define our predictors
X=tempDf[['term', 'b4_sum_clicks', 'half_sum_clicks', 'std_half_score',
'date_registration', 'module_presentation_length', 'highest_education', 'imd_band',
'studied_credits']]
y=tempDf['final_result']
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
X_train.head(5)
# Standardize the features, fitting the scaler on the training set only (after the split) to avoid leakage
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.svm import LinearSVC
svc = LinearSVC(C=1e9)
svc.fit(X_train,y_train)
y_pred=svc.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred))
import seaborn as sns
from sklearn import metrics
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sns.heatmap(confusion_matrix, annot=True)
###Output
_____no_output_____
###Markdown
SVM for combined STEM and Social Science
###Code
ComtempDf = df[['year','term','code_module','module_domain','Scotland','East Anglian Region','London Region','South Region','North Western Region','West Midlands Region','South West Region','East Midlands Region','South East Region','Wales','Yorkshire Region','North Region','gender','disability','std_half_score','half_sum_clicks','b4_sum_clicks','age_band','module_presentation_length','num_of_prev_attempts','final_result','highest_education','imd_band','studied_credits','date_registration']]
ComtempDf = ComtempDf.loc[(ComtempDf.year == 0)]
# Show first 5 observations of the dataset; this is the combined dataset for the 2013 STEM and Social Science classes
ComtempDf.head(5)
ComtempDf.count()
X=ComtempDf[['term','code_module','Scotland','East Anglian Region','London Region','South Region','North Western Region','West Midlands Region',
'South West Region','East Midlands Region','South East Region','Wales','Yorkshire Region','North Region',
'gender','disability','b4_sum_clicks','half_sum_clicks','std_half_score','date_registration','age_band',
'module_presentation_length','num_of_prev_attempts','highest_education','imd_band','studied_credits']]
y=ComtempDf['final_result']
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
sel = SelectFromModel(LinearSVC(C=0.01, penalty="l1", dual=False,max_iter=2000))
sel.fit(X_train, y_train)
selected_feat= X_train.columns[(sel.get_support())]
len(selected_feat)
print(selected_feat)
# Define our predictors
X=ComtempDf[['term', 'code_module','b4_sum_clicks', 'half_sum_clicks', 'std_half_score',
          'date_registration', 'module_presentation_length', 'num_of_prev_attempts','highest_education', 'imd_band',
          'studied_credits']]
y=ComtempDf['final_result']
# Split the dataset to train and test
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25,random_state=0)
# Standardize the features, fitting the scaler on the training set only (after the split) to avoid leakage
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.svm import LinearSVC
svc = LinearSVC(C=1e9)
svc.fit(X_train,y_train)
y_pred=svc.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred))
import seaborn as sns
from sklearn import metrics
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sns.heatmap(confusion_matrix, annot=True)
# Predict 2014 Results
ComtempDf2 = df[['year','term','code_module','Scotland','East Anglian Region','London Region','South Region','North Western Region',
'West Midlands Region','South West Region','East Midlands Region','South East Region','Wales',
'Yorkshire Region','North Region','gender','disability','b4_sum_clicks','half_sum_clicks','std_half_score',
'date_registration','age_band','module_presentation_length','num_of_prev_attempts','highest_education','imd_band','studied_credits','final_result']]
ComtempDf2 = ComtempDf2.loc[(ComtempDf2.year == 1)]
comtest = pd.DataFrame(ComtempDf2,columns= ['term', 'code_module','b4_sum_clicks', 'half_sum_clicks', 'std_half_score',
'date_registration', 'module_presentation_length', 'num_of_prev_attempts','highest_education', 'imd_band',
'studied_credits'])
y_pred_new = svc.predict(sc.transform(comtest))  # apply the same scaling used for training
y_test_new=ComtempDf2['final_result']
# Print the Accuracy
from sklearn import metrics
print('Accuracy: ',metrics.accuracy_score(y_test_new, y_pred_new))
from sklearn.metrics import classification_report
print(classification_report(y_test_new,y_pred_new))
###Output
precision recall f1-score support
0 0.88 0.81 0.85 9327
1 0.79 0.86 0.82 7465
accuracy 0.83 16792
macro avg 0.83 0.84 0.83 16792
weighted avg 0.84 0.83 0.84 16792
|
lt17_letter_combinations.ipynb | ###Markdown
https://leetcode.com/problems/letter-combinations-of-a-phone-number/
###Code
class Solution:
def letterCombinations(self, digits: str):
if not digits:
return []
num2ltrs = {
2:"abc",
3:"def",
4:"ghi",
5:"jkl",
6: "mno",
7:"pqrs",
8:"tuv",
9:"wxyz"
}
def dot_comb(l1,l2):
res = []
for i in l1:
for j in l2:
res.append(i+j)
return res
digits = list(map(int,list(digits)))
res = list(num2ltrs[digits.pop(0)])
while digits:
digit = digits.pop(0)
res = dot_comb(res,list(num2ltrs[digit]))
return res
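# Example usage (not part of the LeetCode submission): "23" should yield the nine
# combinations ad, ae, af, bd, be, bf, cd, ce, cf.
print(Solution().letterCombinations("23"))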
###Output
_____no_output_____ |
deep_learning/Exercise 2/Touros - Exercise 2 (Chatbot).ipynb | ###Markdown
Import Libraries
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import nltk
import pandas
import tensorflow as tf
from tensorflow.python.keras import backend
import wget
import zipfile
import os, fnmatch
import seaborn as sns
import pickle as pkl
import random
from nltk.tokenize import word_tokenize, TweetTokenizer
tokenizer = TweetTokenizer(preserve_case = False)
from gensim.models import Word2Vec, KeyedVectors
import re
###Output
_____no_output_____
###Markdown
Download and transform the data
###Code
# Download and extract the dataset
def fetch_data(web_file, local_dir='.'):
"""Download the `web_file`, assuming it is a web resource into the local_dir.
If a file with the same filename already exists in the local directory, do not
download it but return its path instead.
Arguments:
web_file: a web resource identifiable by a url (str)
local_dir: a local directory to download the web_file into (str)
Return: The local path to the file (str)
"""
file_name = local_dir + "/" + web_file.rsplit("/",1)[-1]
if os.path.exists(file_name):
return file_name
else:
file_name = wget.download(web_file, out=local_dir)
return file_name
data_filename = fetch_data('https://s3.amazonaws.com/pytorch-tutorial-assets/cornell_movie_dialogs_corpus.zip')
with zipfile.ZipFile(data_filename, 'r') as zip_ref:
zip_ref.extractall('.\data')
# load the movie lines
movie_lines_features = ["LineID", "Character", "Movie", "Name", "Line"]
movie_lines = pd.read_csv('.\\data\\cornell movie-dialogs corpus\\movie_lines.txt',
engine = "python",
index_col = False,
sep=' \+\+\+\$\+\+\+ ',
names = movie_lines_features)
# Using only the required columns, namely, "LineID" and "Line"
movie_lines = movie_lines[["LineID", "Line"]]
# Strip the space from "LineID" for further usage and change the datatype of "Line"
movie_lines["LineID"] = movie_lines["LineID"].apply(str.strip)
movie_lines.head()
# Load the conversations file
#movie_conversations_features = ["Character1", "Character2", "Movie", "Conversation"]
#movie_conversations = pd.read_csv('.\\data\\cornell movie-dialogs corpus\\movie_conversations.txt',
# sep = "\+\+\+\$\+\+\+",
# engine = "python",
# index_col = False,
# names = movie_conversations_features)
#
# Again using the required feature, "Conversation"
# movie_conversations = movie_conversations["Conversation"]
# Preprocessing and storing the conversation data. This takes too long to run, so we saved the result as a pickle
# conversation = [[str(list(movie_lines.loc[movie_lines["LineID"] == u.strip().strip("'"), "Line"])[0]).strip() for u in c.strip().strip('[').strip(']').split(',')] for c in movie_conversations]
#with open(".\\data\\conversations.pkl", "wb") as handle:
#pkl.dump(conversation, handle)
with open(r"data\conversations.pkl", "rb") as handle:
conversation = pkl.load(handle)
conversation
###Output
_____no_output_____
###Markdown
We now have a list of conversations, ready to use. Here are some statistics on these conversations:
###Code
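# A few quick statistics on the conversations we just loaded, as a sanity check
# before splitting them into question-answer pairs:
print("Number of conversations: {}".format(len(conversation)))
turns_per_conversation = [len(conv) for conv in conversation]
print("Total utterances: {}".format(sum(turns_per_conversation)))
print("Average turns per conversation: {:.2f}".format(np.mean(turns_per_conversation)))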
# Sort the sentences into questions (inputs) and answers (targets)
questions = []
answers = []
for conv in conversation:
for i in range(len(conv)-1):
questions.append(conv[i])
answers.append(conv[i+1])
# Check if we have loaded the data correctly
limit = 0
for i in range(limit, limit+5):
print(questions[i])
print(answers[i])
print()
# Compare lengths of questions and answers
print(len(questions))
print(len(answers))
def clean_text(text):
'''Clean text by removing unnecessary characters and altering the format of words.'''
text = text.lower()
text = re.sub(r"i'm", "i am", text)
text = re.sub(r"he's", "he is", text)
text = re.sub(r"she's", "she is", text)
text = re.sub(r"it's", "it is", text)
text = re.sub(r"that's", "that is", text)
text = re.sub(r"what's", "that is", text)
text = re.sub(r"where's", "where is", text)
text = re.sub(r"how's", "how is", text)
text = re.sub(r"\'ll", " will", text)
text = re.sub(r"\'ve", " have", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"\'d", " would", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"won't", "will not", text)
text = re.sub(r"can't", "cannot", text)
text = re.sub(r"n't", " not", text)
text = re.sub(r"n'", "ng", text)
text = re.sub(r"'bout", "about", text)
text = re.sub(r"'til", "until", text)
text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
return text
# Clean the data
clean_questions = []
for question in questions:
clean_questions.append(clean_text(question))
clean_answers = []
for answer in answers:
clean_answers.append(clean_text(answer))
# Take a look at some of the data to ensure that it has been cleaned well.
limit = 0
for i in range(limit, limit+5):
print(clean_questions[i])
print(clean_answers[i])
print()
# Find the length of sentences
lengths = []
for question in clean_questions:
lengths.append(len(question.split()))
for answer in clean_answers:
lengths.append(len(answer.split()))
# Create a dataframe so that the values can be inspected
lengths = pd.DataFrame(lengths, columns=['counts'])
lengths.describe()
# Remove questions and answers that are shorter than 2 words and longer than 20 words.
min_line_length = 2
max_line_length = 20
# Filter out the questions that are too short/long
short_questions_temp = []
short_answers_temp = []
i = 0
for question in clean_questions:
if len(question.split()) >= min_line_length and len(question.split()) <= max_line_length:
short_questions_temp.append(question)
short_answers_temp.append(clean_answers[i])
i += 1
# Filter out the answers that are too short/long
short_questions = []
short_answers = []
i = 0
for answer in short_answers_temp:
if len(answer.split()) >= min_line_length and len(answer.split()) <= max_line_length:
short_answers.append(answer)
short_questions.append(short_questions_temp[i])
i += 1
# Compare the number of lines we will use with the total number of lines.
print("# of questions:", len(short_questions))
print("# of answers:", len(short_answers))
print("% of data used: {}%".format(round(len(short_questions)/len(questions),4)*100))
# Create a dictionary for the frequency of the vocabulary
vocab = {}
for question in short_questions:
for word in question.split():
if word not in vocab:
vocab[word] = 1
else:
vocab[word] += 1
for answer in short_answers:
for word in answer.split():
if word not in vocab:
vocab[word] = 1
else:
vocab[word] += 1
# Remove rare words from the vocabulary.
# We will aim to replace fewer than 5% of words with <UNK>
# You will see this ratio soon.
threshold = 10
count = 0
for k,v in vocab.items():
if v >= threshold:
count += 1
print("Size of total vocab:", len(vocab))
print("Size of vocab we will use:", count)
# In case we want to use different vocabulary sizes for the source and target text,
# we can set different threshold values.
# Nonetheless, we will create dictionaries to provide a unique integer for each word.
questions_vocab_to_int = {}
word_num = 0
for word, count in vocab.items():
if count >= threshold:
questions_vocab_to_int[word] = word_num
word_num += 1
answers_vocab_to_int = {}
word_num = 0
for word, count in vocab.items():
if count >= threshold:
answers_vocab_to_int[word] = word_num
word_num += 1
# Add the unique tokens to the vocabulary dictionaries.
codes = ['<PAD>','<EOS>','<UNK>','<GO>']
for code in codes:
questions_vocab_to_int[code] = len(questions_vocab_to_int)+1
for code in codes:
answers_vocab_to_int[code] = len(answers_vocab_to_int)+1
# Create dictionaries to map the unique integers to their respective words.
# i.e. an inverse dictionary for vocab_to_int.
questions_int_to_vocab = {v_i: v for v, v_i in questions_vocab_to_int.items()}
answers_int_to_vocab = {v_i: v for v, v_i in answers_vocab_to_int.items()}
# Check the length of the dictionaries.
print(len(questions_vocab_to_int))
print(len(questions_int_to_vocab))
print(len(answers_vocab_to_int))
print(len(answers_int_to_vocab))
# Add the end of sentence token to the end of every answer.
for i in range(len(short_answers)):
short_answers[i] += ' <EOS>'
# Convert the text to integers.
# Replace any words that are not in the respective vocabulary with <UNK>
questions_int = []
for question in short_questions:
ints = []
for word in question.split():
if word not in questions_vocab_to_int:
ints.append(questions_vocab_to_int['<UNK>'])
else:
ints.append(questions_vocab_to_int[word])
questions_int.append(ints)
answers_int = []
for answer in short_answers:
ints = []
for word in answer.split():
if word not in answers_vocab_to_int:
ints.append(answers_vocab_to_int['<UNK>'])
else:
ints.append(answers_vocab_to_int[word])
answers_int.append(ints)
# Check the lengths
print(len(questions_int))
print(len(answers_int))
# Calculate what percentage of all words have been replaced with <UNK>
word_count = 0
unk_count = 0
for question in questions_int:
for word in question:
if word == questions_vocab_to_int["<UNK>"]:
unk_count += 1
word_count += 1
for answer in answers_int:
for word in answer:
if word == answers_vocab_to_int["<UNK>"]:
unk_count += 1
word_count += 1
unk_ratio = round(unk_count/word_count,4)*100
print("Total number of words:", word_count)
print("Number of times <UNK> is used:", unk_count)
print("Percent of words that are <UNK>: {}%".format(round(unk_ratio,3)))
# Sort questions and answers by the length of questions.
# This will reduce the amount of padding during training
# Which should speed up training and help to reduce the loss
sorted_questions = []
sorted_answers = []
for length in range(1, max_line_length+1):
for i in enumerate(questions_int):
if len(i[1]) == length:
sorted_questions.append(questions_int[i[0]])
sorted_answers.append(answers_int[i[0]])
print(len(sorted_questions))
print(len(sorted_answers))
print()
for i in range(3):
print(sorted_questions[i])
print(sorted_answers[i])
print()
def model_inputs():
    '''Create placeholders for inputs to the model'''
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return input_data, targets, lr, keep_prob
def process_encoding_input(target_data, vocab_to_int, batch_size):
    '''Remove the last word id from each batch and concat the <GO> to the beginning of each batch'''
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)
return dec_input
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, sequence_length):
'''Create the encoding layer'''
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, input_keep_prob = keep_prob)
enc_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
_, enc_state = tf.nn.bidirectional_dynamic_rnn(cell_fw = enc_cell,
cell_bw = enc_cell,
sequence_length = sequence_length,
inputs = rnn_inputs,
dtype=tf.float32)
return enc_state
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob, batch_size):
'''Decode the training data'''
attention_states = tf.zeros([batch_size, 1, dec_cell.output_size])
att_keys, att_vals, att_score_fn, att_construct_fn = \
tf.contrib.seq2seq.prepare_attention(attention_states,
attention_option="bahdanau",
num_units=dec_cell.output_size)
train_decoder_fn = tf.contrib.seq2seq.attention_decoder_fn_train(encoder_state[0],
att_keys,
att_vals,
att_score_fn,
att_construct_fn,
name = "attn_dec_train")
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,
train_decoder_fn,
dec_embed_input,
sequence_length,
scope=decoding_scope)
train_pred_drop = tf.nn.dropout(train_pred, keep_prob)
return output_fn(train_pred_drop)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob, batch_size):
'''Decode the prediction data'''
attention_states = tf.zeros([batch_size, 1, dec_cell.output_size])
att_keys, att_vals, att_score_fn, att_construct_fn = \
tf.contrib.seq2seq.prepare_attention(attention_states,
attention_option="bahdanau",
num_units=dec_cell.output_size)
infer_decoder_fn = tf.contrib.seq2seq.attention_decoder_fn_inference(output_fn,
encoder_state[0],
att_keys,
att_vals,
att_score_fn,
att_construct_fn,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
vocab_size,
name = "attn_dec_inf")
infer_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,
infer_decoder_fn,
scope=decoding_scope)
return infer_logits
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, vocab_to_int, keep_prob, batch_size):
'''Create the decoding cell and input the parameters for the training and inference decoding layers'''
with tf.variable_scope("decoding") as decoding_scope:
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, input_keep_prob = keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
weights = tf.truncated_normal_initializer(stddev=0.1)
biases = tf.zeros_initializer()
output_fn = lambda x: tf.contrib.layers.fully_connected(x,
vocab_size,
None,
scope=decoding_scope,
weights_initializer = weights,
biases_initializer = biases)
train_logits = decoding_layer_train(encoder_state,
dec_cell,
dec_embed_input,
sequence_length,
decoding_scope,
output_fn,
keep_prob,
batch_size)
decoding_scope.reuse_variables()
infer_logits = decoding_layer_infer(encoder_state,
dec_cell,
dec_embeddings,
vocab_to_int['<GO>'],
vocab_to_int['<EOS>'],
sequence_length - 1,
vocab_size,
decoding_scope,
output_fn, keep_prob,
batch_size)
return train_logits, infer_logits
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, answers_vocab_size,
questions_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers,
questions_vocab_to_int):
'''Use the previous functions to create the training and inference logits'''
enc_embed_input = tf.contrib.layers.embed_sequence(input_data,
answers_vocab_size+1,
enc_embedding_size,
initializer = tf.random_uniform_initializer(0,1))
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob, sequence_length)
dec_input = process_encoding_input(target_data, questions_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([questions_vocab_size+1, dec_embedding_size], 0, 1))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
train_logits, infer_logits = decoding_layer(dec_embed_input,
dec_embeddings,
enc_state,
questions_vocab_size,
sequence_length,
rnn_size,
num_layers,
questions_vocab_to_int,
keep_prob,
batch_size)
return train_logits, infer_logits
# Set the Hyperparameters
epochs = 100
batch_size = 128
rnn_size = 512
num_layers = 2
encoding_embedding_size = 512
decoding_embedding_size = 512
learning_rate = 0.005
learning_rate_decay = 0.9
min_learning_rate = 0.0001
keep_probability = 0.75
# Reset the graph to ensure that it is ready for training
#tf.compat.v1.reset_default_graph()
# Start the session
sess = tf.compat.v1.InteractiveSession()
# Load the model inputs
input_data, targets, lr, keep_prob = model_inputs()
# Sequence length will be the max line length for each batch
sequence_length = tf.placeholder_with_default(max_line_length, None, name='sequence_length')
# Find the shape of the input data for sequence_loss
input_shape = tf.shape(input_data)
# Create the training and inference logits
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(answers_vocab_to_int),
len(questions_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers,
questions_vocab_to_int)
# Create a tensor for the inference logits, needed if loading a checkpoint version of the model
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
indices = random.sample(range(len(conversation)), 50)
sample_context_list = []
sample_response_list = []
for index in indices:
response = clean_text(conversation[index][-1])
context = clean_text(conversation[index][0]) + "\n"
for i in range(1, len(conversation[index]) - 1):
if i % 2 == 0:
prefix = "FS: "
else:
prefix = "SS: "
        context += prefix + clean_text(conversation[index][i]) + "\n"
sample_context_list.append(context)
sample_response_list.append(response)
#with open("cornell_movie_dialogue_sample.csv", "w") as handle:
# for c, r in zip(sample_context_list, sample_response_list):
# handle.write('"' + c + '"' + "#" + r + "\n")
sample_context_list
sample_response_list
###Output
_____no_output_____ |
rgb_model.ipynb | ###Markdown
The next two cells are only needed for a Google Colab environment.
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd '/content/drive/My Drive/CodingProjects/skateboard_trick_classification'
import numpy as np
from keras.callbacks import EarlyStopping
from keras.layers import concatenate, Dense, Dropout, GlobalAveragePooling3D, Softmax
from keras.models import load_model, Model
from keras.optimizers import Adam
from sklearn.metrics import classification_report, confusion_matrix
from utils import config
from utils.data_generator import DataGenerator
from utils.i3d_inception import Inception_Inflated3d
###Output
_____no_output_____
###Markdown
Data Generators
###Code
training_generator = DataGenerator(config.VIDEO_TRAINING_DIR,
config.RGB_TRAINING_BATCH_SIZE,
is_training=True, rgb_data_only=True)
validation_generator = DataGenerator(config.VIDEO_VALIDATION_DIR,
config.RGB_VALIDATION_BATCH_SIZE,
is_training=False, rgb_data_only=True)
test_generator = DataGenerator(config.VIDEO_TEST_DIR,
config.RGB_TEST_BATCH_SIZE,
is_training=False, rgb_data_only=True)
###Output
_____no_output_____
###Markdown
Train RGB Model
###Code
input_shape = (None, config.RGB_FRAME_HEIGHT, config.RGB_FRAME_WIDTH, config.CHANNELS)
i3d_model = Inception_Inflated3d(include_top=False, weights='rgb_imagenet_and_kinetics',
input_shape=input_shape,
classes=config.RGB_N_CLASSES)
x = Dropout(0.5)(i3d_model.output)
x = Dense(config.RGB_N_CLASSES, name='rgb_predictions')(x)
output = Softmax()(x)
model = Model(inputs=i3d_model.input, outputs=output)
model.compile(optimizer=Adam(lr=0.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
early_stopping = EarlyStopping(patience=4, restore_best_weights=True, verbose=1)
model.fit_generator(training_generator,
epochs=100,
validation_data=validation_generator,
callbacks=[early_stopping])
# recompiling the model will reduce the size of the saved model by resetting the optimizer state
model.compile(optimizer=Adam(lr=0.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.save(config.RGB_MODEL_FILEPATH)
###Output
_____no_output_____
###Markdown
Evaluate Model
###Code
model = load_model(config.RGB_MODEL_FILEPATH)
###Output
_____no_output_____
###Markdown
Validation Set
###Code
y_true = validation_generator.labels
predictions = model.predict_generator(validation_generator)
y_pred = np.argmax(predictions, axis=1)
report = classification_report(y_true, y_pred,
target_names=config.RGB_CLASS_NAMES,
digits=4)
print(report)
print(confusion_matrix(y_true, y_pred))
###Output
precision recall f1-score support
kickflip 0.5714 0.5581 0.5647 43
360_kickflip 0.6154 0.5714 0.5926 42
50-50 0.4800 0.5333 0.5053 45
nosegrind 0.4043 0.4419 0.4222 43
boardslide 0.6216 0.5476 0.5823 42
tailslide 0.5833 0.6087 0.5957 46
fail 0.8718 0.8293 0.8500 41
accuracy 0.5828 302
macro avg 0.5925 0.5843 0.5875 302
weighted avg 0.5897 0.5828 0.5853 302
[[24 10 2 4 2 1 0]
[12 24 0 5 0 0 1]
[ 1 0 24 5 5 9 1]
[ 3 0 10 19 4 4 3]
[ 0 0 9 4 23 6 0]
[ 1 0 5 9 3 28 0]
[ 1 5 0 1 0 0 34]]
###Markdown
Test Set
###Code
y_true = test_generator.labels
predictions = model.predict_generator(test_generator)
y_pred = np.argmax(predictions, axis=1)
report = classification_report(y_true, y_pred,
target_names=config.RGB_CLASS_NAMES,
digits=4)
print(report)
print(confusion_matrix(y_true, y_pred))
###Output
precision recall f1-score support
kickflip 0.7143 0.4000 0.5128 25
360_kickflip 0.5909 0.5200 0.5532 25
50-50 0.4500 0.3600 0.4000 25
nosegrind 0.3611 0.5200 0.4262 25
boardslide 0.3667 0.4400 0.4000 25
tailslide 0.3750 0.4800 0.4211 25
fail 1.0000 0.8400 0.9130 25
accuracy 0.5086 175
macro avg 0.5511 0.5086 0.5180 175
weighted avg 0.5511 0.5086 0.5180 175
[[10 7 0 4 1 3 0]
[ 4 13 1 3 2 2 0]
[ 0 0 9 2 5 9 0]
[ 0 1 4 13 6 1 0]
[ 0 0 5 5 11 4 0]
[ 0 0 1 9 3 12 0]
[ 0 1 0 0 2 1 21]]
|
legacy/Getting Meta with Big Data Malaysia.ipynb | ###Markdown
Getting Meta with Big Data Malaysia
Scraping the Big Data Malaysia Facebook group for fun. Profit unlikely.
Hello World
This is an introductory-level notebook demonstrating how to deal with a small but meaty dataset. Things we will do here include:
* Loading a JSON dataset.
* Dealing with a minor data quality issue.
* Handling timestamps.
* Dataset slicing and dicing.
* Plotting histograms.
A "follow the data" approach will be taken. This notebook may appear quite long, but a good portion of the length is pretty-printing of raw data which no one is expected to read in its entirety; it's there to skim to get an idea of the structure of our data.
Get all the data
This notebook assumes you have already prepared a flattened JSON file into `all_the_data.json`, which you would have done by:
* Writing your oauth token into `oauth_file` according to the instructions in `pull_feed.py`.
* Running `python pull_feed.py` to pull down the feed pages into the BigDataMyData directory.
* Running `python flatten_saved_data.py > all_the_data.json`.
###Code
# we need this for later:
%matplotlib inline
import json
INPUT_FILE = "all_the_data.json"
with open(INPUT_FILE, "r") as big_data_fd:
big_data = json.load(big_data_fd)
###Output
_____no_output_____
###Markdown
Is it big enough?
Now we have all our data loaded into variable `big_data`, but can we really say it's Big Data?
###Code
print "We have {} posts".format(len(big_data))
###Output
We have 1946 posts
###Markdown
Wow! So data! Very big!
Seriously though... it's not big. In fact it's rather small. How small is small? Here's a clue...
###Code
import os
print "The source file is {} bytes. Pathetic.".format(os.stat(INPUT_FILE).st_size)
###Output
The source file is 3773450 bytes. Pathetic.
###Markdown
At the time this was written, the file was just about 3MB, and there were fewer than 2k posts... note that excludes comments made on posts, but still, this stuff is small. It is small enough that at no point do we need to do anything clever from a data indexing/caching/storage perspective, so to start we will take the simplistic but often appropriate approach of slicing and dicing our `big_data` object directly. Later on we'll get into `pandas` `DataFrame` objects.
Anyway, size doesn't matter. It's *variety* that counts.
Fields of gold
Now we know how many elements (rows I guess?) we have, but how much variety do we have in this data? One measure of this may be to look at the number of fields in each of those items:
###Code
import itertools
all_the_fields = set(itertools.chain.from_iterable(big_data))
print "We have {} different field names:".format(len(all_the_fields))
print all_the_fields
###Output
We have 30 different field names:
set([u'application', u'actions', u'likes', u'created_time', u'message', u'id', u'story', u'from', u'subscribed', u'privacy', u'comments', u'shares', u'to', u'story_tags', u'type', u'status_type', u'picture', u'description', u'object_id', u'link', u'properties', u'icon', u'name', u'message_tags', u'with_tags', u'updated_time', u'caption', u'place', u'source', u'is_hidden'])
###Markdown
Are we missing anything? A good way to sanity check things is to actually inspect the data, so let's look at a random item:
###Code
import random
import pprint
# re-run this as much as you like to inspect different items
pprint.pprint(random.choice(big_data))
###Output
{u'actions': [{u'link': u'https://www.facebook.com/497068793653308/posts/961960940497422',
u'name': u'Comment'},
{u'link': u'https://www.facebook.com/497068793653308/posts/961960940497422',
u'name': u'Like'},
{u'link': u'/groups/bigdatamy/', u'name': u'Create Group Chat'}],
u'application': {u'id': u'183319479511',
u'name': u'Hootsuite',
u'namespace': u'hootsuiteprod'},
u'created_time': u'2014-10-21T02:15:17+0000',
u'from': {u'id': u'10152418624011789', u'name': u'John F.X. Berns'},
u'id': u'497068793653308_961960940497422',
u'is_hidden': False,
u'message': u'Hadoop World: The executive dashboard is on the way out - http://ow.ly/D3eF1',
u'privacy': {u'allow': u'',
u'deny': u'',
u'description': u'',
u'friends': u'',
u'value': u''},
u'to': {u'data': [{u'id': u'497068793653308',
u'name': u'Big Data Malaysia'}]},
u'type': u'status',
u'updated_time': u'2014-10-21T02:15:17+0000'}
###Markdown
From that you should be able to sense that we are missing some things - it isn't simply that there are some number of fields that describe each item, because some of those fields have data hierarchies beneath them, for example:
###Code
pprint.pprint(big_data[234])
###Output
{u'actions': [{u'link': u'https://www.facebook.com/497068793653308/posts/1032324310127751',
u'name': u'Comment'},
{u'link': u'https://www.facebook.com/497068793653308/posts/1032324310127751',
u'name': u'Like'},
{u'link': u'/groups/bigdatamy/', u'name': u'Create Group Chat'}],
u'comments': [{u'data': [{u'can_remove': True,
u'created_time': u'2015-02-02T14:20:46+0000',
u'from': {u'id': u'10203864949854090',
u'name': u'Teuku Faruq'},
u'id': u'1033140356712813',
u'like_count': 1,
u'message': u'Interesting startup, all the best!',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-04T07:45:13+0000',
u'from': {u'id': u'10203477707997024',
u'name': u'Syed Ahmad Fuqaha'},
u'id': u'1034073379952844',
u'like_count': 0,
u'message': u'Thank you Teuku Faruq!',
u'message_tags': [{u'id': u'10203864949854090',
u'length': 11,
u'name': u'Teuku Faruq',
u'offset': 10,
u'type': u'user'}],
u'user_likes': False}],
u'paging': {u'cursors': {u'after': u'WTI5dGJXVnVkRjlqZFhKemIzSTZNVEF6TkRBM016TTNPVGsxTWpnME5Eb3hOREl6TURNMU9URXo=',
u'before': u'WTI5dGJXVnVkRjlqZFhKemIzSTZNVEF6TXpFME1ETTFOamN4TWpneE16b3hOREl5T0RnMk9EUTI='}}}],
u'created_time': u'2015-02-01T05:42:29+0000',
u'from': {u'id': u'10203477707997024', u'name': u'Syed Ahmad Fuqaha'},
u'id': u'497068793653308_1032324310127751',
u'is_hidden': False,
u'likes': {u'data': [{u'id': u'10202838533911870',
u'name': u'Firdaus Adib'},
{u'id': u'10203864949854090', u'name': u'Teuku Faruq'},
{u'id': u'10152208407631596',
u'name': u'Bok Cabradilla'},
{u'id': u'10152535806106552', u'name': u'Brian Ho'},
{u'id': u'10152539541773459', u'name': u'Mohd Naim'},
{u'id': u'10152629084603737', u'name': u'Tajul Azhar'},
{u'id': u'10152569565771111',
u'name': u'Daniel Walters'},
{u'id': u'10154115325260227',
u'name': u'AWoon Haw Brando'},
{u'id': u'10204389172318345',
u'name': u'Fairul Syarmil'},
{u'id': u'10152528794631844',
u'name': u'Sandra Hanchard'}],
u'paging': {u'cursors': {u'after': u'MTAxNTI1Mjg3OTQ2MzE4NDQ=',
u'before': u'MTAyMDI4Mzg1MzM5MTE4NzA='}}},
u'message': u'Thank you for the approval. Im part of www.katsana.com, a local startup specializing in GPS tracking & fleet management system. Hope to be able to contribute to this group and learn from the masters.',
u'privacy': {u'allow': u'',
u'deny': u'',
u'description': u'',
u'friends': u'',
u'value': u''},
u'to': {u'data': [{u'id': u'497068793653308',
u'name': u'Big Data Malaysia'}]},
u'type': u'status',
u'updated_time': u'2015-02-04T07:45:13+0000'}
###Markdown
From that we can see some fields have hierarchies within them. For example, likes hold a list of id dictionaries, which happen to be relatively trivial (names and ids... I wonder why Facebook didn't just post the id and make you look up the name?). The comments field is a bit more complex: it contains a list of dictionaries, with each field potentially being a dictionary of its own. For example, we can see that the second comment on that post tagged Teuku Faruq:
###Code
pprint.pprint(big_data[234]['comments'][0]['data'][1]['message_tags'])
###Output
[{u'id': u'10203864949854090',
u'length': 11,
u'name': u'Teuku Faruq',
u'offset': 10,
u'type': u'user'}]
###Markdown
Data quality annoyances
Actually I'm not even sure why the `comments` field is a single entry list. Is that always the case?
###Code
set([len(data['comments']) for data in big_data if 'comments' in data])
###Output
_____no_output_____
###Markdown
Apparently that's not always the case: sometimes there are 2 items in the list. Let's see what that looks like...
###Code
multi_item_comment_lists = [data['comments'] for data in big_data if ('comments' in data) and (len(data['comments']) > 1)]
print len(multi_item_comment_lists)
pprint.pprint(multi_item_comment_lists[0])
###Output
4
[{u'data': [{u'can_remove': True,
u'created_time': u'2015-02-27T03:39:29+0000',
u'from': {u'id': u'10152465206977702', u'name': u'Peter Ho'},
u'id': u'1049191648441017',
u'like_count': 0,
u'message': u'Peter the slide share has 404 message?',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T03:43:23+0000',
u'from': {u'id': u'10152075362431725',
u'name': u'Tirath Ramdas'},
u'id': u'1049192758440906',
u'like_count': 0,
u'message': u'Works for me',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T03:43:46+0000',
u'from': {u'id': u'10152934839784580', u'name': u'Peter Kua'},
u'id': u'1049192845107564',
u'like_count': 0,
u'message': u'works from side too Peter Ho',
u'message_tags': [{u'id': u'10152465206977702',
u'length': 8,
u'name': u'Peter Ho',
u'offset': 20,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T03:44:16+0000',
u'from': {u'id': u'10152465206977702', u'name': u'Peter Ho'},
u'id': u'1049193048440877',
u'like_count': 0,
u'message': u'Must be me then',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T03:47:59+0000',
u'from': {u'id': u'10100974540589758',
u'name': u'Daniel Jean-Pierre Riveong'},
u'id': u'1049194295107419',
u'like_count': 1,
u'message': u'Slideshare link doesn\'t work for me. It looks like so: "http://www.slideshare.net/p\u2026/big-data-week-kuala-lumpur-2015"',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T03:48:21+0000',
u'from': {u'id': u'10100974540589758',
u'name': u'Daniel Jean-Pierre Riveong'},
u'id': u'1049194378440744',
u'like_count': 1,
u'message': u'Clicking on the image works though.',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T03:55:29+0000',
u'from': {u'id': u'10152415710124319',
u'name': u'Ng Swee Meng'},
u'id': u'1049196715107177',
u'like_count': 0,
u'message': u'It feels different from the first one...',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T04:06:48+0000',
u'from': {u'id': u'10202993457636721',
u'name': u'Balaganesh Latchmanan'},
u'id': u'1049199875106861',
u'like_count': 0,
u'message': u'Murali Shankar',
u'message_tags': [{u'id': u'10152872458587148',
u'length': 14,
u'name': u'Murali Shankar',
u'offset': 0,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T04:19:25+0000',
u'from': {u'id': u'10152075362431725',
u'name': u'Tirath Ramdas'},
u'id': u'1049203205106528',
u'like_count': 1,
u'message': u"Ah, right, yes, the link in Peter's message somehow got mangled, but the link that Facebook extracted into the preview image does work :) so, just click on the image.",
u'message_tags': [{u'id': u'10152934839784580',
u'length': 5,
u'name': u'Peter',
u'offset': 28,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T04:27:31+0000',
u'from': {u'id': u'10152528794631844',
u'name': u'Sandra Hanchard'},
u'id': u'1049206171772898',
u'like_count': 1,
u'message': u'Here is the link again: http://www.slideshare.net/petekua/big-data-week-kuala-lumpur-2015',
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-02-27T04:36:10+0000',
u'from': {u'id': u'10152322247874367',
u'name': u'Heislyc Loh'},
u'id': u'1049208688439313',
u'like_count': 0,
u'message': u'Support!',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T04:46:50+0000',
u'from': {u'id': u'10152934839784580', u'name': u'Peter Kua'},
u'id': u'1049211788439003',
u'like_count': 1,
u'message': u'I corrected the mangled link :p',
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-02-27T05:01:58+0000',
u'from': {u'id': u'10152934839784580', u'name': u'Peter Kua'},
u'id': u'1049216228438559',
u'like_count': 1,
u'message': u'btw the 2-day expo is FOC',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T06:39:24+0000',
u'from': {u'id': u'10155151622655483',
u'name': u'Norhidayah Azman'},
u'id': u'1049240855102763',
u'like_count': 0,
u'message': u'how do we send proposals for thu/fri? any example proposals we could use as a guide?',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T09:26:37+0000',
u'from': {u'id': u'10152934839784580', u'name': u'Peter Kua'},
u'id': u'1049283101765205',
u'like_count': 3,
u'message': u"Norhidayah Azman we don't have a specific proposal template as it is free form. you can organize a bda workshop, demo, tutorial, hackathon, etc. let us know about it and we will work with you to make sure it gets maximim exposure and is a success.",
u'message_tags': [{u'id': u'10155151622655483',
u'length': 16,
u'name': u'Norhidayah Azman',
u'offset': 0,
u'type': u'user'}],
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-02-27T09:40:43+0000',
u'from': {u'id': u'1102366916447454',
u'name': u'S S Mohd Fauzi'},
u'id': u'1049286985098150',
u'like_count': 0,
u'message': u'Any opportunity for the researcher? i.e to discuss findings etc?',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T16:28:55+0000',
u'from': {u'id': u'10152075362431725',
u'name': u'Tirath Ramdas'},
u'id': u'1049460655080783',
u'like_count': 1,
u'message': u"S S Mohd Fauzi last year UTAR organised a data mining workshop: http://www.utar.edu.my/econtent_sub.jsp?fcatid=16&fcontentid=10554. IMHO there is certainly room for an academic component to the week. I don't know if UTAR will be doing it again, but I hope some academic institution will do something.",
u'message_tags': [{u'id': u'1102366916447454',
u'length': 14,
u'name': u'S S Mohd Fauzi',
u'offset': 0,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T16:48:26+0000',
u'from': {u'id': u'10155151622655483',
u'name': u'Norhidayah Azman'},
u'id': u'1049468408413341',
u'like_count': 4,
u'message': u"Funny u should say that Tirath :D\nI'm a lecturer at USIM, Nilai, and I was thinking exactly along those lines - adding an academic component to BDW :)\nI went to last year's BDW, and I'm keen to setup an event this year, but I'm a Big Data newbie myself! And we're a bit short on staff who can lead workshops and such. So I'm still mulling how best to proceed. I'm thinking a discussion panel of sorts. What would you guys like to see for an academic component to BDW?",
u'message_tags': [{u'id': u'10152075362431725',
u'length': 6,
u'name': u'Tirath',
u'offset': 24,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-27T23:06:38+0000',
u'from': {u'id': u'10152528794631844',
u'name': u'Sandra Hanchard'},
u'id': u'1049615545065294',
u'like_count': 1,
u'message': u'There were at least 4 Unis hosting events last year (UTAR, Taylors, Sunway, MMU) if I recall correctly. Norhidayah Azman how about a panel debating ethics / privacy / surveillance concerns of big data? Or access to big data by humanities fields / digitial methods / skill gaps / complementing big data with small data / societal problems that can or cant be addressed? Depends on background of your department I guess :)',
u'message_tags': [{u'id': u'10155151622655483',
u'length': 16,
u'name': u'Norhidayah Azman',
u'offset': 104,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T00:07:17+0000',
u'from': {u'id': u'10152075362431725',
u'name': u'Tirath Ramdas'},
u'id': u'1049636835063165',
u'like_count': 3,
u'message': u"I didn't attend the UTAR event, but from some feedback and their own report it is evident that the academics got an opportunity to present some of their students research findings (though I have no idea if they had a CFP or how exactly they picked speakers), so it was a good academic event, and Sandra is quite right, other academic institutions organised other things too, including Taylors. Taylors and UTAR both featured international speakers as well. MMU I believe hosted a hackathon, which they graciously opened to the public. There were so many things going on during BDW'14 that I might have forgotten some :)\n\nI think BDW would be a good time to organise such an academic event because you will get free promotional support, and also there will be some international guests who can attend your event. That said, I know the logistics are not trivial, so better start on it right now! In BDW'13 also there was interest in creating such a track, but it just could not come together due to the challenge of logistics.\n\nNorhidayah Azman if you are interested, I strongly urge you to chat with Peter Kua immediately to explore the idea - even if in the end tak jadi never mind, the important thing is to start very soon or there will be no way it can happen. By involving MDeC from the start at least they can reduce the odds of clashes for the time slot, and link you up with other interested parties.",
u'message_tags': [{u'id': u'10155151622655483',
u'length': 16,
u'name': u'Norhidayah Azman',
u'offset': 1026,
u'type': u'user'},
{u'id': u'10152934839784580',
u'length': 9,
u'name': u'Peter Kua',
u'offset': 1099,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T00:32:58+0000',
u'from': {u'id': u'10152934839784580', u'name': u'Peter Kua'},
u'id': u'1049645301728985',
u'like_count': 2,
u'message': u'Norhidayah Azman, you can download the BDWKL2014 post-mortem report and read about the various events hosted by the 4 universities. The link: http://bigdataanalytics.my/downloads/BDW2014-Post-Event-Report.pdf. We will be more than happy to work with you on your idea.',
u'message_tags': [{u'id': u'10155151622655483',
u'length': 16,
u'name': u'Norhidayah Azman',
u'offset': 0,
u'type': u'user'}],
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-02-28T00:37:21+0000',
u'from': {u'id': u'1102366916447454',
u'name': u'S S Mohd Fauzi'},
u'id': u'1049646511728864',
u'like_count': 3,
u'message': u'Yeah, why not we organize BD workshop and do CFP (it is some kind of mini seminar to present findings related to big data). It is a good start to engage academicians with industry...',
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-02-28T00:37:21+0000',
u'from': {u'id': u'10152934839784580', u'name': u'Peter Kua'},
u'id': u'1049646515062197',
u'like_count': 1,
u'message': u'Tang Wern Tien',
u'message_tags': [{u'id': u'10153393479107259',
u'length': 14,
u'name': u'Tang Wern Tien',
u'offset': 0,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T00:45:50+0000',
u'from': {u'id': u'10152075362431725',
u'name': u'Tirath Ramdas'},
u'id': u'1049649165061932',
u'like_count': 0,
u'message': u"S S Mohd Fauzi that would be great, but I am mindful of the fact that there's not much time left, so maybe the CFP could be limited to abstracts for posters from students, with invited speakers? Or some other configuration...",
u'message_tags': [{u'id': u'1102366916447454',
u'length': 14,
u'name': u'S S Mohd Fauzi',
u'offset': 0,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T00:51:42+0000',
u'from': {u'id': u'1102366916447454',
u'name': u'S S Mohd Fauzi'},
u'id': u'1049652655061583',
u'like_count': 2,
u'message': u'Yes, considering the time left, CFP for abstract, posters would be great. But, where to start? I can come out with CFP.',
u'user_likes': True}],
u'paging': {u'cursors': {u'after': u'WTI5dGJXVnVkRjlqZFhKemIzSTZNVEEwT1RZMU1qWTFOVEEyTVRVNE16b3hOREkxTURnME56QXk=',
u'before': u'WTI5dGJXVnVkRjlqZFhKemIzSTZNVEEwT1RFNU1UWTBPRFEwTVRBeE56b3hOREkxTURBNE16WTU='},
u'next': u'https://graph.facebook.com/v2.0/497068793653308_1049188861774629/comments?access_token=CAACEdEose0cBAAe7p7jqgiBC2WSxdVY24YgpnNob6a7fPEMP16LZBP2JUP99n7xqpu3C84g4X6pJ932ZA6JYwtHES6DfhKxKexqUkhdIYpU0ocN0Wozah4mzdtlTNhjwGj6xcPobZCvQbvLSIERbKFFtg2NpLyF6VCCe75U5oZBVsjzxxZAKC1c0CCwvGkGJUZAbKz6VzS3jnqckrwUfg7dGzQLnXQzU4ZD%0A&limit=25&after=WTI5dGJXVnVkRjlqZFhKemIzSTZNVEEwT1RZMU1qWTFOVEEyTVRVNE16b3hOREkxTURnME56QXk%3D'}},
{u'data': [{u'can_remove': True,
u'created_time': u'2015-02-28T01:16:19+0000',
u'from': {u'id': u'10152322247874367',
u'name': u'Heislyc Loh'},
u'id': u'1049659495060899',
u'like_count': 0,
u'message': u"What's CFP?",
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T01:17:37+0000',
u'from': {u'id': u'1102366916447454',
u'name': u'S S Mohd Fauzi'},
u'id': u'1049660125060836',
u'like_count': 1,
u'message': u'Its is call for paper (CFP)',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T01:27:27+0000',
u'from': {u'id': u'10152322247874367',
u'name': u'Heislyc Loh'},
u'id': u'1049663078393874',
u'like_count': 2,
u'message': u'[yet to be concrete] I\'d like to propose a pre-hackathon workshop, to work with data scientist and practitioners, data source providers and custodian (public & private), to work towards crunching / capturing / collecting a sample data sets, to prep. upcoming hackathon, e.g. AngelHack (June), BDA (MDeC-Q4) etc. Don\'t have a framework yet, would like to invite inputs.\n\nRational:\n1) You can\'t just go straight into hackathon without much prep works for Big Data project, previous outcome doesn\'t seems impactful enough to me\n\n2) It is a common challenge to look for relevant data sets for the purpose of hackathon\n\n3) Hackathon is essentially a project-based leaning activity\n\n4) I think this is useful to attract more data-driven developer and business manager interest\n\nIn a nutshell, we could see this as a "dataset prep. workshop"',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T01:36:11+0000',
u'from': {u'id': u'10152528794631844',
u'name': u'Sandra Hanchard'},
u'id': u'1049665565060292',
u'like_count': 2,
u'message': u"Great idea Heislyc Loh In any quant analysis, a significant amount of work always goes into data preparation & it's important for quality of outputs/insights. If the workshop included showcasing a number of tools that can expedite data preparation; as well as discussion of sensitivities of using third-party data sources - that would be great contribution.",
u'message_tags': [{u'id': u'10152322247874367',
u'length': 11,
u'name': u'Heislyc Loh',
u'offset': 11,
u'type': u'user'}],
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-02-28T02:41:23+0000',
u'from': {u'id': u'10152075362431725',
u'name': u'Tirath Ramdas'},
u'id': u'1049687728391409',
u'like_count': 1,
u'message': u'S S Mohd Fauzi I think the starting point should be to figure out logistics options, in particular what suitable venues might be available when. That will be sufficient information for the initial CFP (e.g. if venue options are all in KL you can say "venue TBD but within Kuala Lumpur", and lock in a specific date as well), later you can update the CFP with the specific location and time, and then work out invited speaker spots, minimal F&B, and sponsorship requirements... best to discuss with MDeC and other who have expressed interest above!\n\nHeislyc Loh the pre-hackathon workshop idea is definitely a good one, as you say it\'s an opportunity to pick up skills but also to form teams. It is something we tried to do in BDW\'13, but unfortunately that year we decided to abort the hackathon idea because there was too much uncertainty around the date of the General Election (it looked like the GE could fall on the weekend we were planning to have the hackathon... turned out to be the week later, but we didn\'t know that in time). Despite dropping the hackathon, the workshop still proceeded, thanks to Ng Swee Meng.\n\nAll good ideas everyone, you should push through with them! I wish I could help out more, but alas this year I am squarely in the NATO* category because I will most likely be overseas for the entire period. (*No Action, Talk Only :P)',
u'message_tags': [{u'id': u'1102366916447454',
u'length': 14,
u'name': u'S S Mohd Fauzi',
u'offset': 0,
u'type': u'user'},
{u'id': u'10152322247874367',
u'length': 11,
u'name': u'Heislyc Loh',
u'offset': 549,
u'type': u'user'},
{u'id': u'10152415710124319',
u'length': 12,
u'name': u'Ng Swee Meng',
u'offset': 1110,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T02:57:01+0000',
u'from': {u'id': u'10155151622655483',
u'name': u'Norhidayah Azman'},
u'id': u'1049696108390571',
u'like_count': 3,
u'message': u"My department does Information Security and Assurance, so we can gladly organize a panel on security/ethics/privacy/surveillance :D\nI'm aware that there were other security tracks at BDW14, will there be any clashes this year?\n\nAbt CFPs, I'm open to inviting researchers to present their work - S S Mohd Fauzi I could ask my bosses if we could host your CFP. But it's unlikely the papers will be peer reviewed, let alone get published/indexed. Will this be ok?",
u'message_tags': [{u'id': u'1102366916447454',
u'length': 14,
u'name': u'S S Mohd Fauzi',
u'offset': 295,
u'type': u'user'}],
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-02-28T03:00:39+0000',
u'from': {u'id': u'1102366916447454',
u'name': u'S S Mohd Fauzi'},
u'id': u'1049699815056867',
u'like_count': 0,
u'message': u'That would be great a great start...',
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-02-28T03:01:12+0000',
u'from': {u'id': u'1102366916447454',
u'name': u'S S Mohd Fauzi'},
u'id': u'1049701061723409',
u'like_count': 1,
u'message': u'Norhidayah Azman',
u'message_tags': [{u'id': u'10155151622655483',
u'length': 16,
u'name': u'Norhidayah Azman',
u'offset': 0,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-03-02T06:42:36+0000',
u'from': {u'id': u'10152075362431725',
u'name': u'Tirath Ramdas'},
u'id': u'1050910038269178',
u'like_count': 0,
u'message': u"Perhaps you could do a basic peer review just to ensure relevance, but maybe save novelty+impact scoring for next time. Since reviews can be done remotely I'd be happy to assist with that if it would help (I used to review CS papers years ago - rusty now, but hopefully a bit like riding a bike).",
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-03-02T11:37:18+0000',
u'from': {u'id': u'10152184193128773',
u'name': u'Norashikin Abdul Hamid'},
u'id': u'1051069438253238',
u'like_count': 1,
u'message': u"what about free opportunity for newbies/startups to try out all the different tools related to big data and vendors providing support? \nHi Peter Kua! Didn't get to catch up with you further on this topic :)",
u'message_tags': [{u'id': u'10152934839784580',
u'length': 9,
u'name': u'Peter Kua',
u'offset': 139,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-03-03T14:40:26+0000',
u'from': {u'id': u'10155151622655483',
u'name': u'Norhidayah Azman'},
u'id': u'1051719904854858',
u'like_count': 1,
u'message': u"Sorry guys but it's a no go on my side - the date's too soon and there's too much paperwork to get everything done in time! :(\nHowever, I'm very happy to pitch in and collaborate if anybody needs a hand with their events!",
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-03-04T07:19:11+0000',
u'from': {u'id': u'10152075362431725',
u'name': u'Tirath Ramdas'},
u'id': u'1052149544811894',
u'like_count': 0,
u'message': u"Thanks for trying anyway Norhidayah Azman. I hope you and others are not too discouraged - although you are right that it will be very hard to organize in time for BDW'15, perhaps it could be organized as a standalone event slated for a few months after BDW'15? Say a small workshop with a minimal CFP component, then with more forward planning a full conference to coincide with BDW'16?",
u'message_tags': [{u'id': u'10155151622655483',
u'length': 16,
u'name': u'Norhidayah Azman',
u'offset': 25,
u'type': u'user'}],
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-03-04T12:14:17+0000',
u'from': {u'id': u'10155151622655483',
u'name': u'Norhidayah Azman'},
u'id': u'1052237138136468',
u'like_count': 2,
u'message': u"Sounds good :) Do we already hv the dates for BDW'16? Gonna need to write up a kertas kerja asap :P",
u'user_likes': True},
{u'can_remove': True,
u'created_time': u'2015-03-05T12:23:20+0000',
u'from': {u'id': u'10153393479107259',
u'name': u'Tang Wern Tien'},
u'id': u'1052813858078796',
u'like_count': 4,
u'message': u"Hi guys, If you are interested to explore on the prospect of hosting a partner event in this year's BDW, please drop me an email asap with your contact details and we can discuss from there. My email is [email protected].",
u'user_likes': False},
{u'can_remove': True,
u'created_time': u'2015-03-06T03:45:25+0000',
u'from': {u'id': u'10152934839784580', u'name': u'Peter Kua'},
u'id': u'1053171694709679',
u'like_count': 1,
u'message': u'Norhidayah Azman, no concrete dates for BDW16 yet, but it is usually held end Q1 or beginning Q2 :)',
u'message_tags': [{u'id': u'10155151622655483',
u'length': 16,
u'name': u'Norhidayah Azman',
u'offset': 0,
u'type': u'user'}],
u'user_likes': False}],
u'paging': {u'cursors': {u'after': u'WTI5dGJXVnVkRjlqZFhKemIzSTZNVEExTXpFM01UWTVORGN3T1RZM09Ub3hOREkxTmpFek5USTE=',
u'before': u'WTI5dGJXVnVkRjlqZFhKemIzSTZNVEEwT1RZMU9UUTVOVEEyTURnNU9Ub3hOREkxTURnMk1UYzU='},
u'previous': u'https://graph.facebook.com/v2.0/497068793653308_1049188861774629/comments?limit=25&access_token=CAACEdEose0cBAAe7p7jqgiBC2WSxdVY24YgpnNob6a7fPEMP16LZBP2JUP99n7xqpu3C84g4X6pJ932ZA6JYwtHES6DfhKxKexqUkhdIYpU0ocN0Wozah4mzdtlTNhjwGj6xcPobZCvQbvLSIERbKFFtg2NpLyF6VCCe75U5oZBVsjzxxZAKC1c0CCwvGkGJUZAbKz6VzS3jnqckrwUfg7dGzQLnXQzU4ZD%0A&before=WTI5dGJXVnVkRjlqZFhKemIzSTZNVEEwT1RZMU9UUTVOVEEyTURnNU9Ub3hOREkxTURnMk1UYzU%3D'}}]
###Markdown
Skimming the above it looks as though very long comment threads are split into multiple "pages" in the `comments` list. This may be an artifact of the paging code in `pull_feed.py`, which is not ideal. At some point we may fix it there, but for the time being we'll just consider it a data quality inconvenience that we will have to deal with. Here's a function to work around this annoyance:
###Code
def flatten_comments_pages(post):
flattened_comments = []
for page in post:
flattened_comments += page['data']
return flattened_comments
post_comments_paged = multi_item_comment_lists[0]
print "Post has {} comments".format(len(flatten_comments_pages(post_comments_paged)))
###Output
Post has 40 comments
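###Markdown
For reference, here is a rough sketch of how the pagination could instead be resolved at fetch time (e.g. inside `pull_feed.py`) by following the `next` URL that the Graph API returns alongside each page. This is only an illustration, not the code that produced this dataset; it assumes the `requests` library is available and that a `paging.next` link is present whenever more comments remain.
###Code
import requests

def fetch_all_comments(first_page):
    """Follow Graph API 'next' links until every page of comments has been collected."""
    comments = list(first_page.get('data', []))
    next_url = first_page.get('paging', {}).get('next')
    while next_url:
        page = requests.get(next_url).json()
        comments += page.get('data', [])
        next_url = page.get('paging', {}).get('next')
    return comments
###Output
_____no_output_____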
###Markdown
Start plotting things already dammit. Now that we're counting comments, it's natural to ask: what does the number-of-comments-per-post distribution look like? **IMPORTANT NOTE**: Beyond this point, we start to "follow the data" as we analyse things, and we do so in a time-relative way (e.g. comparing the last N days of posts to historical data). As Big Data Malaysia is a living breathing group, the data set is a living breathing thing, so things may change, and the conclusions informing the analysis here may suffer *logic rot*.
###Code
comments_threads = [data['comments'] for data in big_data if 'comments' in data]
count_of_posts_with_no_comments = len(big_data) - len(comments_threads)
comments_counts = [0] * count_of_posts_with_no_comments
comments_counts += [len(flatten_comments_pages(thread)) for thread in comments_threads]
import matplotlib.pyplot as plt
plt.hist(comments_counts, bins=max(comments_counts))
plt.title("Comments-per-post Histogram")
plt.xlabel("Comments per post")
plt.ylabel("Frequency")
plt.show()
###Output
_____no_output_____
###Markdown
This sort of adds up intuitively; posts with long comment threads will be rare, though from experience with this forum it does not seem right to conclude that there is a lot of posting going on with no interaction... the community is a bit more engaged than that. But since this is Facebook, comments aren't the only way of interacting with a post. There's also the wonderful 'Like'.
###Code
likes_threads = [data['likes']['data'] for data in big_data if 'likes' in data]
count_of_posts_with_no_likes = len(big_data) - len(likes_threads)
likes_counts = [0] * count_of_posts_with_no_likes
likes_counts += [len(thread) for thread in likes_threads]
plt.hist(likes_counts, bins=max(likes_counts))
plt.title("Likes-per-post Histogram")
plt.xlabel("Likes per post")
plt.ylabel("Frequency")
plt.show()
###Output
_____no_output_____
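###Markdown
To put a number on the zero bin in the histogram above, here is the share of posts that never received a Like, using the counts we just computed:
###Code
# fraction of all posts with zero Likes
float(count_of_posts_with_no_likes) / len(big_data)
###Output
_____no_output_____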
###Markdown
Note that the above does not include Likes on Comments made on posts; only Likes made on posts themselves are counted. While this paints the picture of a more engaged community, it still doesn't feel quite right. It seems unusual these days to find a post go by without a Like or two. I have a hunch that the zero-like posts are skewed a bit to the earlier days of the group. To dig into that we'll need to start playing with timestamps. Personally I prefer to deal with time as UTC epoch seconds, and surprisingly it seems I need to write my own helper function for this.
###Code
import datetime
import dateutil.parser
import pytz
def epoch_utc_s(date_string):
dt_local = dateutil.parser.parse(str(date_string))
dt_utc = dt_local.astimezone(pytz.utc)
nineteenseventy = datetime.datetime(1970,1,1)
epoch_utc = dt_utc.replace(tzinfo=None) - nineteenseventy
return int(epoch_utc.total_seconds())
posts_without_likes = [data for data in big_data if 'likes' not in data]
posts_with_likes = [data for data in big_data if 'likes' in data]
timestamps_of_posts_without_likes = [epoch_utc_s(post['created_time']) for post in posts_without_likes]
timestamps_of_posts_with_likes = [epoch_utc_s(post['created_time']) for post in posts_with_likes]
import numpy
median_epoch_liked = int(numpy.median(timestamps_of_posts_with_likes))
median_epoch_non_liked = int(numpy.median(timestamps_of_posts_without_likes))
print "Median timestamp of posts without likes: {} ({})".format(datetime.datetime.fromtimestamp(median_epoch_non_liked),
median_epoch_non_liked)
print "Median timestamp of posts with likes: {} ({})".format(datetime.datetime.fromtimestamp(median_epoch_liked),
median_epoch_liked)
###Output
Median timestamp of posts without likes: 2014-04-25 03:08:38 (1398359318)
Median timestamp of posts with likes: 2014-08-29 03:13:29 (1409246009)
###Markdown
In general it seems my hunch may have been right, but it will be clearer if we plot it.
###Code
plt.hist(timestamps_of_posts_without_likes, alpha=0.5, label='non-Liked posts')
plt.hist(timestamps_of_posts_with_likes, alpha=0.5, label='Liked posts')
plt.title("Liked vs non-Liked posts")
plt.xlabel("Time (epoch UTC s)")
plt.ylabel("Count of posts")
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
This is looking pretty legit now. We can see that lately there's been a significant uptick in the number of posts, and an uptick in the ratio of posts that receive at least one Like. As another sanity check, we can revisit the *Likes-per-post Histogram*, but only include recent posts. While we're at it we might as well do the same for the *Comments-per-post Histogram*.
###Code
def less_than_n_days_ago(date_string, n):
query_date = epoch_utc_s(date_string)
cutoff_epoch = epoch_utc_s(datetime.datetime.now(pytz.utc) - datetime.timedelta(days=n))
return query_date > cutoff_epoch
# try changing this variable then re-running this cell...
days_ago = 30
# create a slice of our big_data containing only posts created n days ago
recent_data = [data for data in big_data if less_than_n_days_ago(data['created_time'], days_ago)]
# plot the Likes-per-post Histogram for recent_data
recent_likes_threads = [data['likes']['data'] for data in recent_data if 'likes' in data]
recent_count_of_posts_with_no_likes = len(recent_data) - len(recent_likes_threads)
recent_likes_counts = [0] * recent_count_of_posts_with_no_likes
recent_likes_counts += [len(thread) for thread in recent_likes_threads]
plt.hist(recent_likes_counts, bins=max(recent_likes_counts))
plt.title("Likes-per-post Histogram (last {} days)".format(days_ago))
plt.xlabel("Likes per post")
plt.ylabel("Frequency")
plt.show()
# plot the Comment-per-post Histogram for recent_data
recent_comments_threads = [data['comments'] for data in recent_data if 'comments' in data]
recent_count_of_posts_with_no_comments = len(recent_data) - len(recent_comments_threads)
recent_comments_counts = [0] * recent_count_of_posts_with_no_comments
recent_comments_counts += [len(flatten_comments_pages(thread)) for thread in recent_comments_threads]
plt.hist(recent_comments_counts, bins=max(recent_comments_counts))
plt.title("Comments-per-post Histogram (last {} days)".format(days_ago))
plt.xlabel("Comments per post")
plt.ylabel("Frequency")
plt.show()
###Output
_____no_output_____ |
model_notebooks/master-scikit-learn-models.ipynb | ###Markdown
Master Scikit-Learn Models for Building Energy Modelling - Clayton Miller ([email protected]) and Kairat Talentbekov. This notebook is the main model training/testing notebook, based on the prototypes.
###Code
import pandas as pd
import os
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import TimeSeriesSplit
###Output
_____no_output_____
###Markdown
Loading the main temporal and metadata files
###Code
meta = pd.read_csv("../input/meta_open.csv", index_col='uid', parse_dates=["datastart","dataend"], dayfirst=True)
temporal = pd.read_csv("../input/temp_open_utc_complete.csv", index_col='timestamp', parse_dates=True).tz_localize('utc')
meta.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Index: 507 entries, Office_Abbey to UnivLab_Tracy
Data columns (total 19 columns):
dataend 507 non-null datetime64[ns]
datastart 507 non-null datetime64[ns]
energystarscore 26 non-null float64
heatingtype 124 non-null object
industry 507 non-null object
mainheatingtype 122 non-null object
numberoffloors 124 non-null float64
occupants 105 non-null float64
primaryspaceusage 507 non-null object
rating 131 non-null object
sqft 507 non-null float64
sqm 507 non-null float64
subindustry 507 non-null object
timezone 507 non-null object
yearbuilt 313 non-null object
nickname 507 non-null object
primaryspaceuse_abbrev 507 non-null object
newweatherfilename 507 non-null object
annualschedule 482 non-null object
dtypes: datetime64[ns](2), float64(5), object(12)
memory usage: 79.2+ KB
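###Markdown
The meter readings themselves live in `temporal` (rows are UTC timestamps, columns are buildings); a quick look at its shape before moving on:
###Code
# rows are timestamps, columns are buildings
temporal.shape
###Output
_____no_output_____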
###Markdown
Regression models from the scikit-learn library
###Code
# All models types
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.dummy import DummyRegressor
from sklearn.tree import ExtraTreeRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import PassiveAggressiveRegressor
from sklearn.linear_model import RANSACRegressor
from sklearn.linear_model import SGDRegressor
from sklearn.linear_model import TheilSenRegressor
# Make array of models. Each model is an array of two elements.
# First element is a model-name, second is a model itself
models = [#['RandomForestRegressor', RandomForestRegressor(n_estimators = 1000, random_state = 42)],
#['AdaBoostRegressor', AdaBoostRegressor(n_estimators = 1000, random_state = 42)],
#['BaggingRegressor', BaggingRegressor(n_estimators = 1000, random_state = 42)],
#['DecisionTreeRegressor', DecisionTreeRegressor(random_state = 42)],
#['DummyRegressor', DummyRegressor()],
#['ExtraTreeRegressor', ExtraTreeRegressor(random_state = 42)],
#['ExtraTreesRegressor', ExtraTreesRegressor(n_estimators = 1000, random_state = 42)],
['GaussianProcessRegressor', GaussianProcessRegressor(random_state = 42)],
# ['GradientBoostingRegressor', GradientBoostingRegressor(n_estimators = 1000, random_state = 42)],
# ['HuberRegressor', HuberRegressor()],
# ['KNeighborsRegressor', KNeighborsRegressor()],
# ['MLPRegressor', MLPRegressor(random_state = 42)],
# ['PassiveAggressiveRegressor', PassiveAggressiveRegressor(random_state = 42)],
# ['RANSACRegressor', RANSACRegressor(random_state = 42)],
# ['SGDRegressor', SGDRegressor(random_state = 42)],
# ['TheilSenRegressor', TheilSenRegressor(random_state = 42)]
]
###Output
_____no_output_____
###Markdown
Functions for loading and processing data
###Code
def load_energy_data(meta, singlebuilding):
# Get Data
single_timezone = meta.T[singlebuilding].timezone
single_start = meta.T[singlebuilding].datastart
single_end = meta.T[singlebuilding].dataend
return pd.DataFrame(temporal[singlebuilding].tz_convert(single_timezone).truncate(before=single_start,after=single_end))
def load_weather_data(meta, singlebuilding, single_building_data, weatherpoint):
# Get weather file
single_timezone = meta.T[singlebuilding].timezone
weatherfilename = meta.T[singlebuilding].newweatherfilename
#print("Weatherfile: "+weatherfilename)
weather = pd.read_csv(os.path.join("../input/",weatherfilename),index_col='timestamp', parse_dates=True, na_values='-9999')
weather = weather.tz_localize(single_timezone, ambiguous = 'infer')
point_data = pd.DataFrame(weather[[col for col in weather.columns if weatherpoint in col]]).resample("H").mean()
return point_data.reindex(pd.DatetimeIndex(start=point_data.index[0], periods=len(single_building_data), freq="H")).fillna(method='ffill').fillna(method='bfill')
def load_seasonal_schedule(meta, singlebuilding, single_building_data):
schedulefilename = meta.T[singlebuilding].annualschedule
single_timezone = meta.T[singlebuilding].timezone
schedule = pd.read_csv(os.path.join("../input/",schedulefilename), header=None, parse_dates=True, index_col=0)
schedule = schedule.tz_localize(single_timezone, ambiguous = 'infer')
schedule.columns = ["seasonal"]
return schedule.reindex(pd.DatetimeIndex(start=schedule.index[0], periods=len(single_building_data), freq="H")).fillna(method='ffill').fillna(method='bfill')
###Output
_____no_output_____
###Markdown
Demo of a single building
###Code
meta.index.get_loc("Office_Pat")
#for building in meta.
singlebuilding = "Office_Benthe"
single_building_data = load_energy_data(meta, singlebuilding)
single_building_data.info()
outdoor_temp = load_weather_data(meta, singlebuilding, single_building_data, "Temperature")
outdoor_humidity = load_weather_data(meta, singlebuilding, single_building_data, "Humidity")
outdoor_humidity.info()
schedule = load_seasonal_schedule(meta, singlebuilding, single_building_data)
schedule.info()
months = np.array([single_building_data.index.month.unique()])[0]
n_splits = 3
tscv = TimeSeriesSplit(n_splits=n_splits)
tscv
for train_index, test_index in tscv.split(months):
month_train, month_test = months[train_index], months[test_index]
print(month_train, month_test)
np.concatenate([months[0:3], months[4:7], months[8:11]])
np.array([months[3], months[7], months[11]])
def create_train_test_indices(months):
train_test_lists = []
#Get time-series split version
n_splits = 3
tscv = TimeSeriesSplit(n_splits=n_splits)
for train_index, test_index in tscv.split(months):
month_train, month_test = months[train_index], months[test_index]
train_test_lists.append([month_train, month_test])
#Add the 'every-fourth-month' version: train on months 1-3, 5-7 and 9-11, test on months 4, 8 and 12
train_test_lists.append([np.concatenate([months[0:3], months[4:7],
months[8:11]]), np.array([months[3], months[7], months[11]])])
return train_test_lists
train_test_lists = create_train_test_indices(months)
for train_index, test_index in train_test_lists:
print(train_index, test_index)
months = train_index
# data = single_building_data[single_building_data.index.month.isin(months)]
# features = pd.concat((pd.get_dummies(data.index.hour),
# pd.get_dummies(data.index.dayofweek),
# pd.Series(outdoor_temp[outdoor_temp.index.month.isin(months)].TemperatureC.values),
# pd.Series(outdoor_humidity[outdoor_humidity.index.month.isin(months)].Humidity.values),
# pd.get_dummies(schedule[schedule.index.month.isin(months)][1].values)), axis=1)
# #features = features.fillna(method='ffill').fillna(method='bfill')
# #features = np.array(features)
# labels = data[singlebuilding]#.values
#data.info()
#labels
data = single_building_data[single_building_data.index.month.isin(months)][singlebuilding]
data.index.hour
data = pd.merge(pd.DataFrame({"energy":data}), outdoor_temp, right_index=True, left_index=True)
data = pd.merge(data, outdoor_humidity, right_index=True, left_index=True)
data = pd.merge(data, pd.get_dummies(schedule), right_index=True, left_index=True)
# data = pd.merge(data, pd.get_dummies(data.index.hour), right_index=True, left_index=True)
data.columns
def get_features_and_labels(single_building_data, singlebuilding, outdoor_temp, outdoor_humidity, schedule, months):
data = single_building_data[single_building_data.index.month.isin(months)][singlebuilding]
data = pd.merge(pd.DataFrame({"energy":data}), outdoor_temp, right_index=True, left_index=True)
data = pd.merge(data, outdoor_humidity, right_index=True, left_index=True)
data = pd.merge(data, pd.get_dummies(schedule), right_index=True, left_index=True)
features = pd.concat((pd.get_dummies(data.index.hour),
pd.get_dummies(data.index.dayofweek),
data.drop(["energy"], axis=1).reset_index(drop=True)),axis=1)
features = features.fillna(method='ffill').fillna(method='bfill')
labels = data["energy"].values
features = np.array(features)
return features, labels
train_features, train_labels = get_features_and_labels(single_building_data, singlebuilding, outdoor_temp, outdoor_humidity, schedule, train_index)
test_features, test_labels = get_features_and_labels(single_building_data, singlebuilding, outdoor_temp, outdoor_humidity, schedule, test_index)
#train_features.info()
len(train_labels)
testmodel = DummyRegressor()
testmodel.fit(train_features, train_labels)
predictions = testmodel.predict(test_features)
predictions
###Output
_____no_output_____
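###Markdown
Before wrapping everything in a loop, we can sanity-check the `DummyRegressor` predictions above with the same error metrics that `createMetrics` uses below (MAPE, NMBE, CVRSME and R-squared). This is just a quick preview and assumes `predictions` and `test_labels` from the previous cells are still in memory.
###Code
errors = abs(predictions - test_labels)
MAPE = 100 * np.mean(errors / test_labels)
NMBE = 100 * (sum(test_labels - predictions) / (pd.Series(test_labels).count() * np.mean(test_labels)))
CVRSME = 100 * ((sum((test_labels - predictions)**2) / (pd.Series(test_labels).count()-1))**(0.5)) / np.mean(test_labels)
RSQUARED = r2_score(test_labels, predictions)
MAPE, NMBE, CVRSME, RSQUARED
###Output
_____no_output_____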
###Markdown
Loop through
###Code
def createMetrics(modelName, model, buildingnames):
print('\n\n' + modelName + '\n_____________')
# buidingindex
buildingindex = 0
for singlebuilding in buildingnames:
buildingindex+=1
print("Modelling: " + singlebuilding)
# Get energy data
single_building_data = load_energy_data(meta, singlebuilding)
# Get weather and schedules
outdoor_temp = load_weather_data(meta, singlebuilding, single_building_data, "Temperature")
outdoor_humidity = load_weather_data(meta, singlebuilding, single_building_data, "Humidity")
schedule = load_seasonal_schedule(meta, singlebuilding, single_building_data)
# Test/Train cycle
months = np.array([single_building_data.index.month.unique()])[0]
train_test_lists = create_train_test_indices(months)
index = 0
for train_index, test_index in train_test_lists:
# Get Training Data
train_features, train_labels = get_features_and_labels(single_building_data, singlebuilding, outdoor_temp, outdoor_humidity, schedule, train_index)
# Create test data array
test_features, test_labels = get_features_and_labels(single_building_data, singlebuilding, outdoor_temp, outdoor_humidity, schedule, test_index)
# Train the model on training data
mainmodel = model
mainmodel.fit(train_features, train_labels);
# Use the forest's predict method on the test data
predictions = mainmodel.predict(test_features)
# Calculate the absolute errors
errors = abs(predictions - test_labels)
# Calculate mean absolute percentage error (MAPE)
MAPE = 100 * np.mean((errors / test_labels))
# Normalized mean bias error (NMBE), in percent
NMBE = 100 * (sum(test_labels - predictions) / (pd.Series(test_labels).count() * np.mean(test_labels)))
# Coefficient of variation of the root mean squared error (CVRSME), in percent
CVRSME = 100 * ((sum((test_labels - predictions)**2) / (pd.Series(test_labels).count()-1))**(0.5)) / np.mean(test_labels)
# Coefficient of determination
RSQUARED = r2_score(test_labels, predictions)
index+=1
if(buildingindex == 1):
temporary = pd.DataFrame(columns=["building", "MAPE", "NMBE", "CVRSME", "RSQUARED"])
temporary.to_csv('../results/' + modelName + '_metrics_cross_validation_' + str(index) + '.csv', index=False)
# Read dataframe with particular step (cross validation)
metrics_prev = pd.read_csv('../results/' + modelName + '_metrics_cross_validation_' + str(index) + '.csv')
df = pd.DataFrame([[singlebuilding, MAPE, NMBE, CVRSME, RSQUARED]],columns=['building','MAPE','NMBE','CVRSME','RSQUARED'])
# Append new row
metrics = pd.concat([df, metrics_prev])
metrics.to_csv('../results/' + modelName + '_metrics_cross_validation_' + str(index) + '.csv', index=False)
buildingnames = meta.dropna(subset=['annualschedule']).index
buildingnames.get_loc("Office_Pat")
# for singlebuilding in buildingnames[137:]:
# print(singlebuilding)
MAPE_data = {}
RSQUARED_data = {}
NMBE_data = {}
CVRSME_data = {}
for elem in models:
# modelName = elem[0], model = elem[1]
createMetrics(elem[0], elem[1], buildingnames)
###Output
GaussianProcessRegressor
_____________
Modelling: Office_Abbey
Modelling: Office_Abigail
Modelling: Office_Al
Modelling: Office_Alannah
Modelling: Office_Aliyah
Modelling: Office_Allyson
Modelling: Office_Alyson
Modelling: Office_Amelia
Modelling: Office_Amelie
Modelling: Office_Anastasia
Modelling: Office_Andrea
Modelling: Office_Angelica
Modelling: Office_Angelina
Modelling: Office_Angelo
Modelling: Office_Annika
Modelling: Office_Ashanti
Modelling: Office_Asher
Modelling: Office_Aubrey
Modelling: Office_Autumn
Modelling: Office_Ava
Modelling: Office_Ayden
Modelling: Office_Ayesha
Modelling: Office_Benjamin
Modelling: Office_Benthe
Modelling: Office_Bianca
Modelling: Office_Bobbi
Modelling: Office_Brian
Modelling: Office_Bryon
Modelling: Office_Caleb
Modelling: Office_Cameron
Modelling: Office_Carissa
Modelling: Office_Carolina
Modelling: Office_Catherine
Modelling: Office_Cecelia
Modelling: Office_Charles
Modelling: Office_Clarissa
Modelling: Office_Clifton
Modelling: Office_Clinton
Modelling: Office_Cody
Modelling: Office_Colby
Modelling: Office_Conrad
Modelling: Office_Cora
Modelling: Office_Corbin
Modelling: Office_Cristina
Modelling: Office_Curt
Modelling: Office_Dawn
Modelling: Office_Dorian
Modelling: Office_Eddie
Modelling: Office_Eileen
Modelling: Office_Elena
Modelling: Office_Elizabeth
Modelling: Office_Ellie
Modelling: Office_Elliot
Modelling: Office_Ellis
Modelling: Office_Emer
Modelling: Office_Emerald
Modelling: Office_Erik
Modelling: Office_Evelyn
Modelling: Office_Gabriela
Modelling: Office_Garman
Modelling: Office_Garrett
Modelling: Office_Gemma
Modelling: Office_Georgia
Modelling: Office_Gisselle
Modelling: Office_Gladys
Modelling: Office_Glenda
Modelling: Office_Glenn
Modelling: Office_Gloria
Modelling: Office_Guillermo
Modelling: Office_Gustavo
Modelling: Office_Jackie
Modelling: Office_Jackson
Modelling: Office_Jan
Modelling: Office_Javon
Modelling: Office_Jayden
Modelling: Office_Jeanne
Modelling: Office_Jerry
Modelling: Office_Jesus
Modelling: Office_Jett
Modelling: Office_Joan
Modelling: Office_John
Modelling: Office_Joni
Modelling: Office_Jude
Modelling: Office_Lane
Modelling: Office_Leland
Modelling: Office_Lena
Modelling: Office_Lesa
Modelling: Office_Lillian
Modelling: Office_Louise
Modelling: Office_Luann
Modelling: Office_Mada
Modelling: Office_Madeleine
Modelling: Office_Madisyn
Modelling: Office_Malik
Modelling: Office_Marc
Modelling: Office_Marcia
Modelling: Office_Marcus
Modelling: Office_Marianne
Modelling: Office_Marilyn
Modelling: Office_Marion
Modelling: Office_Mark
Modelling: Office_Marla
Modelling: Office_Marlon
Modelling: Office_Martha
Modelling: Office_Martin
Modelling: Office_Marvin
Modelling: Office_Mary
Modelling: Office_Maryann
Modelling: Office_Mason
Modelling: Office_Mat
Modelling: Office_Matthew
Modelling: Office_Max
Modelling: Office_Maximus
Modelling: Office_Maya
Modelling: Office_Megan
Modelling: Office_Melinda
Modelling: Office_Mercedes
Modelling: Office_Michael
Modelling: Office_Micheal
Modelling: Office_Mick
Modelling: Office_Mikayla
Modelling: Office_Milton
Modelling: Office_Mohammed
Modelling: Office_Moises
Modelling: Office_Monty
Modelling: Office_Morgan
Modelling: Office_Moses
Modelling: Office_Muhammad
Modelling: Office_Myron
Modelling: Office_Natasha
Modelling: Office_Nelson
Modelling: Office_Noel
Modelling: Office_Paige
Modelling: Office_Pam
Modelling: Office_Pamela
Modelling: Office_Pasquale
Modelling: Office_Pat
Modelling: Office_Patricia
Modelling: Office_Paula
Modelling: Office_Paulette
Modelling: Office_Paulina
Modelling: Office_Pauline
Modelling: Office_Penny
Modelling: Office_Perla
Modelling: Office_Phebian
Modelling: Office_Precious
Modelling: Office_Scottie
Modelling: Office_Shari
Modelling: Office_Shawnette
Modelling: Office_Shelly
Modelling: Office_Sinead
Modelling: Office_Skyler
Modelling: Office_Stella
Modelling: Office_Terrell
Modelling: Office_Tod
Modelling: Office_Travis
Modelling: PrimClass_Angel
Modelling: PrimClass_Angela
Modelling: PrimClass_Jacob
Modelling: PrimClass_Jacqueline
Modelling: PrimClass_Jacquelyn
Modelling: PrimClass_Jaden
Modelling: PrimClass_Jaiden
Modelling: PrimClass_Jake
Modelling: PrimClass_Jamal
Modelling: PrimClass_Jamie
Modelling: PrimClass_Jane
Modelling: PrimClass_Janelle
Modelling: PrimClass_Janet
Modelling: PrimClass_Janice
Modelling: PrimClass_Janie
Modelling: PrimClass_Janis
Modelling: PrimClass_Janiya
Modelling: PrimClass_Jaqueline
Modelling: PrimClass_Jarrett
Modelling: PrimClass_Jasmine
Modelling: PrimClass_Javier
Modelling: PrimClass_Jaxson
Modelling: PrimClass_Jayda
Modelling: PrimClass_Jayla
Modelling: PrimClass_Jaylin
Modelling: PrimClass_Jaylinn
Modelling: PrimClass_Jayson
Modelling: PrimClass_Jazmin
Modelling: PrimClass_Jazmine
Modelling: PrimClass_Jean
Modelling: PrimClass_Jeanette
Modelling: PrimClass_Jeanine
Modelling: PrimClass_Jeannine
Modelling: PrimClass_Jediah
Modelling: PrimClass_Jeff
Modelling: PrimClass_Jeffery
Modelling: PrimClass_Jeffrey
Modelling: PrimClass_Jenna
Modelling: PrimClass_Jennie
Modelling: PrimClass_Jennifer
Modelling: PrimClass_Jensen
Modelling: PrimClass_Jeremy
Modelling: PrimClass_Jermaine
Modelling: PrimClass_Jerome
Modelling: PrimClass_Jesse
Modelling: PrimClass_Jessie
Modelling: PrimClass_Jill
Modelling: PrimClass_Jim
Modelling: PrimClass_Jimmie
Modelling: PrimClass_Joanna
Modelling: PrimClass_Jocelyn
Modelling: PrimClass_Jodie
Modelling: PrimClass_Jody
Modelling: PrimClass_Joel
Modelling: PrimClass_Joey
Modelling: PrimClass_Johanna
Modelling: PrimClass_Johnathan
Modelling: PrimClass_Johnathon
Modelling: PrimClass_Johnnie
Modelling: PrimClass_Jolie
Modelling: PrimClass_Jon
Modelling: PrimClass_Jonathon
Modelling: PrimClass_Jose
Modelling: PrimClass_Joselyn
Modelling: PrimClass_Josephine
Modelling: PrimClass_Josue
Modelling: PrimClass_Juanita
Modelling: PrimClass_Judith
Modelling: PrimClass_Judy
Modelling: PrimClass_Julian
Modelling: PrimClass_Julianna
Modelling: PrimClass_Julianne
Modelling: PrimClass_Julio
Modelling: PrimClass_Julius
Modelling: PrimClass_Justice
Modelling: PrimClass_Justin
Modelling: PrimClass_Ulysses
Modelling: PrimClass_Uma
Modelling: PrimClass_Umar
Modelling: PrimClass_Uriah
Modelling: UnivClass_Abby
Modelling: UnivClass_Abraham
Modelling: UnivClass_Adrienne
Modelling: UnivClass_Aidan
Modelling: UnivClass_Alec
Modelling: UnivClass_Alejandra
Modelling: UnivClass_Alexander
Modelling: UnivClass_Alexandra
Modelling: UnivClass_Alexandria
Modelling: UnivClass_Alexus
Modelling: UnivClass_Alfredo
Modelling: UnivClass_Alicia
Modelling: UnivClass_Allen
Modelling: UnivClass_Alvin
Modelling: UnivClass_Amari
Modelling: UnivClass_Amya
Modelling: UnivClass_Anamaria
Modelling: UnivClass_Andy
Modelling: UnivClass_Anika
Modelling: UnivClass_Annabella
Modelling: UnivClass_Anne
Modelling: UnivClass_Annmarie
Modelling: UnivClass_Antoinette
Modelling: UnivClass_Anya
Modelling: UnivClass_Aoibhe
Modelling: UnivClass_Archie
Modelling: UnivClass_Ariana
Modelling: UnivClass_Armando
Modelling: UnivClass_Axel
Modelling: UnivClass_Ayanna
Modelling: UnivClass_Beatrice
Modelling: UnivClass_Bob
Modelling: UnivClass_Boyd
Modelling: UnivClass_Brett
Modelling: UnivClass_Caitlyn
Modelling: UnivClass_Calvin
Modelling: UnivClass_Camden
Modelling: UnivClass_Candy
Modelling: UnivClass_Caoimhe
Modelling: UnivClass_Carl
Modelling: UnivClass_Carolyn
Modelling: UnivClass_Cathleen
Modelling: UnivClass_Celia
Modelling: UnivClass_Chandler
Modelling: UnivClass_Charlie
Modelling: UnivClass_Christian
Modelling: UnivClass_Ciara
Modelling: UnivClass_Clay
Modelling: UnivClass_Clifford
Modelling: UnivClass_Colette
Modelling: UnivClass_Conner
Modelling: UnivClass_Conor
Modelling: UnivClass_Craig
Modelling: UnivClass_Jadon
Modelling: UnivClass_Maddison
Modelling: UnivClass_Nash
Modelling: UnivClass_Nathaniel
Modelling: UnivClass_Nayeli
Modelling: UnivClass_Nelly
Modelling: UnivClass_Nicholas
Modelling: UnivClass_Nickolas
Modelling: UnivClass_Nishka
Modelling: UnivClass_Noreen
Modelling: UnivClass_Pandora
Modelling: UnivClass_Pete
Modelling: UnivClass_Peter
Modelling: UnivClass_Philip
Modelling: UnivClass_Phyllis
Modelling: UnivClass_Sam
Modelling: UnivClass_Seb
Modelling: UnivClass_Serenity
Modelling: UnivClass_Shawna
Modelling: UnivClass_Sheila
|
pyttitools-PYTTI.ipynb | ###Markdown
PyTTI-Tools Colab NotebookIf you are using PyTTI-tools from a local jupyter server, you might have a better experience with the "_local" notebook: https://github.com/pytti-tools/pytti-notebook/blob/main/pyttitools-PYTTI_local.ipynbIf you are planning to use google colab with the "local runtime" option: this is still the notebook you want. A very brief history of this notebookThe tools and techniques below were pioneered in 2021 by a diverse and distributed collection of amazingly talented ML practitioners, researchers, and artists. The short version of this history is that Katherine Crowson ([@RiversHaveWings](https://twitter.com/RiversHaveWings)) published a notebook inspired by work done by [@advadnoun](https://twitter.com/advadnoun). Katherine's notebook spawned a litany of variants, each with their own twist on the technique or adding a feature to someone else's work. Henry Rachootin ([@sportsracer48](https://twitter.com/sportsracer48)) collected several of the most interesting notebooks and stuck the important bits together with bublegum and scotch tape. Thus was born PyTTI, and there was much rejoicing in sportsracer48's patreon, where it was shared in closed beta for several months. David Marx ([@DigThatData](https://twitter.com/DigThatData)) offered to help tidy up the mess, and sportsracer48 encouraged him to run wild with it. David's contributions snowballed into [PyTTI-Tools](https://github.com/pytti-tools), the engine this notebook sits on top of!If you would like to contribute, receive support, or even just suggest an improvement to the documentation, our issue tracker can be found here: https://github.com/pytti-tools/pytti-core/issues InstructionsDetailed documentation can be found here: https://pytti-tools.github.io/pytti-book/intro.html* Syntax for text prompts and scenes: https://pytti-tools.github.io/pytti-book/SceneDSL.html* Descriptions of all settings: https://pytti-tools.github.io/pytti-book/Settings.html Step 1: SetupRun the cells in this section once for each runtime, or after a factory reset.
###Code
# This cell should only be run once
drive_mounted = False
gdrive_fpath = '.'
#@title 1.1 Mount google drive (optional)
#@markdown Mounting your drive is optional but recommended. If your drive is mounted, you can even recover your
#@markdown work after google randomly kicks you out of the runtime.
from pathlib import Path
mount_gdrive = False # @param{type:"boolean"}
if mount_gdrive and not drive_mounted:
from google.colab import drive
gdrive_mountpoint = '/content/drive/' #@param{type:"string"}
gdrive_subdirectory = 'MyDrive/pytti_tools' #@param{type:"string"}
gdrive_fpath = str(Path(gdrive_mountpoint) / gdrive_subdirectory)
try:
drive.mount(gdrive_mountpoint, force_remount = True)
!mkdir -p {gdrive_fpath}
%cd {gdrive_fpath}
drive_mounted = True
except OSError:
print(
"\n\n-----[PYTTI-TOOLS]-------\n\n"
"If you received a scary OSError and your drive"
" was already mounted, ignore it."
"\n\n-----[PYTTI-TOOLS]-------\n\n"
)
raise
#@title 1.2 NVIDIA-SMI (optional)
#@markdown View information about your runtime GPU.
#@markdown Google will connect you to an industrial strength GPU, which is needed to run
#@markdown this notebook. You can also disable error checking on your GPU to get some
#@markdown more VRAM, at a marginal cost to stability. You will have to restart the runtime after
#@markdown disabling it.
enable_error_checking = False#@param {type:"boolean"}
if enable_error_checking:
!nvidia-smi
else:
!nvidia-smi
!nvidia-smi -i 0 -e 0
#@title 1.3 Install everything else
#@markdown Run this cell on a fresh runtime to install the libraries and modules.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
def flush_reqs():
!rm -r pytti-core
def install_everything():
if path_exists('./pytti-core'):
try:
flush_reqs()
except Exception as ex:
logger.warning(
str(ex)
)
logger.warning(
"A `pytti` folder already exists and could not be deleted."
"If you encounter problems, try deleting that folder and trying again."
"Please report this and any other issues here: "
"https://github.com/pytti-tools/pytti-notebook/issues/new",
exc_info=True)
!git clone --recurse-submodules -j8 https://github.com/pytti-tools/pytti-core
!pip install kornia pytorch-lightning transformers
!pip install jupyter loguru einops PyGLM ftfy regex tqdm hydra-core exrex
!pip install seaborn adjustText bunch matplotlib-label-lines
!pip install --upgrade gdown
!pip install ./pytti-core/vendor/AdaBins
!pip install ./pytti-core/vendor/CLIP
!pip install ./pytti-core/vendor/GMA
!pip install ./pytti-core/vendor/taming-transformers
!pip install ./pytti-core
!mkdir -p images_out
!mkdir -p videos
from pytti.Notebook import change_tqdm_color
change_tqdm_color()
try:
from adjustText import adjust_text
import pytti, torch
everything_installed = True
except ModuleNotFoundError:
everything_installed = False
force_install = False #@param{type:"boolean"}
if not everything_installed or force_install:
install_everything()
elif everything_installed:
from pytti.Notebook import change_tqdm_color
change_tqdm_color()
###Output
_____no_output_____
###Markdown
Step 2: Configure Experiment. Edit the parameters, or load saved parameters, then run the model. * https://pytti-tools.github.io/pytti-book/SceneDSL.html * https://pytti-tools.github.io/pytti-book/Settings.html
###Code
#@title #2.1 Parameters:
#@markdown ---
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color, get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import glob, json, random, re, math
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
#these are used to make the defaults look pretty
model_default = None
random_seed = None
all = math.inf
derive_from_init_aspect_ratio = -1
def define_parameters():
locals_before = locals().copy()
#@markdown ###Prompts:
scenes = "deep space habitation ring made of glass | galactic nebula | wow! space is full of fractal creatures darting around everywhere like fireflies"#@param{type:"string"}
scene_prefix = "astrophotography #pixelart | image credit nasa | space full of cybernetic neon:3_galactic nebula | isometric pixelart by Sachin Teng | "#@param{type:"string"}
scene_suffix = "| satellite image:-1:-.95 | text:-1:-.95 | anime:-1:-.95 | watermark:-1:-.95 | backyard telescope:-1:-.95 | map:-1:-.95"#@param{type:"string"}
interpolation_steps = 0#@param{type:"number"}
steps_per_scene = 60100#@param{type:"raw"}
#@markdown ---
#@markdown ###Image Prompts:
direct_image_prompts = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Initial image:
init_image = ""#@param{type:"string"}
direct_init_weight = ""#@param{type:"string"}
semantic_init_weight = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Image:
#@markdown Use `image_model` to select how the model will encode the image
image_model = "Limited Palette" #@param ["VQGAN", "Limited Palette", "Unlimited Palette"]
#@markdown image_model | description | strengths | weaknesses
#@markdown --- | -- | -- | --
#@markdown VQGAN | classic VQGAN image | smooth images | limited datasets, slow, VRAM intensive
#@markdown Limited Palette | pytti differentiable palette | fast, VRAM scales with `palettes` | pixel images
#@markdown Unlimited Palette | simple RGB optimization | fast, VRAM efficient | pixel images
#@markdown The output image resolution will be `width` $\times$ `pixel_size` by height $\times$ `pixel_size` pixels.
#@markdown The easiest way to run out of VRAM is to select `image_model` VQGAN without reducing
#@markdown `pixel_size` to $1$.
#@markdown For `animation_mode: 3D` the minimum resolution is about 450 by 400 pixels.
width = 180#@param {type:"raw"}
height = 112#@param {type:"raw"}
pixel_size = 4#@param{type:"number"}
smoothing_weight = 0.02#@param{type:"number"}
#@markdown `VQGAN` specific settings:
vqgan_model = "sflckr" #@param ["imagenet", "coco", "wikiart", "sflckr", "openimages"]
#@markdown `Limited Palette` specific settings:
random_initial_palette = False#@param{type:"boolean"}
palette_size = 6#@param{type:"number"}
palettes = 9#@param{type:"number"}
gamma = 1#@param{type:"number"}
hdr_weight = 0.01#@param{type:"number"}
palette_normalization_weight = 0.2#@param{type:"number"}
show_palette = False #@param{type:"boolean"}
target_palette = ""#@param{type:"string"}
lock_palette = False #@param{type:"boolean"}
#@markdown ---
#@markdown ###Animation:
animation_mode = "3D" #@param ["off","2D", "3D", "Video Source"]
sampling_mode = "bicubic" #@param ["bilinear","nearest","bicubic"]
infill_mode = "wrap" #@param ["mirror","wrap","black","smear"]
pre_animation_steps = 100#@param{type:"number"}
steps_per_frame = 50#@param{type:"number"}
frames_per_second = 12#@param{type:"number"}
#@markdown ---
#@markdown ###Stabilization Weights:
direct_stabilization_weight = ""#@param{type:"string"}
semantic_stabilization_weight = ""#@param{type:"string"}
depth_stabilization_weight = ""#@param{type:"string"}
edge_stabilization_weight = ""#@param{type:"string"}
#@markdown `flow_stabilization_weight` is used for `animation_mode: 3D` and `Video Source`
flow_stabilization_weight = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Video Tracking:
#@markdown Only for `animation_mode: Video Source`.
video_path = ""#@param{type:"string"}
frame_stride = 1#@param{type:"number"}
reencode_each_frame = True #@param{type:"boolean"}
flow_long_term_samples = 1#@param{type:"number"}
#@markdown ---
#@markdown ###Image Motion:
translate_x = "-1700*sin(radians(1.5))" #@param{type:"string"}
translate_y = "0" #@param{type:"string"}
#@markdown `..._3d` is only used in 3D mode.
translate_z_3d = "(50+10*t)*sin(t/10*pi)**2" #@param{type:"string"}
#@markdown `rotate_3d` *must* be a `[w,x,y,z]` rotation (unit) quaternion. Use `rotate_3d: [1,0,0,0]` for no rotation.
#@markdown [Learn more about rotation quaternions here](https://eater.net/quaternions).
rotate_3d = "[cos(radians(1.5)), 0, -sin(radians(1.5))/sqrt(2), sin(radians(1.5))/sqrt(2)]"#@param{type:"string"}
#@markdown `..._2d` is only used in 2D mode.
rotate_2d = "5" #@param{type:"string"}
zoom_x_2d = "0" #@param{type:"string"}
zoom_y_2d = "0" #@param{type:"string"}
#@markdown 3D camera (only used in 3D mode):
lock_camera = True#@param{type:"boolean"}
field_of_view = 60#@param{type:"number"}
near_plane = 1#@param{type:"number"}
far_plane = 10000#@param{type:"number"}
#@markdown ---
#@markdown ###Output:
file_namespace = "default"#@param{type:"string"}
if file_namespace == '':
file_namespace = 'out'
allow_overwrite = False#@param{type:"boolean"}
base_name = file_namespace
if not allow_overwrite and path_exists(f'images_out/{file_namespace}'):
_, i = get_last_file(f'images_out/{file_namespace}',
f'^(?P<pre>{re.escape(file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
if i == 0:
print(f"WARNING: file_namespace {file_namespace} already has images from run 0")
elif i is not None:
print(f"WARNING: file_namespace {file_namespace} already has images from runs 0 through {i}")
elif glob.glob(f'images_out/{file_namespace}/{base_name}_*.png'):
print(f"WARNING: file_namespace {file_namespace} has images which will be overwritten")
try:
del i
del _
except NameError:
pass
del base_name
display_every = steps_per_frame #@param{type:"raw"}
clear_every = 0 #@param{type:"raw"}
display_scale = 1#@param{type:"number"}
save_every = steps_per_frame #@param{type:"raw"}
backups = 2**(flow_long_term_samples+1)+1#this is used for video transfer, so don't lower it if that's what you're doing#@param {type:"raw"}
show_graphs = False #@param{type:"boolean"}
approximate_vram_usage = False#@param{type:"boolean"}
#@markdown ---
#@markdown ###Model:
#@markdown Quality settings from Dribnet's CLIPIT (https://github.com/dribnet/clipit).
#@markdown Selecting too many will use up all your VRAM and slow down the model.
#@markdown I usually use ViTB32, ViTB16, and RN50 if I get a A100, otherwise I just use ViT32B.
#@markdown quality | CLIP models
#@markdown --- | --
#@markdown draft | ViTB32
#@markdown normal | ViTB32, ViTB16
#@markdown high | ViTB32, ViTB16, RN50
#@markdown best | ViTB32, ViTB16, RN50x4
ViTB32 = True #@param{type:"boolean"}
ViTB16 = False #@param{type:"boolean"}
RN50 = False #@param{type:"boolean"}
RN50x4 = False #@param{type:"boolean"}
ViTL14 = False #@param{type:"boolean"}
RN101 = False #@param{type:"boolean"}
RN50x16 = False #@param{type:"boolean"}
RN50x64 = False #@param{type:"boolean"}
#@markdown the default learning rate is `0.1` for all the VQGAN models
#@markdown except openimages, which is `0.15`. For the palette modes the
#@markdown default is `0.02`.
learning_rate = model_default#@param{type:"raw"}
reset_lr_each_frame = True#@param{type:"boolean"}
seed = random_seed #@param{type:"raw"}
#@markdown **Cutouts**:
#@markdown [Cutouts are how CLIP sees the image.](https://twitter.com/remi_durant/status/1460607677801897990)
cutouts = 40#@param{type:"number"}
cut_pow = 2#@param {type:"number"}
cutout_border = .25#@param {type:"number"}
gradient_accumulation_steps = 1 #@param {type:"number"}
#@markdown NOTE: prompt masks (`promt:weight_[mask.png]`) will not work right on '`wrap`' or '`mirror`' mode.
border_mode = "clamp" #@param ["clamp","mirror","wrap","black","smear"]
models_parent_dir = '.'
if seed is None:
seed = random.randint(-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff)
locals_after = locals().copy()
for k in locals_before.keys():
del locals_after[k]
del locals_after['locals_before']
return locals_after
params = Bunch(define_parameters())
print("SETTINGS:")
print(json.dumps(params))
#@title 2.2 Load settings (optional)
#@markdown copy the `SETTINGS:` output from the **Parameters** cell (tripple click to select the whole
#@markdown line from `{'scenes'...` to `}`) and paste them in a note to save them for later.
#@markdown Paste them here in the future to load those settings again. Running this cell with blank settings won't do anything.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import *
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import json, random
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
settings = ""#@param{type:"string"}
#@markdown Check `random_seed` to overwrite the seed from the settings with a random one for some variation.
random_seed = False #@param{type:"boolean"}
if settings != '':
params = load_settings(settings, random_seed)
from pytti.workhorse import TB_LOGDIR
%load_ext tensorboard
%tensorboard --logdir $TB_LOGDIR
###Output
_____no_output_____
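###Markdown
The `SETTINGS:` line printed in step 2.1 is the only record of a run's parameters, so rather than relying on copy-paste alone you could also write it to a file (it will land in your mounted drive folder if you mounted one in step 1.1). A small optional sketch, assuming `params` from step 2.1 is defined; the filename is just an example:
###Code
import json
with open(f"settings_{params.file_namespace}.json", "w") as f:
    json.dump(dict(params), f)
###Output
_____no_output_____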
###Markdown
It is common for users to experience issues starting their first run. In particular, you may see an error saying something like "Access Denied" and showing you some URL links. This is caused by the google drive link for one of the models getting "hugged to death". You can still access the model, but google won't let you do it programmatically. Please follow these steps to get around the issue: 1. Visit either of the two URLs you see in your browser to download the file `AdaBins_nyu.pt` locally. 2. Create a new folder in colab named `pretrained` (check the left sidebar for a file browser). 3. Upload `AdaBins_nyu.pt` to the `pretrained` folder. You should be able to just drag-and-drop the file onto the folder. 4. Run the following code cell after the upload has completed to tell PyTTI where to find AdaBins. You should now be able to run image generation without issues.
###Code
%%sh
ADABINS_SRC=./pretrained/AdaBins_nyu.pt
ADABINS_DIR=~/.cache/adabins
ADABINS_TGT=$ADABINS_DIR/AdaBins_nyu.pt
if [ -f "$ADABINS_SRC" ]; then
mkdir -p $ADABINS_DIR/
ln $ADABINS_SRC $ADABINS_TGT
fi
#@title 2.3 Run it!
from pytti.workhorse import _main as render_frames
from omegaconf import OmegaConf
cfg = OmegaConf.create(dict(params))
# function wraps step 2.3 of the original p5 notebook
render_frames(cfg)
###Output
_____no_output_____
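###Markdown
If you want to tweak one or two settings without re-filling the whole form, note that `cfg` above is a regular OmegaConf config, so individual values can be overridden before calling `render_frames` again. A minimal sketch (the particular keys and values below are just examples):
###Code
# override a couple of values on the existing config, then re-render
cfg.file_namespace = "default_retry"  # write to a fresh output folder
cfg.steps_per_scene = 300             # shorter test run
render_frames(cfg)
###Output
_____no_output_____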
###Markdown
Step 3: Render video. You can download from the notebook, but it's faster to download from your drive.
###Code
#@title 3.1 Render video
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest#@param{type:"raw"}
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
tqdm.write(f'Generating video from {params.file_namespace}/{base_name}_*.png')
all_frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
all_frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
print(f'found {len(all_frames)} frames matching images_out/{params.file_namespace}/{base_name}_*.png')
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f"videos/{base_name}.mp4"], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.1 Render video (concatenate all runs)
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
all_frames = []
for i in range(run_number+1):
base_name = params.file_namespace if i == 0 else (params.file_namespace+f"({i})")
frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
all_frames.extend(frames)
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f"videos/{base_name}.mp4"], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.2 Download the last exported video
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
try:
from pytti.Notebook import get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
try:
params
except NameError:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError("ERROR: please run parameters (step 2.1).")
from google.colab import files
try:
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
filename = f'{base_name}.mp4'
except NameError:
filename, i = get_last_file(f'videos',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?\\.mp4)$')
if path_exists(f'videos/{filename}'):
files.download(f"videos/{filename}")
else:
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: video videos/{filename} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: video videos/{filename} does not exist.")
###Output
_____no_output_____
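###Markdown
As an aside: because the frames are saved with a simple incrementing suffix (`<base_name>_1.png`, `<base_name>_2.png`, ...), ffmpeg can also read the image sequence directly instead of piping every frame through PIL, which avoids holding all of the frames in memory at once. A rough sketch, using the default namespace as an example; adjust the paths to match your own run:
###Code
!ffmpeg -y -framerate 12 -i images_out/default/default_%d.png -vcodec libx264 -pix_fmt yuv420p -crf 1 -preset veryslow videos/default_direct.mp4
###Output
_____no_output_____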
###Markdown
Batch Settings. Be advised: google may penalize you for sustained colab GPU utilization, even if you are a PRO+ subscriber. Tread lightly with batch runs, you don't wanna end up in GPU jail. FYI: the batch settings feature below may not work at present. We recommend using the CLI for batch jobs; see usage instructions at https://github.com/pytti-tools/pytti-core . The code below will probably be removed in the near future. Batch Settings WARNING: If you use google colab (even with pro and pro+) GPUs for long enough, google will throttle your account. Be careful with batch runs if you don't want to get kicked.
###Code
#@title batch settings
# ngl... this probably doesn't work right now.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color, save_batch
except ModuleNotFoundError:
if drive_mounted:
raise RuntimeError('ERROR: please run setup (step 1).')
else:
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1).')
change_tqdm_color()
try:
import exrex, random, glob
except ModuleNotFoundError:
if drive_mounted:
raise RuntimeError('ERROR: please run setup (step 1).')
else:
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1).')
from numpy import arange
import itertools
def all_matches(s):
return list(exrex.generate(s))
def dict_product(dictionary):
return [dict(zip(dictionary, x)) for x in itertools.product(*dictionary.values())]
#these are used to make the defaults look pretty
model_default = None
random_seed = None
def define_parameters():
locals_before = locals().copy()
scenes = ["list","your","runs"] #@param{type:"raw"}
scene_prefix = ["all "," permutations "," are run "] #@param{type:"raw"}
scene_suffix = [" that", " makes", " 27" ] #@param{type:"raw"}
interpolation_steps = [0] #@param{type:"raw"}
steps_per_scene = [300] #@param{type:"raw"}
direct_image_prompts = [""] #@param{type:"raw"}
init_image = [""] #@param{type:"raw"}
direct_init_weight = [""] #@param{type:"raw"}
semantic_init_weight = [""] #@param{type:"raw"}
image_model = ["Limited Palette"] #@param{type:"raw"}
width = [180] #@param{type:"raw"}
height = [112] #@param{type:"raw"}
pixel_size = [4] #@param{type:"raw"}
smoothing_weight = [0.05] #@param{type:"raw"}
vqgan_model = ["sflckr"] #@param{type:"raw"}
random_initial_palette = [False] #@param{type:"raw"}
palette_size = [9] #@param{type:"raw"}
palettes = [8] #@param{type:"raw"}
gamma = [1] #@param{type:"raw"}
hdr_weight = [1.0] #@param{type:"raw"}
palette_normalization_weight = [1.0] #@param{type:"raw"}
show_palette = [False] #@param{type:"raw"}
target_palette = [""] #@param{type:"raw"}
lock_palette = [False] #@param{type:"raw"}
animation_mode = ["off"] #@param{type:"raw"}
sampling_mode = ["bicubic"] #@param{type:"raw"}
infill_mode = ["wrap"] #@param{type:"raw"}
pre_animation_steps = [100] #@param{type:"raw"}
steps_per_frame = [50] #@param{type:"raw"}
frames_per_second = [12] #@param{type:"raw"}
direct_stabilization_weight = [""] #@param{type:"raw"}
semantic_stabilization_weight = [""] #@param{type:"raw"}
depth_stabilization_weight = [""] #@param{type:"raw"}
edge_stabilization_weight = [""] #@param{type:"raw"}
flow_stabilization_weight = [""] #@param{type:"raw"}
video_path = [""] #@param{type:"raw"}
frame_stride = [1] #@param{type:"raw"}
reencode_each_frame = [True] #@param{type:"raw"}
flow_long_term_samples = [0] #@param{type:"raw"}
translate_x = ["0"] #@param{type:"raw"}
translate_y = ["0"] #@param{type:"raw"}
translate_z_3d = ["0"] #@param{type:"raw"}
rotate_3d = ["[1,0,0,0]"] #@param{type:"raw"}
rotate_2d = ["0"] #@param{type:"raw"}
zoom_x_2d = ["0"] #@param{type:"raw"}
zoom_y_2d = ["0"] #@param{type:"raw"}
lock_camera = [True] #@param{type:"raw"}
field_of_view = [60] #@param{type:"raw"}
near_plane = [1] #@param{type:"raw"}
far_plane = [10000] #@param{type:"raw"}
file_namespace = ["Basic Batch"] #@param{type:"raw"}
allow_overwrite = [False]
display_every = [50] #@param{type:"raw"}
clear_every = [0] #@param{type:"raw"}
display_scale = [1] #@param{type:"raw"}
save_every = [50] #@param{type:"raw"}
backups = [2] #@param{type:"raw"}
show_graphs = [False] #@param{type:"raw"}
approximate_vram_usage = [False] #@param{type:"raw"}
ViTB32 = [True] #@param{type:"raw"}
ViTB16 = [False] #@param{type:"raw"}
RN50 = [False] #@param{type:"raw"}
RN50x4 = [False] #@param{type:"raw"}
learning_rate = [None] #@param{type:"raw"}
reset_lr_each_frame = [True] #@param{type:"raw"}
seed = [None] #@param{type:"raw"}
cutouts = [40] #@param{type:"raw"}
cut_pow = [2] #@param{type:"raw"}
cutout_border = [0.25] #@param{type:"raw"}
border_mode = ["clamp"] #@param{type:"raw"}
locals_after = locals().copy()
for k in locals_before.keys():
del locals_after[k]
del locals_after['locals_before']
return locals_after
param_dict = define_parameters()
batch_list = dict_product(param_dict)
namespace = batch_list[0]['file_namespace']
if glob.glob(f'images_out/{namespace}/*.png'):
print(f"WARNING: images_out/{namespace} contains images. Batch indicies may not match filenames unless restoring.")
# @title Licensed under the MIT License
# Copyleft (c) 2021 Henry Rachootin
# Copyright (c) 2022 David Marx
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
###Output
_____no_output_____
###Markdown
PyTTI-Tools Colab NotebookIf you are using PyTTI-tools from a local jupyter server, you might have a better experience with the "_local" notebook: https://github.com/pytti-tools/pytti-notebook/blob/main/pyttitools-PYTTI_local.ipynbIf you are planning to use google colab with the "local runtime" option: this is still the notebook you want. A very brief history of this notebookThe tools and techniques below were pioneered in 2021 by a diverse and distributed collection of amazingly talented ML practitioners, researchers, and artists. The short version of this history is that Katherine Crowson ([@RiversHaveWings](https://twitter.com/RiversHaveWings)) published a notebook inspired by work done by [@advadnoun](https://twitter.com/advadnoun). Katherine's notebook spawned a litany of variants, each with their own twist on the technique or adding a feature to someone else's work. Henry Rachootin ([@sportsracer48](https://twitter.com/sportsracer48)) collected several of the most interesting notebooks and stuck the important bits together with bublegum and scotch tape. Thus was born PyTTI, and there was much rejoicing in sportsracer48's patreon, where it was shared in closed beta for several months. David Marx ([@DigThatData](https://twitter.com/DigThatData)) offered to help tidy up the mess, and sportsracer48 encouraged him to run wild with it. David's contributions snowballed into [PyTTI-Tools](https://github.com/pytti-tools), the engine this notebook sits on top of!If you would like to contribute, receive support, or even just suggest an improvement to the documentation, our issue tracker can be found here: https://github.com/pytti-tools/pytti-core/issues InstructionsDetailed documentation can be found here: https://pytti-tools.github.io/pytti-book/intro.html* Syntax for text prompts and scenes: https://pytti-tools.github.io/pytti-book/SceneDSL.html* Descriptions of all settings: https://pytti-tools.github.io/pytti-book/Settings.html Step 1: SetupRun the cells in this section once for each runtime, or after a factory reset.
###Code
# This cell should only be run once
drive_mounted = False
gdrive_fpath = '.'
#@title 1.1 Mount google drive (optional)
#@markdown Mounting your drive is optional but recommended. If your drive is mounted, you can even restore
#@markdown a run after google randomly kicks you out.
from pathlib import Path
mount_gdrive = False # @param{type:"boolean"}
if mount_gdrive and not drive_mounted:
from google.colab import drive
gdrive_mountpoint = '/content/drive/' #@param{type:"string"}
gdrive_subdirectory = 'MyDrive/pytti_tools' #@param{type:"string"}
gdrive_fpath = str(Path(gdrive_mountpoint) / gdrive_subdirectory)
try:
drive.mount(gdrive_mountpoint, force_remount = True)
!mkdir -p {gdrive_fpath}
%cd {gdrive_fpath}
drive_mounted = True
except OSError:
print(
"\n\n-----[PYTTI-TOOLS]-------\n\n"
"If you received a scary OSError and your drive"
" was already mounted, ignore it."
"\n\n-----[PYTTI-TOOLS]-------\n\n"
)
raise
#@title 1.2 NVIDIA-SMI (optional)
#@markdown View information about your runtime GPU.
#@markdown Google will connect you to an industrial strength GPU, which is needed to run
#@markdown this notebook. You can also disable error checking on your GPU to get some
#@markdown more VRAM, at a marginal cost to stability. You will have to restart the runtime after
#@markdown disabling it.
enable_error_checking = False#@param {type:"boolean"}
if enable_error_checking:
!nvidia-smi
else:
!nvidia-smi
!nvidia-smi -i 0 -e 0
#@title 1.3 Install everything else
#@markdown Run this cell on a fresh runtime to install the libraries and modules.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
def flush_reqs():
!rm -r pytti-core
# `logger` is used below but is not imported anywhere in this cell; assume loguru provides it,
# falling back to the standard library if loguru is not installed yet on this runtime.
try:
    from loguru import logger
except ImportError:
    import logging
    logger = logging.getLogger("pytti-notebook")
def install_everything():
if path_exists('./pytti-core'):
try:
flush_reqs()
except Exception as ex:
logger.warning(
str(ex)
)
logger.warning(
"A `pytti` folder already exists and could not be deleted."
"If you encounter problems, try deleting that folder and trying again."
"Please report this and any other issues here: "
"https://github.com/pytti-tools/pytti-notebook/issues/new",
exc_info=True)
!git clone --recurse-submodules -j8 https://github.com/pytti-tools/pytti-core
!pip install kornia pytorch-lightning transformers
!pip install jupyter loguru einops PyGLM ftfy regex tqdm hydra-core exrex
!pip install seaborn adjustText bunch matplotlib-label-lines
!pip install --upgrade gdown
!pip install ./pytti-core/vendor/AdaBins
!pip install ./pytti-core/vendor/CLIP
!pip install ./pytti-core/vendor/GMA
!pip install ./pytti-core/vendor/taming-transformers
!pip install ./pytti-core
!mkdir -p images_out
!mkdir -p videos
from pytti.Notebook import change_tqdm_color
change_tqdm_color()
try:
from adjustText import adjust_text
import pytti, torch
everything_installed = True
except ModuleNotFoundError:
everything_installed = False
force_install = False #@param{type:"boolean"}
if not everything_installed or force_install:
install_everything()
elif everything_installed:
from pytti.Notebook import change_tqdm_color
change_tqdm_color()
###Output
_____no_output_____
###Markdown
Step 2: Configure ExperimentEdit the parameters, or load saved parameters, then run the model.* https://pytti-tools.github.io/pytti-book/SceneDSL.html* https://pytti-tools.github.io/pytti-book/Settings.html
###Code
#@title #2.1 Parameters:
#@markdown ---
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {{gdrive_fpath}}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color, get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import glob, json, random, re, math
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
#these are used to make the defaults look pretty
model_default = None
random_seed = None
all = math.inf
derive_from_init_aspect_ratio = -1
def define_parameters():
locals_before = locals().copy()
#@markdown ###Prompts:
scenes = "deep space habitation ring made of glass | galactic nebula | wow! space is full of fractal creatures darting around everywhere like fireflies"#@param{type:"string"}
scene_prefix = "astrophotography #pixelart | image credit nasa | space full of cybernetic neon:3_galactic nebula | isometric pixelart by Sachin Teng | "#@param{type:"string"}
scene_suffix = "| satellite image:-1:-.95 | text:-1:-.95 | anime:-1:-.95 | watermark:-1:-.95 | backyard telescope:-1:-.95 | map:-1:-.95"#@param{type:"string"}
interpolation_steps = 0#@param{type:"number"}
steps_per_scene = 60100#@param{type:"raw"}
#@markdown ---
#@markdown ###Image Prompts:
direct_image_prompts = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Initial image:
init_image = ""#@param{type:"string"}
direct_init_weight = ""#@param{type:"string"}
semantic_init_weight = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Image:
#@markdown Use `image_model` to select how the model will encode the image
image_model = "Limited Palette" #@param ["VQGAN", "Limited Palette", "Unlimited Palette"]
#@markdown image_model | description | strengths | weaknesses
#@markdown --- | -- | -- | --
#@markdown VQGAN | classic VQGAN image | smooth images | limited datasets, slow, VRAM intensive
#@markdown Limited Palette | pytti differentiable palette | fast, VRAM scales with `palettes` | pixel images
#@markdown Unlimited Palette | simple RGB optimization | fast, VRAM efficient | pixel images
#@markdown The output image resolution will be `width` $\times$ `pixel_size` by `height` $\times$ `pixel_size` pixels.
#@markdown The easiest way to run out of VRAM is to select `image_model` VQGAN without reducing
#@markdown `pixel_size` to $1$.
#@markdown For `animation_mode: 3D` the minimum resolution is about 450 by 400 pixels.
width = 180#@param {type:"raw"}
height = 112#@param {type:"raw"}
pixel_size = 4#@param{type:"number"}
smoothing_weight = 0.02#@param{type:"number"}
#@markdown `VQGAN` specific settings:
vqgan_model = "sflckr" #@param ["imagenet", "coco", "wikiart", "sflckr", "openimages"]
#@markdown `Limited Palette` specific settings:
random_initial_palette = False#@param{type:"boolean"}
palette_size = 6#@param{type:"number"}
palettes = 9#@param{type:"number"}
gamma = 1#@param{type:"number"}
hdr_weight = 0.01#@param{type:"number"}
palette_normalization_weight = 0.2#@param{type:"number"}
show_palette = False #@param{type:"boolean"}
target_palette = ""#@param{type:"string"}
lock_palette = False #@param{type:"boolean"}
#@markdown ---
#@markdown ###Animation:
animation_mode = "3D" #@param ["off","2D", "3D", "Video Source"]
sampling_mode = "bicubic" #@param ["bilinear","nearest","bicubic"]
infill_mode = "wrap" #@param ["mirror","wrap","black","smear"]
pre_animation_steps = 100#@param{type:"number"}
steps_per_frame = 50#@param{type:"number"}
frames_per_second = 12#@param{type:"number"}
#@markdown ---
#@markdown ###Stabilization Weights:
direct_stabilization_weight = ""#@param{type:"string"}
semantic_stabilization_weight = ""#@param{type:"string"}
depth_stabilization_weight = ""#@param{type:"string"}
edge_stabilization_weight = ""#@param{type:"string"}
#@markdown `flow_stabilization_weight` is used for `animation_mode: 3D` and `Video Source`
flow_stabilization_weight = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Video Tracking:
#@markdown Only for `animation_mode: Video Source`.
video_path = ""#@param{type:"string"}
frame_stride = 1#@param{type:"number"}
reencode_each_frame = True #@param{type:"boolean"}
flow_long_term_samples = 1#@param{type:"number"}
#@markdown ---
#@markdown ###Image Motion:
translate_x = "-1700*sin(radians(1.5))" #@param{type:"string"}
translate_y = "0" #@param{type:"string"}
#@markdown `..._3d` is only used in 3D mode.
translate_z_3d = "(50+10*t)*sin(t/10*pi)**2" #@param{type:"string"}
#@markdown `rotate_3d` *must* be a `[w,x,y,z]` rotation (unit) quaternion. Use `rotate_3d: [1,0,0,0]` for no rotation.
#@markdown [Learn more about rotation quaternions here](https://eater.net/quaternions).
rotate_3d = "[cos(radians(1.5)), 0, -sin(radians(1.5))/sqrt(2), sin(radians(1.5))/sqrt(2)]"#@param{type:"string"}
#@markdown `..._2d` is only used in 2D mode.
rotate_2d = "5" #@param{type:"string"}
zoom_x_2d = "0" #@param{type:"string"}
zoom_y_2d = "0" #@param{type:"string"}
#@markdown 3D camera (only used in 3D mode):
lock_camera = True#@param{type:"boolean"}
field_of_view = 60#@param{type:"number"}
near_plane = 1#@param{type:"number"}
far_plane = 10000#@param{type:"number"}
#@markdown ---
#@markdown ###Output:
file_namespace = "default"#@param{type:"string"}
if file_namespace == '':
file_namespace = 'out'
allow_overwrite = False#@param{type:"boolean"}
base_name = file_namespace
if not allow_overwrite and path_exists(f'images_out/{file_namespace}'):
_, i = get_last_file(f'images_out/{file_namespace}',
f'^(?P<pre>{re.escape(file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
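# the pattern matches both `{file_namespace}_1.png` (run 0) and `{file_namespace}(<n>)_1.png` (run n)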
if i == 0:
print(f"WARNING: file_namespace {file_namespace} already has images from run 0")
elif i is not None:
print(f"WARNING: file_namespace {file_namespace} already has images from runs 0 through {i}")
elif glob.glob(f'images_out/{file_namespace}/{base_name}_*.png'):
print(f"WARNING: file_namespace {file_namespace} has images which will be overwritten")
try:
del i
del _
except NameError:
pass
del base_name
display_every = steps_per_frame #@param{type:"raw"}
clear_every = 0 #@param{type:"raw"}
display_scale = 1#@param{type:"number"}
save_every = steps_per_frame #@param{type:"raw"}
backups = 2**(flow_long_term_samples+1)+1#this is used for video transfer, so don't lower it if that's what you're doing#@param {type:"raw"}
show_graphs = False #@param{type:"boolean"}
approximate_vram_usage = False#@param{type:"boolean"}
#@markdown ---
#@markdown ###Model:
#@markdown Quality settings from Dribnet's CLIPIT (https://github.com/dribnet/clipit).
#@markdown Selecting too many will use up all your VRAM and slow down the model.
#@markdown I usually use ViTB32, ViTB16, and RN50 if I get an A100, otherwise I just use ViTB32.
#@markdown quality | CLIP models
#@markdown --- | --
#@markdown draft | ViTB32
#@markdown normal | ViTB32, ViTB16
#@markdown high | ViTB32, ViTB16, RN50
#@markdown best | ViTB32, ViTB16, RN50x4
ViTB32 = True #@param{type:"boolean"}
ViTB16 = False #@param{type:"boolean"}
RN50 = False #@param{type:"boolean"}
RN50x4 = False #@param{type:"boolean"}
ViTL14 = False #@param{type:"boolean"}
RN101 = False #@param{type:"boolean"}
RN50x16 = False #@param{type:"boolean"}
RN50x64 = False #@param{type:"boolean"}
#@markdown the default learning rate is `0.1` for all the VQGAN models
#@markdown except openimages, which is `0.15`. For the palette modes the
#@markdown default is `0.02`.
learning_rate = model_default#@param{type:"raw"}
reset_lr_each_frame = True#@param{type:"boolean"}
seed = random_seed #@param{type:"raw"}
#@markdown **Cutouts**:
#@markdown [Cutouts are how CLIP sees the image.](https://twitter.com/remi_durant/status/1460607677801897990)
cutouts = 40#@param{type:"number"}
cut_pow = 2#@param {type:"number"}
cutout_border = .25#@param {type:"number"}
gradient_accumulation_steps = 1 #@param {type:"number"}
#@markdown NOTE: prompt masks (`prompt:weight_[mask.png]`) will not work right on '`wrap`' or '`mirror`' mode.
border_mode = "clamp" #@param ["clamp","mirror","wrap","black","smear"]
models_parent_dir = '.'
if seed is None:
seed = random.randint(-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff)
locals_after = locals().copy()
for k in locals_before.keys():
del locals_after[k]
del locals_after['locals_before']
return locals_after
params = Bunch(define_parameters())
print("SETTINGS:")
print(json.dumps(params))
#@title 2.2 Load settings (optional)
#@markdown copy the `SETTINGS:` output from the **Parameters** cell (triple click to select the whole
#@markdown line from `{'scenes'...` to `}`) and paste them in a note to save them for later.
#@markdown Paste them here in the future to load those settings again. Running this cell with blank settings won't do anything.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import *
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import json, random
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
settings = ""#@param{type:"string"}
#@markdown Check `random_seed` to overwrite the seed from the settings with a random one for some variation.
random_seed = False #@param{type:"boolean"}
if settings != '':
params = load_settings(settings, random_seed)
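# pytti writes run logs under TB_LOGDIR, so the TensorBoard instance below can be used to monitor progress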
from pytti.workhorse import TB_LOGDIR
%load_ext tensorboard
%tensorboard --logdir $TB_LOGDIR
###Output
_____no_output_____
###Markdown
It is common for users to experience issues starting their first run. In particular, you may see an error saying something like "Access Denied" and showing you some URL links. This is caused by the google drive link for one of the models getting "hugged to death". You can still access the model, but google won't let you do it programmatically. Please follow these steps to get around the issue:1. Visit either of the two URLs you see in your browser to download the file `AdaBins_nyu.pt` locally2. Create a new folder in colab named `pretrained` (check the left sidebar for a file browser)3. Upload `AdaBins_nyu.pt` to the `pretrained` folder. You should be able to just drag-and-drop the file onto the folder.4. Run the following code cell after the upload has completed to tell PyTTI where to find AdaBinsYou should now be able to run image generation without issues.
###Code
%%sh
ADABINS_SRC=./pretrained/AdaBins_nyu.pt
ADABINS_DIR=~/.cache/adabins
ADABINS_TGT=$ADABINS_DIR/AdaBins_nyu.pt
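# if the checkpoint was uploaded to ./pretrained, hard-link it into the cache directory pytti checks (~/.cache/adabins)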
if [ -f "$ADABINS_SRC" ]; then
mkdir -p $ADABINS_DIR/
ln $ADABINS_SRC $ADABINS_TGT
fi
#@title 2.3 Run it!
from pytti.workhorse import _main as render_frames
from omegaconf import OmegaConf
cfg = OmegaConf.create(dict(params))
# function wraps step 2.3 of the original p5 notebook
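# rendered frames are saved under images_out/<file_namespace>/, as configured in step 2.1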
render_frames(cfg)
###Output
_____no_output_____
###Markdown
Step 3: Render videoYou can download from the notebook, but it's faster to download from your drive.
###Code
#@title 3.1 Render video
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest#@param{type:"raw"}
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
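# run 0 saves frames as {file_namespace}_<n>.png; run k saves them as {file_namespace}(k)_<n>.png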
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
tqdm.write(f'Generating video from {params.file_namespace}/{base_name}_*.png')
all_frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
all_frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
print(f'found {len(all_frames)} frames matching images_out/{params.file_namespace}/{base_name}_*.png')
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
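# pipe the PNG frames into ffmpeg over stdin (image2pipe) and encode an H.264 mp4 at the requested fps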
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f"videos/{base_name}.mp4"], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.1 Render video (concatenate all runs)
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
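# gather frames from every run in order: run 0 has no suffix in its filenames, run k uses the (k) suffix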
all_frames = []
for i in range(run_number+1):
base_name = params.file_namespace if i == 0 else (params.file_namespace+f"({i})")
frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
all_frames.extend(frames)
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
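# pipe the concatenated PNG frames into ffmpeg over stdin (image2pipe) and encode an H.264 mp4 at the requested fps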
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f"videos/{base_name}.mp4"], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.2 Download the last exported video
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
try:
from pytti.Notebook import get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
try:
params
except NameError:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError("ERROR: please run parameters (step 2.1).")
from google.colab import files
try:
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
filename = f'{base_name}.mp4'
except NameError:
filename, i = get_last_file(f'videos',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?\\.mp4)$')
if path_exists(f'videos/{filename}'):
files.download(f"videos/{filename}")
else:
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: video videos/{filename} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: video videos/{filename} does not exist.")
###Output
_____no_output_____
###Markdown
Batch SettingsBe Advised: google may penalize you for sustained colab GPU utilization, even if you are a PRO+ subscriber. Tread lightly with batch runs, you don't wanna end up in GPU jail. FYI: the batch setting feature below may not work at present. We recommend using the CLI for batch jobs; see usage instructions at https://github.com/pytti-tools/pytti-core . The code below will probably be removed in the near future. Batch SettingsWARNING: If you use google colab (even with pro and pro+) GPUs for long enough, google will throttle your account. Be careful with batch runs if you don't want to get kicked.
###Code
#@title batch settings
# ngl... this probably doesn't work right now.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color, save_batch
except ModuleNotFoundError:
if drive_mounted:
raise RuntimeError('ERROR: please run setup (step 1).')
else:
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1).')
change_tqdm_color()
try:
import exrex, random, glob
except ModuleNotFoundError:
if drive_mounted:
raise RuntimeError('ERROR: please run setup (step 1).')
else:
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1).')
from numpy import arange
import itertools
def all_matches(s):
return list(exrex.generate(s))
def dict_product(dictionary):
return [dict(zip(dictionary, x)) for x in itertools.product(*dictionary.values())]
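# e.g. all_matches("a(b|c)") -> ['ab', 'ac']: exrex expands a regex into every string it matches.
# dict_product({'x': [1, 2], 'y': ['a']}) -> [{'x': 1, 'y': 'a'}, {'x': 2, 'y': 'a'}]: one settings
# dict per element of the Cartesian product of the parameter lists below, which is why listing
# 3 values for each of 3 fields produces 27 runs.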
#these are used to make the defaults look pretty
model_default = None
random_seed = None
def define_parameters():
locals_before = locals().copy()
scenes = ["list","your","runs"] #@param{type:"raw"}
scene_prefix = ["all "," permutations "," are run "] #@param{type:"raw"}
scene_suffix = [" that", " makes", " 27" ] #@param{type:"raw"}
interpolation_steps = [0] #@param{type:"raw"}
steps_per_scene = [300] #@param{type:"raw"}
direct_image_prompts = [""] #@param{type:"raw"}
init_image = [""] #@param{type:"raw"}
direct_init_weight = [""] #@param{type:"raw"}
semantic_init_weight = [""] #@param{type:"raw"}
image_model = ["Limited Palette"] #@param{type:"raw"}
width = [180] #@param{type:"raw"}
height = [112] #@param{type:"raw"}
pixel_size = [4] #@param{type:"raw"}
smoothing_weight = [0.05] #@param{type:"raw"}
vqgan_model = ["sflckr"] #@param{type:"raw"}
random_initial_palette = [False] #@param{type:"raw"}
palette_size = [9] #@param{type:"raw"}
palettes = [8] #@param{type:"raw"}
gamma = [1] #@param{type:"raw"}
hdr_weight = [1.0] #@param{type:"raw"}
palette_normalization_weight = [1.0] #@param{type:"raw"}
show_palette = [False] #@param{type:"raw"}
target_palette = [""] #@param{type:"raw"}
lock_palette = [False] #@param{type:"raw"}
animation_mode = ["off"] #@param{type:"raw"}
sampling_mode = ["bicubic"] #@param{type:"raw"}
infill_mode = ["wrap"] #@param{type:"raw"}
pre_animation_steps = [100] #@param{type:"raw"}
steps_per_frame = [50] #@param{type:"raw"}
frames_per_second = [12] #@param{type:"raw"}
direct_stabilization_weight = [""] #@param{type:"raw"}
semantic_stabilization_weight = [""] #@param{type:"raw"}
depth_stabilization_weight = [""] #@param{type:"raw"}
edge_stabilization_weight = [""] #@param{type:"raw"}
flow_stabilization_weight = [""] #@param{type:"raw"}
video_path = [""] #@param{type:"raw"}
frame_stride = [1] #@param{type:"raw"}
reencode_each_frame = [True] #@param{type:"raw"}
flow_long_term_samples = [0] #@param{type:"raw"}
translate_x = ["0"] #@param{type:"raw"}
translate_y = ["0"] #@param{type:"raw"}
translate_z_3d = ["0"] #@param{type:"raw"}
rotate_3d = ["[1,0,0,0]"] #@param{type:"raw"}
rotate_2d = ["0"] #@param{type:"raw"}
zoom_x_2d = ["0"] #@param{type:"raw"}
zoom_y_2d = ["0"] #@param{type:"raw"}
lock_camera = [True] #@param{type:"raw"}
field_of_view = [60] #@param{type:"raw"}
near_plane = [1] #@param{type:"raw"}
far_plane = [10000] #@param{type:"raw"}
file_namespace = ["Basic Batch"] #@param{type:"raw"}
allow_overwrite = [False]
display_every = [50] #@param{type:"raw"}
clear_every = [0] #@param{type:"raw"}
display_scale = [1] #@param{type:"raw"}
save_every = [50] #@param{type:"raw"}
backups = [2] #@param{type:"raw"}
show_graphs = [False] #@param{type:"raw"}
approximate_vram_usage = [False] #@param{type:"raw"}
ViTB32 = [True] #@param{type:"raw"}
ViTB16 = [False] #@param{type:"raw"}
RN50 = [False] #@param{type:"raw"}
RN50x4 = [False] #@param{type:"raw"}
learning_rate = [None] #@param{type:"raw"}
reset_lr_each_frame = [True] #@param{type:"raw"}
seed = [None] #@param{type:"raw"}
cutouts = [40] #@param{type:"raw"}
cut_pow = [2] #@param{type:"raw"}
cutout_border = [0.25] #@param{type:"raw"}
border_mode = ["clamp"] #@param{type:"raw"}
locals_after = locals().copy()
for k in locals_before.keys():
del locals_after[k]
del locals_after['locals_before']
return locals_after
param_dict = define_parameters()
batch_list = dict_product(param_dict)
namespace = batch_list[0]['file_namespace']
if glob.glob(f'images_out/{namespace}/*.png'):
print(f"WARNING: images_out/{namespace} contains images. Batch indicies may not match filenames unless restoring.")
# @title Licensed under the MIT License
# Copyleft (c) 2021 Henry Rachootin
# Copyright (c) 2022 David Marx
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
###Output
_____no_output_____
###Markdown
PYTTI-TOOLS! A brief history of this notebookThe tools and techniques below were pioneered in 2021 by a diverse and distributed collection of amazingly talented ML practitioners, researchers, and artists. The short version of this history is that Katherine Crowson ([@RiversHaveWings](https://twitter.com/RiversHaveWings)) published a notebook inspired by work done by [@advadnoun](https://twitter.com/advadnoun). Katherine's notebook spawned a litany of variants, each with their own twist on the technique or adding a feature to someone else's work. Henry Rachootin ([@sportsracer48](https://twitter.com/sportsracer48)) collected several of the most interesting notebooks and stuck the important bits together with bublegum and scotch tape. Thus was born PYTTI, and there was much rejoicing in sportsracer48's patreon, where it was shared in closed beta for several months so sportsracer48 wouldn't get buried under tech support requests (or so he hoped).PYTTI rapidly gained a reputation as one of the most powerful tools available for generating CLIP-guided images. In late November, @sportsracer48 released the last version in his closed beta: the "pytti 5 beta" notebook. David Marx ([@DigThatData](https://twitter.com/DigThatData)) offered to help tidy up the mess a few weeks later, and sportsracer48 encouraged him to run wild with it. Henry didn't realize he'd been speaking with someone who had recently quit their job and had a lot of time on their hands, and David's contributions snowballed into [PYTTI-Tools](https://github.com/pytti-tools)! How is PYTTI-Tools different from PYTTI 5 Beta?Right now, not very. The main user-visible changes are:* Local use is now a first-class citizen* PyTTI is installable and can be run as a CLI tool* Using PyTTI on the command line gives you magic powers* PyTTI supports tensorboard, meaning it also integrates with tools like MLFlow and WandB* Bug fixes and slightly saner code Call to action!My hope is that rather than continuing to fork off messy notebooks with minor changes between them, pytti-tools will become a central hub for organizing and sharing related techniques for this kind of generative art in a way that will enable methods to be shared and combined more fluidly than the 2021 paradigm of doing everything in colab permitted. If you're interested in contributing (even if you aren't a coder and just have an idea for something to add to the documentation), please visit our issue tracker: https://github.com/pytti-tools/pytti-core/issuesPlease help me untangle this thing before it swallows me whole. Thanks!`--The Management` Instructions `scenes:` Descriptions of scenes you want generated, separated by `||`. Each scene can contain multiple prompts, separated by `|`.*Example:* `Winter sunrise | icy landscape || Winter day | snowy skyline || Winter sunset | chilly air || Winter night | clear sky` would go through several winter scenes.**Advanced:** weight prompts with `description:weight`. Higher `weight` values will be prioritized by the optimizer, and negative `weight` values will remove the description from the image. The default weight is $1$. Weights can also be functions of $t$ to change over the course of an animation.*Example scene:* `blue sky:10|martian landscape|red sky:-1` would try to turn the martian sky blue.**Advanced:** stop prompts once the image matches them sufficiently with `description:weight:stop`. `stop` should be between $0$ and $1$ for positive prompts, or between $-1$ and $0$ for negative prompts. 
Lower `stop` values will have more effect on the image (remember that $-1<-0.5<0$). A prompt with a negative `weight` will often go haywire without a stop. Stops can also be functions of $t$ to change over the course of an animation.*Example scene:* `Feathered dinosaurs|birds:1:0.87|scales:-1:-.9|text:-1:-.9` Would try to make feathered dinosaurs, lightly like birds, without scales or text, but without making 'anti-scales' or 'anti-text.'**NEW:****Advanced:** Use `description:weight_mask description` with a text prompt as `mask`. The prompt will only be applied to areas of the image that match `mask description` according to CLIP.*Example scene:* `Khaleesi Daenerys Targaryen | mother of dragons | dragon:3_baby` would only apply the weight `dragon` to parts of the image that match `baby`, thus turning the babies that `mother` tends to make into dragons (hopefully).**Advanced:** Use `description:weight_[mask]` with a URL or path to an image, or a path to a .mp4 video to use as a `mask`. The prompt will only be applied to the masked (white) areas of the mask image. Use `description:weight_[-mask]` to apply the prompt to the black areas instead.*Example scene:* `sunlight:3_[mask.mp4]|midnight:3_[-mask.mp4]` Would apply `sunlight` in the white areas of `mask.mp4`, and `midnight` in the black areas.**Legacy:** Directional weights will still work as before, but they aren't as good as masks.**Advanced:** Use `[path or url]` as a prompt to add a semantic image prompt. This will be read by CLIP and understood as a near perfect text description of the image.*Example scene:* `[artist signature.png]:-1:-.95|[https://i.redd.it/ewpeykozy7e71.png]:3|fractal clouds|hole in the sky`---`scene_prefix:` text prepended to the beginning of each scene.*Example:* `Trending on Arstation|``scene_suffix:` text appended to the end of each scene.*Example:* ` by James Gurney``interpolation_steps:` number of steps to spend smoothly transitioning from the last scene at the start of each scene. $200$ is a good default. Set to $0$ to disable.`steps_per_scene:` total number of steps to spend rendering each scene. Should be at least `interpolation_steps`. This will indirectly control the total length of an animation.---**NEW**: `direct_image_prompts:` paths or urls of images that you want your image to look like in a literal sense, along with `weight_mask` and `stop` values, separated by `|`.Apply masks to direct image prompts with `path or url of image:weight_path or url of mask` For video masks it must be a path to an mp4 file.**Legacy** latent image prompts are no more. They are now rolled into direct image prompts.---`init_image:` path or url of start image. Works well for creating a central focus.`direct_init_weight:` Defaults to $0$. Use the initial image as a direct image prompt. Equivalent to adding `init_image:direct_init_weight` as a `direct_image_prompt`. Supports weights, masks, and stops.`semantic_init_weight:` Defaults to $0$. Defaults to $0$. Use the initial image as a semantic image prompt. Equivalent to adding `[init_image]:direct_init_weight` as a prompt to each scene in `scenes`. Supports weights, masks, and stops. **IMPORTANT** since this is a semantic prompt, you still need to put the mask in `[` `]` to denote it as a path or url, otherwise it will be read as text instead of a file.---`width`, `height:` image size. Set one of these $-1$ to derive it from the aspect ratio of the init image.`pixel_size:` integer image scale factor. Makes the image bigger. 
Set to $1$ for VQGAN or face VRAM issues.`smoothing_weight:` makes the image smoother. Defaults to $0$ (no smoothing). Can also be negative for that deep fried look.`image_model:` select how your image will be represented.`vqgan_model:` select your VQGAN version (only for `image_model: VQGAN`)`random_initial_palette:` if checked, palettes will start out with random colors. Otherwise they will start out as grayscale. (only for `image_model: Limited Palette`)`palette_size:` number of colors in each palette. (only for `image_model: Limited Palette`)`palettes:` total number of palettes. The image will have `palette_size*palettes` colors total. (only for `image_model: Limited Palette`)`gamma:` relative gamma value. Higher values make the image darker and higher contrast, lower values make the image lighter and lower contrast. (only for `image_model: Limited Palette`). $1$ is a good default.`hdr_weight:` how strongly the optimizer will maintain the `gamma`. Set to $0$ to disable. (only for `image_model: Limited Palette`)`palette_normalization_weight:` how strongly the optimizer will maintain the palettes' presence in the image. Prevents the image from losing palettes. (only for `image_model: Limited Palette`)`show_palette:` check this box to see the palette each time the image is displayed. (only for `image_model: Limited Palette`)`target_pallete:` path or url of an image which the model will use to make the palette it uses.`lock_pallete:` force the model to use the initial palette (most useful from restore, but will force a grayscale image or a wonky palette otherwise).---`animation_mode:` select animation mode or disable animation.`sampling_mode:` how pixels are sampled during animation. `nearest` will keep the image sharp, but may look bad. `bilinear` will smooth the image out, and `bicubic` is untested :)`infill_mode:` select how new pixels should be filled if they come in from the edge.* mirror: reflect image over boundary* wrap: pull pixels from opposite side* black: fill with black * smear: sample closest pixel in image`pre_animation_steps:` number of steps to run before animation starts, to begin with a stable image. $250$ is a good default.`steps_per_frame:` number of steps between each image move. $50$ is a good default.`frames_per_second:` number of frames to render each second. Controls how $t$ is scaled.`direct_stabilization_weight: ` keeps the current frame as a direct image prompt. For `Video Source` this will use the current frame of the video as a direct image prompt. For `2D` and `3D` this will use the shifted version of the previous frame. Also supports masks: `weight_mask.mp4`.`semantic_stabilization_weight: ` keeps the current frame as a semantic image prompt. For `Video Source` this will use the current frame of the video as a direct image prompt. For `2D` and `3D` this will use the shifted version of the previous frame. Also supports masks: `weight_[mask.mp4]` or `weight_mask phrase`.`depth_stabilization_weight: ` keeps the depth model output somewhat consistent at a *VERY* steep performance cost. For `Video Source` this will use the current frame of the video as a semantic image prompt. For `2D` and `3D` this will use the shifted version of the previous frame. Also supports masks: `weight_mask.mp4`.`edge_stabilization_weight: ` keeps the images contours somewhat consistent at very little performance cost. For `Video Source` this will use the current frame of the video as a direct image prompt with a sobel filter. 
For `2D` and `3D` this will use the shifted version of the previous frame. Also supports masks: `weight_mask.mp4`.`flow_stabilization_weight: ` used for `animation_mode: 3D` and `Video Source` to prevent flickering. Comes with a slight performance cost for `Video Source`, and a great one for `3D`, due to implementation differences. Also supports masks: `weight_mask.mp4`. For video source, the mask should select the part of the frame you want to move, and the rest will be treated as a still background.---`video_path: ` path to mp4 file for `Video Source``frame_stride` advance this many frames in the video for each output frame. This is surprisingly useful. Set to $1$ to render each frame. Video masks will also step at this rate.`reencode_each_frame: ` check this box to use each video frame as an `init_image` instead of warping each output frame into the init for the next. Cuts will still be detected and trigger a reencode.`flow_long_term_samples: ` Sample multiple frames into the past for consistent interpolation even with disocclusion, as described by [Manuel Ruder, Alexey Dosovitskiy, and Thomas Brox (2016)](https://arxiv.org/abs/1604.08610). Each sample is twice as far back in the past as the last, so the earliest sampled frame is $2^{\text{long_term_flow_samples}}$ frames in the past. Set to $0$ to disable.---`translate_x:` horizontal image motion as a function of time $t$ in seconds.`translate_y:` vertical image motion as a function of time $t$ in seconds.`translate_z_3d:` forward image motion as a function of time $t$ in seconds. (only for `animation_mode:3D`)`rotate_3d:` image rotation as a quaternion $\left[r,x,y,z\right]$ as a function of time $t$ in seconds. (only for `animation_mode:3D`)`rotate_2d:` image rotation in degrees as a function of time $t$ in seconds. (only for `animation_mode:2D`)`zoom_x_2d:` horizontal image zoom as a function of time $t$ in seconds. (only for `animation_mode:2D`)`zoom_y_2d:` vertical image zoom as a function of time $t$ in seconds. (only for `animation_mode:2D`)`lock_camera:` check this box to prevent all scrolling or drifting. Makes for more stable 3D rotations. (only for `animation_mode:3D`)`field_of_view:` vertical field of view in degrees. (only for `animation_mode:3D`)`near_plane:` closest depth distance in pixels. (only for `animation_mode:3D`)`far_plane:` farthest depth distance in pixels. (only for `animation_mode:3D`)---`file_namespace:` output directory name.`allow_overwrite:` check to overwrite existing files in `file_namespace`.`display_every:` how many steps between each time the image is displayed in the notebook.`clear_every:` how many steps between each time notebook console is cleared.`display_scale:` image display scale in notebook. $1$ will show the image at full size. Does not affect saved images.`save_every:` how many steps between each time the image is saved. Set to `steps_per_frame` for consistent animation.`backups:` number of backups to keep (only the oldest backups are deleted). Large images make very large backups, so be warned. Set to `all` to save all backups. These are used for the `flow_long_term_samples` so be sure that this is at least $2^{\text{flow_long_term_samples}}+1$ for `Video Source` mode.`show_graphs:` check this to see graphs of the loss values each time the image is displayed. Disable this for local runtimes.`approximate_vram_usage:` currently broken. Don't believe its lies.---`ViTB32, ViTB16, RN50, RN50x4:` select your CLIP models. 
These take a lot of VRAM.`learning_rate:` how quickly the image changes.`reset_lr_each_frame:` the optimizer will adaptively change the learning rate, so this will thwart it.`seed:` pseudorandom seed.---`cutouts:` number of cutouts. Reduce this to use less VRAM at the cost of quality and speed.`cut_pow:` should be positive. Large values shrink cutouts, making the image more detailed, small values expand the cutouts, making it more coherent. $1$ is a good default. $3$ or higher can cause crashes.`cutout_border:` should be between $0$ and $1$. Allows cutouts to poke out over the edges of the image by this fraction of the image size, allowing better detail around the edges of the image. Set to $0$ to disable. $0.25$ is a good default.`border_mode:` how to fill cutouts that stick out over the edge of the image. Match with `infill_mode` for consistent infill.* clamp: move cutouts back onto image* mirror: reflect image over boundary* wrap: pull pixels from opposite side* black: fill with black * smear: sample closest pixel in image Step 1: SetupRun the cells in this section once for each runtime, or after a factory reset.
###Code
#@title 1.3 Install everything else
#@markdown Run this cell on a fresh runtime to install the libraries and modules.
from os.path import exists as path_exists
if path_exists('/content/drive/MyDrive/pytti_test'):
%cd /content/drive/MyDrive/pytti_test
def clone_reqs():
!git clone --recurse-submodules -j8 https://github.com/pytti-tools/pytti-core
def flush_reqs():
!rm -r pytti-core
# `logger` is used below but is not imported anywhere in this cell; assume loguru provides it,
# falling back to the standard library if loguru is not installed yet on this runtime.
try:
    from loguru import logger
except ImportError:
    import logging
    logger = logging.getLogger("pytti-notebook")
def install_everything():
if path_exists('./pytti-core'):
try:
flush_reqs()
except Exception as ex:
logger.warning(
str(ex)
)
logger.warning(
"A `pytti` folder already exists and could not be deleted."
"If you encounter problems, try deleting that folder and trying again."
"Please report this and any other issues here: "
"https://github.com/pytti-tools/pytti-notebook/issues/new",
exc_info=True)
!git clone --branch dev --recurse-submodules -j8 https://github.com/pytti-tools/pytti-core
!pip install kornia pytorch-lightning
!pip install jupyter gdown loguru einops PyGLM ftfy regex tqdm hydra-core exrex
!pip install seaborn adjustText bunch matplotlib-label-lines
!pip install ./pytti-core/vendor/AdaBins
!pip install ./pytti-core/vendor/CLIP
!pip install ./pytti-core/vendor/GMA
!pip install ./pytti-core/vendor/taming-transformers
!pip install ./pytti-core
!mkdir -p images_out
!mkdir -p videos
from pytti.Notebook import change_tqdm_color
change_tqdm_color()
try:
from adjustText import adjust_text
import pytti, torch
everything_installed = True
except ModuleNotFoundError:
everything_installed = False
force_install = False #@param{type:"boolean"}
if not everything_installed or force_install:
install_everything()
elif everything_installed:
from pytti.Notebook import change_tqdm_color
change_tqdm_color()
#@title 1.1 Mount google drive (optional)
#@markdown Mounting your drive is optional but recommended. If your drive is mounted, you can even restore
#@markdown a run after google randomly kicks you out.
from google.colab import drive
drive.mount('/content/drive', force_remount = True)
!mkdir -p /content/drive/MyDrive/pytti_test
%cd /content/drive/MyDrive/pytti_test
#@title 1.2 NVIDIA-SMI (optional)
#@markdown View information about your runtime GPU.
#@markdown Google will connect you to an industrial strength GPU, which is needed to run
#@markdown this notebook. You can also disable error checking on your GPU to get some
#@markdown more VRAM, at a marginal cost to stability. You will have to restart the runtime after
#@markdown disabling it.
enable_error_checking = False#@param {type:"boolean"}
if enable_error_checking:
!nvidia-smi
else:
!nvidia-smi
!nvidia-smi -i 0 -e 0
###Output
_____no_output_____
###Markdown
Step 2: Configure ExperimentEdit the parameters, or load saved parameters, then run the model.
###Code
#@title #2.1 Parameters:
#@markdown ---
from os.path import exists as path_exists
if path_exists('/content/drive/MyDrive/pytti_test'):
%cd /content/drive/MyDrive/pytti_test
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color, get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import glob, json, random, re, math
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
#these are used to make the defaults look pretty
model_default = None
random_seed = None
all = math.inf
derive_from_init_aspect_ratio = -1
def define_parameters():
locals_before = locals().copy()
#@markdown ###Prompts:
scenes = "deep space habitation ring made of glass | galactic nebula | wow! space is full of fractal creatures darting around everywhere like fireflies"#@param{type:"string"}
scene_prefix = "astrophotography #pixelart | image credit nasa | space full of cybernetic neon:3_galactic nebula | isometric pixelart by Sachin Teng | "#@param{type:"string"}
scene_suffix = "| satellite image:-1:-.95 | text:-1:-.95 | anime:-1:-.95 | watermark:-1:-.95 | backyard telescope:-1:-.95 | map:-1:-.95"#@param{type:"string"}
interpolation_steps = 0#@param{type:"number"}
steps_per_scene = 60100#@param{type:"raw"}
#@markdown ---
#@markdown ###Image Prompts:
direct_image_prompts = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Initial image:
init_image = ""#@param{type:"string"}
direct_init_weight = ""#@param{type:"string"}
semantic_init_weight = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Image:
#@markdown Use `image_model` to select how the model will encode the image
image_model = "Limited Palette" #@param ["VQGAN", "Limited Palette", "Unlimited Palette"]
#@markdown image_model | description | strengths | weaknesses
#@markdown --- | -- | -- | --
#@markdown VQGAN | classic VQGAN image | smooth images | limited datasets, slow, VRAM intensive
#@markdown Limited Palette | pytti differentiable palette | fast, VRAM scales with `palettes` | pixel images
#@markdown Unlimited Palette | simple RGB optimization | fast, VRAM efficient | pixel images
#@markdown The output image resolution will be `width` $\times$ `pixel_size` by `height` $\times$ `pixel_size` pixels.
#@markdown The easiest way to run out of VRAM is to select `image_model` VQGAN without reducing
#@markdown `pixel_size` to $1$.
#@markdown For `animation_mode: 3D` the minimum resolution is about 450 by 400 pixels.
width = 180#@param {type:"raw"}
height = 112#@param {type:"raw"}
pixel_size = 4#@param{type:"number"}
smoothing_weight = 0.02#@param{type:"number"}
#@markdown `VQGAN` specific settings:
vqgan_model = "sflckr" #@param ["imagenet", "coco", "wikiart", "sflckr", "openimages"]
#@markdown `Limited Palette` specific settings:
random_initial_palette = False#@param{type:"boolean"}
palette_size = 6#@param{type:"number"}
palettes = 9#@param{type:"number"}
gamma = 1#@param{type:"number"}
hdr_weight = 0.01#@param{type:"number"}
palette_normalization_weight = 0.2#@param{type:"number"}
show_palette = False #@param{type:"boolean"}
target_palette = ""#@param{type:"string"}
lock_palette = False #@param{type:"boolean"}
#@markdown ---
#@markdown ###Animation:
animation_mode = "3D" #@param ["off","2D", "3D", "Video Source"]
sampling_mode = "bicubic" #@param ["bilinear","nearest","bicubic"]
infill_mode = "wrap" #@param ["mirror","wrap","black","smear"]
pre_animation_steps = 100#@param{type:"number"}
steps_per_frame = 50#@param{type:"number"}
frames_per_second = 12#@param{type:"number"}
#@markdown ---
#@markdown ###Stabilization Weights:
direct_stabilization_weight = ""#@param{type:"string"}
semantic_stabilization_weight = ""#@param{type:"string"}
depth_stabilization_weight = ""#@param{type:"string"}
edge_stabilization_weight = ""#@param{type:"string"}
#@markdown `flow_stabilization_weight` is used for `animation_mode: 3D` and `Video Source`
flow_stabilization_weight = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Video Tracking:
#@markdown Only for `animation_mode: Video Source`.
video_path = ""#@param{type:"string"}
frame_stride = 1#@param{type:"number"}
reencode_each_frame = True #@param{type:"boolean"}
flow_long_term_samples = 1#@param{type:"number"}
#@markdown ---
#@markdown ###Image Motion:
translate_x = "-1700*sin(radians(1.5))" #@param{type:"string"}
translate_y = "0" #@param{type:"string"}
#@markdown `..._3d` is only used in 3D mode.
translate_z_3d = "(50+10*t)*sin(t/10*pi)**2" #@param{type:"string"}
#@markdown `rotate_3d` *must* be a `[w,x,y,z]` rotation (unit) quaternion. Use `rotate_3d: [1,0,0,0]` for no rotation.
#@markdown [Learn more about rotation quaternions here](https://eater.net/quaternions).
rotate_3d = "[cos(radians(1.5)), 0, -sin(radians(1.5))/sqrt(2), sin(radians(1.5))/sqrt(2)]"#@param{type:"string"}
#@markdown `..._2d` is only used in 2D mode.
rotate_2d = "5" #@param{type:"string"}
zoom_x_2d = "0" #@param{type:"string"}
zoom_y_2d = "0" #@param{type:"string"}
#@markdown 3D camera (only used in 3D mode):
lock_camera = True#@param{type:"boolean"}
field_of_view = 60#@param{type:"number"}
near_plane = 1#@param{type:"number"}
far_plane = 10000#@param{type:"number"}
#@markdown ---
#@markdown ###Output:
file_namespace = "default"#@param{type:"string"}
if file_namespace == '':
file_namespace = 'out'
allow_overwrite = False#@param{type:"boolean"}
base_name = file_namespace
if not allow_overwrite and path_exists(f'images_out/{file_namespace}'):
_, i = get_last_file(f'images_out/{file_namespace}',
f'^(?P<pre>{re.escape(file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
if i == 0:
print(f"WARNING: file_namespace {file_namespace} already has images from run 0")
elif i is not None:
print(f"WARNING: file_namespace {file_namespace} already has images from runs 0 through {i}")
elif glob.glob(f'images_out/{file_namespace}/{base_name}_*.png'):
print(f"WARNING: file_namespace {file_namespace} has images which will be overwritten")
try:
del i
del _
except NameError:
pass
del base_name
display_every = steps_per_frame #@param{type:"raw"}
clear_every = 0 #@param{type:"raw"}
display_scale = 1#@param{type:"number"}
save_every = steps_per_frame #@param{type:"raw"}
backups = 2**(flow_long_term_samples+1)+1#this is used for video transfer, so don't lower it if that's what you're doing#@param {type:"raw"}
show_graphs = False #@param{type:"boolean"}
approximate_vram_usage = False#@param{type:"boolean"}
#@markdown ---
#@markdown ###Model:
#@markdown Quality settings from Dribnet's CLIPIT (https://github.com/dribnet/clipit).
#@markdown Selecting too many will use up all your VRAM and slow down the model.
#@markdown I usually use ViTB32, ViTB16, and RN50 if I get an A100, otherwise I just use ViTB32.
#@markdown quality | CLIP models
#@markdown --- | --
#@markdown draft | ViTB32
#@markdown normal | ViTB32, ViTB16
#@markdown high | ViTB32, ViTB16, RN50
#@markdown best | ViTB32, ViTB16, RN50x4
ViTB32 = True #@param{type:"boolean"}
ViTB16 = False #@param{type:"boolean"}
RN50 = False #@param{type:"boolean"}
RN50x4 = False #@param{type:"boolean"}
#@markdown the default learning rate is `0.1` for all the VQGAN models
#@markdown except openimages, which is `0.15`. For the palette modes the
#@markdown default is `0.02`.
learning_rate = model_default#@param{type:"raw"}
reset_lr_each_frame = True#@param{type:"boolean"}
seed = random_seed #@param{type:"raw"}
#@markdown **Cutouts**:
#@markdown [Cutouts are how CLIP sees the image.](https://twitter.com/remi_durant/status/1460607677801897990)
cutouts = 40#@param{type:"number"}
cut_pow = 2#@param {type:"number"}
cutout_border = .25#@param {type:"number"}
#@markdown NOTE: prompt masks (`prompt:weight_[mask.png]`) will not work right on '`wrap`' or '`mirror`' mode.
border_mode = "clamp" #@param ["clamp","mirror","wrap","black","smear"]
if seed is None:
seed = random.randint(-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff)
locals_after = locals().copy()
for k in locals_before.keys():
del locals_after[k]
del locals_after['locals_before']
return locals_after
params = Bunch(define_parameters())
print("SETTINGS:")
print(json.dumps(params))
#@title 2.2 Load settings (optional)
#@markdown copy the `SETTINGS:` output from the **Parameters** cell (triple click to select the whole
#@markdown line from `{'scenes'...` to `}`) and paste them in a note to save them for later.
#@markdown Paste them here in the future to load those settings again. Running this cell with blank settings won't do anything.
from os.path import exists as path_exists
if path_exists('/content/drive/MyDrive/pytti_test'):
%cd /content/drive/MyDrive/pytti_test
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import *
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import json, random
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
settings = ""#@param{type:"string"}
#@markdown Check `random_seed` to overwrite the seed from the settings with a random one for some variation.
random_seed = False #@param{type:"boolean"}
if settings != '':
params = load_settings(settings, random_seed)
from pytti.workhorse import _main, TB_LOGDIR
%load_ext tensorboard
%tensorboard --logdir $TB_LOGDIR
#@title 2.3 Run it!
from omegaconf import OmegaConf
cfg = OmegaConf.create(dict(params))
# function wraps step 2.3 of the original p5 notebook
_main(cfg)
###Output
_____no_output_____
###Markdown
Step 3: Render video. You can download from the notebook, but it's faster to download from your drive.
###Code
#@title 3.1 Render video
from os.path import exists as path_exists
if path_exists('/content/drive/MyDrive/pytti_test'):
%cd /content/drive/MyDrive/pytti_test
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
import re
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest#@param{type:"raw"}
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
tqdm.write(f'Generating video from {params.file_namespace}/{base_name}_*.png')
all_frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
all_frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
print(f'found {len(all_frames)} frames matching images_out/{params.file_namespace}/{base_name}_*.png')
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f"videos/{base_name}.mp4"], stdin=PIPE)
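# ffmpeg reads the PNG frames from stdin ('-f image2pipe ... -i -') and encodes them with
# libx264 at the configured fps; '-crf 1' is near-lossless and '-preset veryslow' trades
# encoding time for a smaller file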
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.1 Render video (concatenate all runs)
from os.path import exists as path_exists
if path_exists('/content/drive/MyDrive/pytti_test'):
%cd /content/drive/MyDrive/pytti_test
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
import re
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
all_frames = []
for i in range(run_number+1):
base_name = params.file_namespace if i == 0 else (params.file_namespace+f"({i})")
frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
all_frames.extend(frames)
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f"videos/{base_name}.mp4"], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.2 Download the last exported video
from os.path import exists as path_exists
if path_exists('/content/drive/MyDrive/pytti_test'):
%cd /content/drive/MyDrive/pytti_test
try:
from pytti.Notebook import get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
try:
params
except NameError:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError("ERROR: please run parameters (step 2.1).")
import re
from google.colab import files
try:
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
filename = f'{base_name}.mp4'
except NameError:
filename, i = get_last_file(f'videos',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?\\.mp4)$')
if path_exists(f'videos/{filename}'):
files.download(f"videos/{filename}")
else:
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: video videos/{filename} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: video videos/{filename} does not exist.")
###Output
_____no_output_____
###Markdown
Batch Settings. Be advised: google may penalize you for sustained colab GPU utilization, even if you are a PRO+ subscriber. Tread lightly with batch runs, you don't wanna end up in GPU jail. FYI: the batch setting feature below may not work at present. We recommend using the CLI for batch jobs, see usage instructions at https://github.com/pytti-tools/pytti-core . The code below will probably be removed in the near future. Batch Settings. WARNING: If you use google colab (even with pro and pro+) GPUs for long enough, google will throttle your account. Be careful with batch runs if you don't want to get kicked.
###Code
#@title batch settings
# ngl... this probably doesn't work right now.
from os.path import exists as path_exists
if path_exists('/content/drive/MyDrive/pytti_test'):
%cd /content/drive/MyDrive/pytti_test
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color, save_batch
except ModuleNotFoundError:
if drive_mounted:
raise RuntimeError('ERROR: please run setup (step 1).')
else:
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1).')
change_tqdm_color()
try:
import exrex, random, glob
except ModuleNotFoundError:
if drive_mounted:
raise RuntimeError('ERROR: please run setup (step 1).')
else:
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1).')
from numpy import arange
import itertools
def all_matches(s):
return list(exrex.generate(s))
def dict_product(dictionary):
return [dict(zip(dictionary, x)) for x in itertools.product(*dictionary.values())]
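# illustrative example: every list below is one dimension and dict_product takes the cartesian
# product, e.g. dict_product({'a': [1, 2], 'b': ['x', 'y', 'z']}) yields 6 dicts, which is how
# the three 3-element scene lists below expand into 3*3*3 = 27 runs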
#these are used to make the defaults look pretty
model_default = None
random_seed = None
def define_parameters():
locals_before = locals().copy()
scenes = ["list","your","runs"] #@param{type:"raw"}
scene_prefix = ["all "," permutations "," are run "] #@param{type:"raw"}
scene_suffix = [" that", " makes", " 27" ] #@param{type:"raw"}
interpolation_steps = [0] #@param{type:"raw"}
steps_per_scene = [300] #@param{type:"raw"}
direct_image_prompts = [""] #@param{type:"raw"}
init_image = [""] #@param{type:"raw"}
direct_init_weight = [""] #@param{type:"raw"}
semantic_init_weight = [""] #@param{type:"raw"}
image_model = ["Limited Palette"] #@param{type:"raw"}
width = [180] #@param{type:"raw"}
height = [112] #@param{type:"raw"}
pixel_size = [4] #@param{type:"raw"}
smoothing_weight = [0.05] #@param{type:"raw"}
vqgan_model = ["sflckr"] #@param{type:"raw"}
random_initial_palette = [False] #@param{type:"raw"}
palette_size = [9] #@param{type:"raw"}
palettes = [8] #@param{type:"raw"}
gamma = [1] #@param{type:"raw"}
hdr_weight = [1.0] #@param{type:"raw"}
palette_normalization_weight = [1.0] #@param{type:"raw"}
show_palette = [False] #@param{type:"raw"}
target_palette = [""] #@param{type:"raw"}
lock_palette = [False] #@param{type:"raw"}
animation_mode = ["off"] #@param{type:"raw"}
sampling_mode = ["bicubic"] #@param{type:"raw"}
infill_mode = ["wrap"] #@param{type:"raw"}
pre_animation_steps = [100] #@param{type:"raw"}
steps_per_frame = [50] #@param{type:"raw"}
frames_per_second = [12] #@param{type:"raw"}
direct_stabilization_weight = [""] #@param{type:"raw"}
semantic_stabilization_weight = [""] #@param{type:"raw"}
depth_stabilization_weight = [""] #@param{type:"raw"}
edge_stabilization_weight = [""] #@param{type:"raw"}
flow_stabilization_weight = [""] #@param{type:"raw"}
video_path = [""] #@param{type:"raw"}
frame_stride = [1] #@param{type:"raw"}
reencode_each_frame = [True] #@param{type:"raw"}
flow_long_term_samples = [0] #@param{type:"raw"}
translate_x = ["0"] #@param{type:"raw"}
translate_y = ["0"] #@param{type:"raw"}
translate_z_3d = ["0"] #@param{type:"raw"}
rotate_3d = ["[1,0,0,0]"] #@param{type:"raw"}
rotate_2d = ["0"] #@param{type:"raw"}
zoom_x_2d = ["0"] #@param{type:"raw"}
zoom_y_2d = ["0"] #@param{type:"raw"}
lock_camera = [True] #@param{type:"raw"}
field_of_view = [60] #@param{type:"raw"}
near_plane = [1] #@param{type:"raw"}
far_plane = [10000] #@param{type:"raw"}
file_namespace = ["Basic Batch"] #@param{type:"raw"}
allow_overwrite = [False]
display_every = [50] #@param{type:"raw"}
clear_every = [0] #@param{type:"raw"}
display_scale = [1] #@param{type:"raw"}
save_every = [50] #@param{type:"raw"}
backups = [2] #@param{type:"raw"}
show_graphs = [False] #@param{type:"raw"}
approximate_vram_usage = [False] #@param{type:"raw"}
ViTB32 = [True] #@param{type:"raw"}
ViTB16 = [False] #@param{type:"raw"}
RN50 = [False] #@param{type:"raw"}
RN50x4 = [False] #@param{type:"raw"}
learning_rate = [None] #@param{type:"raw"}
reset_lr_each_frame = [True] #@param{type:"raw"}
seed = [None] #@param{type:"raw"}
cutouts = [40] #@param{type:"raw"}
cut_pow = [2] #@param{type:"raw"}
cutout_border = [0.25] #@param{type:"raw"}
border_mode = ["clamp"] #@param{type:"raw"}
locals_after = locals().copy()
for k in locals_before.keys():
del locals_after[k]
del locals_after['locals_before']
return locals_after
param_dict = define_parameters()
batch_list = dict_product(param_dict)
namespace = batch_list[0]['file_namespace']
if glob.glob(f'images_out/{namespace}/*.png'):
    print(f"WARNING: images_out/{namespace} contains images. Batch indices may not match filenames unless restoring.")
# @title Licensed under the MIT License
# Copyleft (c) 2021 Henry Rachootin
# Copyright (c) 2022 David Marx
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
###Output
_____no_output_____
###Markdown
PyTTI-Tools Colab Notebook. If you are using PyTTI-tools from a local jupyter server, you might have a better experience with the "_local" notebook: https://github.com/pytti-tools/pytti-notebook/blob/main/pyttitools-PYTTI_local.ipynb If you are planning to use google colab with the "local runtime" option: this is still the notebook you want. A very brief history of this notebook. The tools and techniques below were pioneered in 2021 by a diverse and distributed collection of amazingly talented ML practitioners, researchers, and artists. The short version of this history is that Katherine Crowson ([@RiversHaveWings](https://twitter.com/RiversHaveWings)) published a notebook inspired by work done by [@advadnoun](https://twitter.com/advadnoun). Katherine's notebook spawned a litany of variants, each with their own twist on the technique or adding a feature to someone else's work. Henry Rachootin ([@sportsracer48](https://twitter.com/sportsracer48)) collected several of the most interesting notebooks and stuck the important bits together with bubblegum and scotch tape. Thus was born PyTTI, and there was much rejoicing in sportsracer48's patreon, where it was shared in closed beta for several months. David Marx ([@DigThatData](https://twitter.com/DigThatData)) offered to help tidy up the mess, and sportsracer48 encouraged him to run wild with it. David's contributions snowballed into [PyTTI-Tools](https://github.com/pytti-tools), the engine this notebook sits on top of! If you would like to contribute, receive support, or even just suggest an improvement to the documentation, our issue tracker can be found here: https://github.com/pytti-tools/pytti-core/issues Instructions. Detailed documentation can be found here: https://pytti-tools.github.io/pytti-book/intro.html* Syntax for text prompts and scenes: https://pytti-tools.github.io/pytti-book/SceneDSL.html* Descriptions of all settings: https://pytti-tools.github.io/pytti-book/Settings.html Step 1. Setup the environment
###Code
# @title 1.1 Set up storage locations { display-mode: "form" }
drive_mounted = False
gdrive_fpath = '.'
#@markdown Mounting your google drive is optional but recommended. If you mount your drive, you can even
#@markdown recover your work after google randomly kicks you out.
from pathlib import Path
mount_gdrive = False # @param{type:"boolean"}
if mount_gdrive and not drive_mounted:
from google.colab import drive
gdrive_mountpoint = '/content/drive/' #@param{type:"string"}
gdrive_subdirectory = 'MyDrive/pytti_tools' #@param{type:"string"}
gdrive_fpath = str(Path(gdrive_mountpoint) / gdrive_subdirectory)
try:
drive.mount(gdrive_mountpoint, force_remount = True)
!mkdir -p {gdrive_fpath}
%cd {gdrive_fpath}
drive_mounted = True
except OSError:
print(
"\n\n-----[PYTTI-TOOLS]-------\n\n"
"If you received a scary OSError and your drive"
" was already mounted, ignore it."
"\n\n-----[PYTTI-TOOLS]-------\n\n"
)
raise
# @title 1.2 Check GPU { display-mode: "form"}
# @markdown Running this cell just gives you information about the GPU attached to your session.
#https://developer.download.nvidia.com/compute/DCGM/docs/nvidia-smi-367.38.pdf
#!nvidia-smi --query-gpu=timestamp,name,utilization.gpu,utilization.memory,memory.free,memory.used --format=csv
import pandas as pd
import subprocess
outv = subprocess.run(['nvidia-smi', '--query-gpu=timestamp,name,utilization.gpu,utilization.memory,memory.free,memory.used', '--format=csv'], stdout=subprocess.PIPE).stdout.decode('utf-8')
header, rec = outv.split('\n')[:-1]
pd.DataFrame({k:v for k,v in zip(header.split(','), rec.split(','))}, index=[0]).T
%%capture
#@title 1.3 Install everything else
#@markdown Run this cell on a fresh runtime to install the libraries and modules.
#@markdown This may take a few minutes.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
def install_pip_deps():
!pip install kornia pytorch-lightning transformers
!pip install jupyter loguru einops PyGLM ftfy regex tqdm hydra-core exrex
!pip install seaborn adjustText bunch matplotlib-label-lines
!pip install --upgrade gdown
def install_gh_deps():
# not sure the "upgrade" arg does anything here, just feels like a good idea
!pip install --upgrade git+https://github.com/pytti-tools/AdaBins.git
!pip install --upgrade git+https://github.com/pytti-tools/GMA.git
!pip install --upgrade git+https://github.com/pytti-tools/taming-transformers.git
!pip install --upgrade git+https://github.com/openai/CLIP.git
!pip install --upgrade git+https://github.com/pytti-tools/pytti-core.git
try:
    import pytti
except:
    install_pip_deps()
    install_gh_deps()
# Preload unopinionated defaults
# makes it so users don't have to run every setup cell
from omegaconf import OmegaConf
!python -m pytti.warmup
path_to_default = 'config/default.yaml'
params = OmegaConf.load(path_to_default)
# setup for step 2
import math
model_default = None
random_seed = None
seed = random_seed
all = math.inf
derive_from_init_aspect_ratio = -1
########################
try:
import mmc
except:
# install mmc
!git clone https://github.com/dmarx/Multi-Modal-Comparators
!pip install poetry
!cd Multi-Modal-Comparators; poetry build
!cd Multi-Modal-Comparators; pip install dist/mmc*.whl
# optional final step:
#poe napm_installs
!python Multi-Modal-Comparators/src/mmc/napm_installs/__init__.py
# suppress mmc warmup outputs
import mmc.loaders
###Output
_____no_output_____
###Markdown
Step 2: Configure Experiment. Edit the parameters, or load saved parameters, then run the model.* https://pytti-tools.github.io/pytti-book/SceneDSL.html* https://pytti-tools.github.io/pytti-book/Settings.html To input previously used settings or settings generated using tools such as https://pyttipanna.xyz/ , jump down to cell 4.1
###Code
# @title Prompt Settings { display-mode: 'form' }
scenes = "" # @param{type:"string"}
scene_suffix = "" # @param{type:"string"}
scene_prefix = "" # @param{type:"string"}
params.scenes = scenes
params.scene_prefix = scene_prefix
params.scene_suffix = scene_suffix
direct_image_prompts = "" # @param{type:"string"}
init_image = "" # @param{type:"string"}
direct_init_weight = "" # @param{type:"string"}
semantic_init_weight = "" # @param{type:"string"}
params.direct_image_prompts = direct_image_prompts
params.init_image = init_image
params.direct_init_weight = direct_init_weight
params.semantic_init_weight = semantic_init_weight
interpolation_steps = 0 # @param{type:"number"}
steps_per_scene = 50 # @param{type:"raw"}
steps_per_frame = 50 # @param{type:"number"}
save_every = steps_per_frame # @param{type:"raw"}
params.interpolation_steps = interpolation_steps
params.steps_per_scene = steps_per_scene
params.steps_per_frame = steps_per_frame
params.save_every = save_every
# @title Misc Run Initialization { display-mode: 'form' }
import random
#@markdown Check this box to pick up where you left off from a previous run, e.g. if the google colab runtime timed out
resume = False #@param{type:"boolean"}
params.resume = resume
seed = random_seed #@param{type:"raw"}
params.seed = seed
if params.seed is None:
params.seed = random.randint(-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff)
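# this range spans both the signed and unsigned 64-bit integer ranges (-2**63 through 2**64 - 1)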
###Output
_____no_output_____
###Markdown
Image Settings
###Code
# @title General Image Settings { display-mode: 'form' }
#@markdown Use `image_model` to select how the model will encode the image
image_model = "VQGAN" #@param ["VQGAN", "Limited Palette", "Unlimited Palette"]
params.image_model = image_model
#@markdown image_model | description | strengths | weaknesses
#@markdown --- | -- | -- | --
#@markdown VQGAN | classic VQGAN image | smooth images | limited datasets, slow, VRAM intensive
#@markdown Limited Palette | pytti differentiable palette | fast, VRAM scales with `palettes` | pixel images
#@markdown Unlimited Palette | simple RGB optimization | fast, VRAM efficient | pixel images
vqgan_model = "imagenet" #@param ["imagenet", "coco", "wikiart", "sflckr", "openimages"]
params.vqgan_model = vqgan_model
#@markdown The output image resolution will be `width` $\times$ `pixel_size` by height $\times$ `pixel_size` pixels.
#@markdown The easiest way to run out of VRAM is to select `image_model` VQGAN without reducing
#@markdown `pixel_size` to $1$.
#@markdown For `animation_mode: 3D` the minimum resolution is about 450 by 400 pixels.
width = 180 # @param {type:"raw"}
height = 112 # @param {type:"raw"}
params.width = width
params.height = height
#@markdown the default learning rate is `0.1` for all the VQGAN models
#@markdown except openimages, which is `0.15`. For the palette modes the
#@markdown default is `0.02`.
learning_rate = model_default #@param{type:"raw"}
reset_lr_each_frame = True #@param{type:"boolean"}
params.learning_rate = learning_rate
params.reset_lr_each_frame = reset_lr_each_frame
# @title Advanced Color and Appearance options { display-mode: 'form', run: 'auto' }
pixel_size = 4#@param{type:"number"}
smoothing_weight = 0.02#@param{type:"number"}
params.pixel_size = pixel_size
params.smoothing_weight = smoothing_weight
#@markdown "Limited Palette" specific settings:
random_initial_palette = False#@param{type:"boolean"}
palette_size = 6#@param{type:"number"}
palettes = 9#@param{type:"number"}
params.random_initial_palette = random_initial_palette
params.palette_size = palette_size
params.palettes = palettes
gamma = 1#@param{type:"number"}
hdr_weight = 0.01#@param{type:"number"}
palette_normalization_weight = 0.2#@param{type:"number"}
target_palette = ""#@param{type:"string"}
lock_palette = False #@param{type:"boolean"}
show_palette = False #@param{type:"boolean"}
params.gamma = gamma
params.hdr_weight = hdr_weight
params.palette_normalization_weight = palette_normalization_weight
params.target_palette = target_palette
params.lock_palette = lock_palette
params.show_palette = show_palette
###Output
_____no_output_____
###Markdown
Perceptor Settings
###Code
# @title Perceptor Models { display-mode: 'form', run: 'auto' }
#@markdown Quality settings from Dribnet's CLIPIT (https://github.com/dribnet/clipit).
#@markdown Selecting too many will use up all your VRAM and slow down the model.
#@markdown I usually use ViTB32, ViTB16, and RN50 if I get an A100, otherwise I just use ViTB32.
#@markdown quality | CLIP models
#@markdown --- | --
#@markdown draft | ViTB32
#@markdown normal | ViTB32, ViTB16
#@markdown high | ViTB32, ViTB16, RN50
#@markdown best | ViTB32, ViTB16, RN50x4
# To do: change this to a multi-select
ViTB32 = True #@param{type:"boolean"}
ViTB16 = False #@param{type:"boolean"}
ViTL14 = False #@param{type:"boolean"}
ViTL14_336px = False #@param{type:"boolean"}
RN50 = False #@param{type:"boolean"}
RN101 = False #@param{type:"boolean"}
RN50x4 = False #@param{type:"boolean"}
RN50x16 = False #@param{type:"boolean"}
RN50x64 = False #@param{type:"boolean"}
params.ViTB32 = ViTB32
params.ViTB16 = ViTB16
params.ViTL14 = ViTL14
params.ViTL14_336px = ViTL14_336px
params.RN50 = RN50
params.RN101 = RN101
params.RN50x4 = RN50x4
params.RN50x16 = RN50x16
params.RN50x64 = RN50x64
# @title MMC Perceptors { display-mode: 'form' }
#@markdown This cell loads perceptor models via https://github.com/dmarx/multi-modal-comparators. Some model comparisons [here](https://t.co/iShJpm5GjL)
# @markdown Select up to three models
# @markdown Model 1
model1 = "" # @param ["[clip - openai - RN50]","[clip - openai - RN101]","[clip - openai - RN50x4]","[clip - openai - RN50x16]","[clip - openai - RN50x64]","[clip - openai - ViT-B/32]","[clip - openai - ViT-B/16]","[clip - openai - ViT-L/14]","[clip - openai - ViT-L/14@336px]","[clip - mlfoundations - RN50--openai]","[clip - mlfoundations - RN50--yfcc15m]","[clip - mlfoundations - RN50--cc12m]","[clip - mlfoundations - RN50-quickgelu--openai]","[clip - mlfoundations - RN50-quickgelu--yfcc15m]","[clip - mlfoundations - RN50-quickgelu--cc12m]","[clip - mlfoundations - RN101--openai]","[clip - mlfoundations - RN101--yfcc15m]","[clip - mlfoundations - RN101-quickgelu--openai]","[clip - mlfoundations - RN101-quickgelu--yfcc15m]","[clip - mlfoundations - RN50x4--openai]","[clip - mlfoundations - RN50x16--openai]","[clip - mlfoundations - ViT-B-32--openai]","[clip - mlfoundations - ViT-B-32--laion400m_e31]","[clip - mlfoundations - ViT-B-32--laion400m_e32]","[clip - mlfoundations - ViT-B-32--laion400m_avg]","[clip - mlfoundations - ViT-B-32-quickgelu--openai]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_e31]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_e32]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_avg]","[clip - mlfoundations - ViT-B-16--openai]","[clip - mlfoundations - ViT-B-16--laion400m_e31]","[clip - mlfoundations - ViT-B-16--laion400m_e32]","[clip - mlfoundations - ViT-L-14--openai]","[clip - mlfoundations - ViT-L-14-336--openai]","[clip - sbert - ViT-B-32-multilingual-v1]","[clip - sajjjadayobi - clipfa]","[cloob - crowsonkb - cloob_laion_400m_vit_b_16_16_epochs]","[cloob - crowsonkb - cloob_laion_400m_vit_b_16_32_epochs]","[clip - navervision - kelip_ViT-B/32]","[clip - facebookresearch - clip_small_25ep]","[simclr - facebookresearch - simclr_small_25ep]","[slip - facebookresearch - slip_small_25ep]","[slip - facebookresearch - slip_small_50ep]","[slip - facebookresearch - slip_small_100ep]","[clip - facebookresearch - clip_base_25ep]","[simclr - facebookresearch - simclr_base_25ep]","[slip - facebookresearch - slip_base_25ep]","[slip - facebookresearch - slip_base_50ep]","[slip - facebookresearch - slip_base_100ep]","[clip - facebookresearch - clip_large_25ep]","[simclr - facebookresearch - simclr_large_25ep]","[slip - facebookresearch - slip_large_25ep]","[slip - facebookresearch - slip_large_50ep]","[slip - facebookresearch - slip_large_100ep]","[clip - facebookresearch - clip_base_cc3m_40ep]","[slip - facebookresearch - slip_base_cc3m_40ep]","[slip - facebookresearch - slip_base_cc12m_35ep]","[clip - facebookresearch - clip_base_cc12m_35ep]"] {allow-input: true}
model2 = "" # @param ["[clip - openai - RN50]","[clip - openai - RN101]","[clip - openai - RN50x4]","[clip - openai - RN50x16]","[clip - openai - RN50x64]","[clip - openai - ViT-B/32]","[clip - openai - ViT-B/16]","[clip - openai - ViT-L/14]","[clip - openai - ViT-L/14@336px]","[clip - mlfoundations - RN50--openai]","[clip - mlfoundations - RN50--yfcc15m]","[clip - mlfoundations - RN50--cc12m]","[clip - mlfoundations - RN50-quickgelu--openai]","[clip - mlfoundations - RN50-quickgelu--yfcc15m]","[clip - mlfoundations - RN50-quickgelu--cc12m]","[clip - mlfoundations - RN101--openai]","[clip - mlfoundations - RN101--yfcc15m]","[clip - mlfoundations - RN101-quickgelu--openai]","[clip - mlfoundations - RN101-quickgelu--yfcc15m]","[clip - mlfoundations - RN50x4--openai]","[clip - mlfoundations - RN50x16--openai]","[clip - mlfoundations - ViT-B-32--openai]","[clip - mlfoundations - ViT-B-32--laion400m_e31]","[clip - mlfoundations - ViT-B-32--laion400m_e32]","[clip - mlfoundations - ViT-B-32--laion400m_avg]","[clip - mlfoundations - ViT-B-32-quickgelu--openai]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_e31]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_e32]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_avg]","[clip - mlfoundations - ViT-B-16--openai]","[clip - mlfoundations - ViT-B-16--laion400m_e31]","[clip - mlfoundations - ViT-B-16--laion400m_e32]","[clip - mlfoundations - ViT-L-14--openai]","[clip - mlfoundations - ViT-L-14-336--openai]","[clip - sbert - ViT-B-32-multilingual-v1]","[clip - sajjjadayobi - clipfa]","[cloob - crowsonkb - cloob_laion_400m_vit_b_16_16_epochs]","[cloob - crowsonkb - cloob_laion_400m_vit_b_16_32_epochs]","[clip - navervision - kelip_ViT-B/32]","[clip - facebookresearch - clip_small_25ep]","[simclr - facebookresearch - simclr_small_25ep]","[slip - facebookresearch - slip_small_25ep]","[slip - facebookresearch - slip_small_50ep]","[slip - facebookresearch - slip_small_100ep]","[clip - facebookresearch - clip_base_25ep]","[simclr - facebookresearch - simclr_base_25ep]","[slip - facebookresearch - slip_base_25ep]","[slip - facebookresearch - slip_base_50ep]","[slip - facebookresearch - slip_base_100ep]","[clip - facebookresearch - clip_large_25ep]","[simclr - facebookresearch - simclr_large_25ep]","[slip - facebookresearch - slip_large_25ep]","[slip - facebookresearch - slip_large_50ep]","[slip - facebookresearch - slip_large_100ep]","[clip - facebookresearch - clip_base_cc3m_40ep]","[slip - facebookresearch - slip_base_cc3m_40ep]","[slip - facebookresearch - slip_base_cc12m_35ep]","[clip - facebookresearch - clip_base_cc12m_35ep]"] {allow-input: true}
model3 = "" # @param ["[clip - openai - RN50]","[clip - openai - RN101]","[clip - openai - RN50x4]","[clip - openai - RN50x16]","[clip - openai - RN50x64]","[clip - openai - ViT-B/32]","[clip - openai - ViT-B/16]","[clip - openai - ViT-L/14]","[clip - openai - ViT-L/14@336px]","[clip - mlfoundations - RN50--openai]","[clip - mlfoundations - RN50--yfcc15m]","[clip - mlfoundations - RN50--cc12m]","[clip - mlfoundations - RN50-quickgelu--openai]","[clip - mlfoundations - RN50-quickgelu--yfcc15m]","[clip - mlfoundations - RN50-quickgelu--cc12m]","[clip - mlfoundations - RN101--openai]","[clip - mlfoundations - RN101--yfcc15m]","[clip - mlfoundations - RN101-quickgelu--openai]","[clip - mlfoundations - RN101-quickgelu--yfcc15m]","[clip - mlfoundations - RN50x4--openai]","[clip - mlfoundations - RN50x16--openai]","[clip - mlfoundations - ViT-B-32--openai]","[clip - mlfoundations - ViT-B-32--laion400m_e31]","[clip - mlfoundations - ViT-B-32--laion400m_e32]","[clip - mlfoundations - ViT-B-32--laion400m_avg]","[clip - mlfoundations - ViT-B-32-quickgelu--openai]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_e31]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_e32]","[clip - mlfoundations - ViT-B-32-quickgelu--laion400m_avg]","[clip - mlfoundations - ViT-B-16--openai]","[clip - mlfoundations - ViT-B-16--laion400m_e31]","[clip - mlfoundations - ViT-B-16--laion400m_e32]","[clip - mlfoundations - ViT-L-14--openai]","[clip - mlfoundations - ViT-L-14-336--openai]","[clip - sbert - ViT-B-32-multilingual-v1]","[clip - sajjjadayobi - clipfa]","[cloob - crowsonkb - cloob_laion_400m_vit_b_16_16_epochs]","[cloob - crowsonkb - cloob_laion_400m_vit_b_16_32_epochs]","[clip - navervision - kelip_ViT-B/32]","[clip - facebookresearch - clip_small_25ep]","[simclr - facebookresearch - simclr_small_25ep]","[slip - facebookresearch - slip_small_25ep]","[slip - facebookresearch - slip_small_50ep]","[slip - facebookresearch - slip_small_100ep]","[clip - facebookresearch - clip_base_25ep]","[simclr - facebookresearch - simclr_base_25ep]","[slip - facebookresearch - slip_base_25ep]","[slip - facebookresearch - slip_base_50ep]","[slip - facebookresearch - slip_base_100ep]","[clip - facebookresearch - clip_large_25ep]","[simclr - facebookresearch - simclr_large_25ep]","[slip - facebookresearch - slip_large_25ep]","[slip - facebookresearch - slip_large_50ep]","[slip - facebookresearch - slip_large_100ep]","[clip - facebookresearch - clip_base_cc3m_40ep]","[slip - facebookresearch - slip_base_cc3m_40ep]","[slip - facebookresearch - slip_base_cc12m_35ep]","[clip - facebookresearch - clip_base_cc12m_35ep]"] {allow-input: true}
##########
params.use_mmc = False
mmc_models = []
for model_key in (model1, model2, model3):
if not model_key:
continue
arch, pub, m_id = model_key[1:-1].split(' - ')
params.use_mmc = True
mmc_models.append({
'architecture':arch,
'publisher':pub,
'id':m_id,
})
params.mmc_models = mmc_models
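# for example, selecting "[clip - openai - ViT-B/32]" above is parsed into
# {'architecture': 'clip', 'publisher': 'openai', 'id': 'ViT-B/32'}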
# @title Cutouts { display-mode: 'form', run: 'auto' }
#@markdown [Cutouts are how CLIP sees the image.](https://twitter.com/remi_durant/status/1460607677801897990)
cutouts = 40#@param{type:"number"}
cut_pow = 2#@param {type:"number"}
cutout_border = .25#@param {type:"number"}
gradient_accumulation_steps = 1 #@param {type:"number"}
params.cutouts = cutouts
params.cut_pow = cut_pow
params.cutout_border = cutout_border
params.gradient_accumulation_steps = gradient_accumulation_steps
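# each step CLIP scores `cutouts` random crops of the image; in the CLIP-guidance lineage this
# notebook builds on, a larger cut_pow skews the crops smaller and cutout_border lets crops
# extend past the frame edge. gradient_accumulation_steps appears to process the cutout batch
# in smaller chunks, trading speed for lower peak VRAM.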
###Output
_____no_output_____
###Markdown
Animation Settings
###Code
# @title General Animation Settings { display-mode: 'form', run: 'auto' }
animation_mode = "off" #@param ["off","2D", "3D", "Video Source"]
pre_animation_steps = 0 # @param{type:"number"}
frames_per_second = 12 # @param{type:"number"}
params.animation_mode = animation_mode
params.pre_animation_steps = pre_animation_steps
params.frames_per_second = frames_per_second
# @markdown NOTE: prompt masks (`prompt:weight_[mask.png]`) may not work correctly on '`wrap`' or '`mirror`' border mode.
border_mode = "clamp" # @param ["clamp","mirror","wrap","black","smear"]
sampling_mode = "bicubic" #@param ["bilinear","nearest","bicubic"]
infill_mode = "wrap" #@param ["mirror","wrap","black","smear"]
params.border_mode = border_mode
params.sampling_mode = sampling_mode
params.infill_mode = infill_mode
# @title Video Input { display-mode: 'form', run: 'auto' }
video_path = ""# @param{type:"string"}
frame_stride = 1 #@param{type:"number"}
reencode_each_frame = False #@param{type:"boolean"}
params.video_path = video_path
params.frame_stride = frame_stride
params.reencode_each_frame = reencode_each_frame
# @title Audio Input { display-mode: 'form', run: 'auto' }
input_audio = ""# @param{type:"string"}
input_audio_offset = 0 #@param{type:"number"}
# @markdown Bandpass filter specification
variable_name = 'fAudio'
f_center = 1000 # @param{type:"number"}
f_width = 1990 # @param{type:"number"}
order = 5 # @param{type:"number"}
if input_audio:
params.input_audio = input_audio
params.input_audio_offset = input_audio_offset
params.input_audio_filters = [{
'variable_name':variable_name,
'f_center':f_center,
'f_width':f_width,
'order':order
}]
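# rough sketch of the intent (assuming pytti-core's audio reactivity): the audio track is
# band-passed around f_center with bandwidth f_width (a filter of the given order), and the
# resulting signal is exposed to the animation expressions below under `variable_name`
# (here `fAudio`), e.g. translate_x = "10*fAudio"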
# @title Image Motion Settings { display-mode: 'form', run: 'auto' }
# @markdown settings whose names end in `_2d` or `_3d` are specific to those animation modes
# @markdown `rotate_3d` *must* be a `[w,x,y,z]` rotation (unit) quaternion. Use `rotate_3d: [1,0,0,0]` for no rotation.
# @markdown [Learn more about rotation quaternions here](https://eater.net/quaternions).
translate_x = "0" # @param{type:"string"}
translate_y = "0" # @param{type:"string"}
translate_z_3d = "0" # @param{type:"string"}
rotate_3d = "[1,0,0,0]" # @param{type:"string"}
rotate_2d = "0" # @param{type:"string"}
zoom_x_2d = "0" # @param{type:"string"}
zoom_y_2d = "0" # @param{type:"string"}
params.translate_x = translate_x
params.translate_y = translate_y
params.translate_z_3d = translate_z_3d
params.rotate_3d = rotate_3d
params.rotate_2d = rotate_2d
params.zoom_x_2d = zoom_x_2d
params.zoom_y_2d = zoom_y_2d
#@markdown 3D camera (only used in 3D mode):
lock_camera = True # @param{type:"boolean"}
field_of_view = 60 # @param{type:"number"}
near_plane = 1 # @param{type:"number"}
far_plane = 10000 # @param{type:"number"}
params.lock_camera = lock_camera
params.field_of_view = field_of_view
params.near_plane = near_plane
params.far_plane = far_plane
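# the motion settings above accept expressions that are re-evaluated every frame; for example
# the older notebook defaults used translate_x = "-1700*sin(radians(1.5))" and
# translate_z_3d = "(50+10*t)*sin(t/10*pi)**2", where t advances as the animation progresses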
# @title Stabilization Weights and Perspective { display-mode: 'form', run: 'auto' }
# @markdown `flow_stabilization_weight` is used for `animation_mode: 3D` and `Video Source`
direct_stabilization_weight = "" # @param{type:"string"}
semantic_stabilization_weight = "" # @param{type:"string"}
depth_stabilization_weight = "" # @param{type:"string"}
edge_stabilization_weight = "" # @param{type:"string"}
params.direct_stabilization_weight = direct_stabilization_weight
params.semantic_stabilization_weight = semantic_stabilization_weight
params.depth_stabilization_weight = depth_stabilization_weight
params.edge_stabilization_weight = edge_stabilization_weight
flow_stabilization_weight = "" # @param{type:"string"}
flow_long_term_samples = 1 # @param{type:"number"}
params.flow_stabilization_weight = flow_stabilization_weight
params.flow_long_term_samples = flow_long_term_samples
###Output
_____no_output_____
###Markdown
Output Settings
###Code
# @title Output and Storage Location { display-mode: 'form', run: 'auto' }
# should I move google drive stuff here?
models_parent_dir = '.' #@param{type:"string"}
params.models_parent_dir = models_parent_dir
file_namespace = "default" #@param{type:"string"}
params.file_namespace = file_namespace
if params.file_namespace == '':
params.file_namespace = 'out'
allow_overwrite = False #@param{type:"boolean"}
base_name = params.file_namespace
params.allow_overwrite = allow_overwrite
params.base_name = base_name
#@markdown `backups` is used for video transfer, so don't lower it if that's what you're doing
backups = 2**(params.flow_long_term_samples+1)+1 #@param {type:"raw"}
params.backups = backups
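# sketch of where this default comes from: with flow_long_term_samples = N, long-term optical
# flow appears to look back at exponentially spaced earlier frames, so keeping the newest
# 2**(N+1)+1 saved frames ensures all of them are still available for video transfer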
from pytti.Notebook import get_last_file
import glob
import re
# to do: move this logic into pytti-core
if not params.allow_overwrite and path_exists(f'images_out/{params.file_namespace}'):
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
if i == 0:
print(f"WARNING: file_namespace {params.file_namespace} already has images from run 0")
elif i is not None:
print(f"WARNING: file_namespace {params.file_namespace} already has images from runs 0 through {i}")
elif glob.glob(f'images_out/{params.file_namespace}/{params.base_name}_*.png'):
print(f"WARNING: file_namespace {params.file_namespace} has images which will be overwritten")
# @title Experiment Monitoring { display-mode: 'form', run: 'auto' }
display_every = steps_per_frame # @param{type:"raw"}
clear_every = 0 # @param{type:"raw"}
display_scale = 1 # @param{type:"number"}
params.display_every = display_every
params.clear_every = clear_every
params.display_scale = display_scale
show_graphs = False # @param{type:"boolean"}
use_tensorboard = False #@param{type:"boolean"}
params.show_graphs = show_graphs
params.use_tensorboard = use_tensorboard
# needs to be populated or will fail validation
params.approximate_vram_usage=False
print("SETTINGS:")
print(OmegaConf.to_container(params))
###Output
_____no_output_____
###Markdown
2.3 Run it!
###Code
#@markdown Execute this cell to start image generation
from pytti.workhorse import _main as render_frames
import random
if (seed is None) or (params.seed is None):
params.seed = random.randint(-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff)
render_frames(params)
###Output
_____no_output_____
###Markdown
Step 3: Render video. You can download from the notebook, but it's faster to download from your drive.
###Code
#@title 3.1 Render video
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
import re
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest#@param{type:"raw"}
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
tqdm.write(f'Generating video from {params.file_namespace}/{base_name}_*.png')
all_frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
all_frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
print(f'found {len(all_frames)} frames matching images_out/{params.file_namespace}/{base_name}_*.png')
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
cmd_in = ['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-']
cmd_out = ['-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f'videos/{base_name}.mp4']
# skip audio muxing when no audio path was configured (the audio cell only sets this for a non-empty input_audio)
if params.get("input_audio"):
cmd_in += ['-i', str(params.input_audio), '-acodec', 'libmp3lame']
cmd = cmd_in + cmd_out
p = Popen(cmd, stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.2 Download the last exported video
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
try:
from pytti.Notebook import get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
try:
params
except NameError:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError("ERROR: please run parameters (step 2.1).")
import re
from google.colab import files
try:
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
filename = f'{base_name}.mp4'
except NameError:
filename, i = get_last_file(f'videos',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?\\.mp4)$')
if path_exists(f'videos/{filename}'):
files.download(f"videos/{filename}")
else:
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: video videos/{filename} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: video videos/{filename} does not exist.")
###Output
_____no_output_____
###Markdown
Sec. 4: Appendix
###Code
#@title 4.1 Load settings (optional)
#@markdown copy the `SETTINGS:` output from the **Parameters** cell (triple click to select the whole
#@markdown line from `{'scenes'...` to `}`) and paste them in a note to save them for later.
#@markdown Paste them here in the future to load those settings again. Running this cell with blank settings won't do anything.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import *
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import json, random
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
settings = ""#@param{type:"string"}
#@markdown Check `random_seed` to overwrite the seed from the settings with a random one for some variation.
random_seed = False #@param{type:"boolean"}
if settings != '':
params = load_settings(settings, random_seed)
###Output
_____no_output_____
###Markdown
PyTTI-Tools Colab Notebook. If you are using PyTTI-tools from a local jupyter server, you might have a better experience with the "_local" notebook: https://github.com/pytti-tools/pytti-notebook/blob/main/pyttitools-PYTTI_local.ipynb If you are planning to use google colab with the "local runtime" option: this is still the notebook you want. A very brief history of this notebook. The tools and techniques below were pioneered in 2021 by a diverse and distributed collection of amazingly talented ML practitioners, researchers, and artists. The short version of this history is that Katherine Crowson ([@RiversHaveWings](https://twitter.com/RiversHaveWings)) published a notebook inspired by work done by [@advadnoun](https://twitter.com/advadnoun). Katherine's notebook spawned a litany of variants, each with their own twist on the technique or adding a feature to someone else's work. Henry Rachootin ([@sportsracer48](https://twitter.com/sportsracer48)) collected several of the most interesting notebooks and stuck the important bits together with bubblegum and scotch tape. Thus was born PyTTI, and there was much rejoicing in sportsracer48's patreon, where it was shared in closed beta for several months. David Marx ([@DigThatData](https://twitter.com/DigThatData)) offered to help tidy up the mess, and sportsracer48 encouraged him to run wild with it. David's contributions snowballed into [PyTTI-Tools](https://github.com/pytti-tools), the engine this notebook sits on top of! If you would like to contribute, receive support, or even just suggest an improvement to the documentation, our issue tracker can be found here: https://github.com/pytti-tools/pytti-core/issues Instructions. Detailed documentation can be found here: https://pytti-tools.github.io/pytti-book/intro.html* Syntax for text prompts and scenes: https://pytti-tools.github.io/pytti-book/SceneDSL.html* Descriptions of all settings: https://pytti-tools.github.io/pytti-book/Settings.html Step 1: Setup. Run the cells in this section once for each runtime, or after a factory reset.
###Code
# This cell should only be run once
drive_mounted = False
gdrive_fpath = '.'
#@title 1.1 Mount google drive (optional)
#@markdown Mounting your drive is optional but recommended. If you mount your drive, you can even
#@markdown recover your work after google randomly kicks you out.
from pathlib import Path
mount_gdrive = False # @param{type:"boolean"}
if mount_gdrive and not drive_mounted:
from google.colab import drive
gdrive_mountpoint = '/content/drive/' #@param{type:"string"}
gdrive_subdirectory = 'MyDrive/pytti_tools' #@param{type:"string"}
gdrive_fpath = str(Path(gdrive_mountpoint) / gdrive_subdirectory)
try:
drive.mount(gdrive_mountpoint, force_remount = True)
!mkdir -p {gdrive_fpath}
%cd {gdrive_fpath}
drive_mounted = True
except OSError:
print(
"\n\n-----[PYTTI-TOOLS]-------\n\n"
"If you received a scary OSError and your drive"
" was already mounted, ignore it."
"\n\n-----[PYTTI-TOOLS]-------\n\n"
)
raise
#@title 1.2 NVIDIA-SMI (optional)
#@markdown View information about your runtime GPU.
#@markdown Google will connect you to an industrial strength GPU, which is needed to run
#@markdown this notebook. You can also disable error checking on your GPU to get some
#@markdown more VRAM, at a marginal cost to stability. You will have to restart the runtime after
#@markdown disabling it.
enable_error_checking = False#@param {type:"boolean"}
if enable_error_checking:
!nvidia-smi
else:
!nvidia-smi
!nvidia-smi -i 0 -e 0
#@title 1.3 Install everything else
#@markdown Run this cell on a fresh runtime to install the libraries and modules.
from os.path import exists as path_exists
from loguru import logger
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
def flush_reqs():
!rm -r pytti-core
def install_everything():
if path_exists('./pytti-core'):
try:
flush_reqs()
except Exception as ex:
logger.warning(
str(ex)
)
logger.warning(
"A `pytti` folder already exists and could not be deleted."
"If you encounter problems, try deleting that folder and trying again."
"Please report this and any other issues here: "
"https://github.com/pytti-tools/pytti-notebook/issues/new",
exc_info=True)
!git clone --recurse-submodules -j8 https://github.com/wizardhead/pytti-core
!pip install kornia pytorch-lightning transformers
!pip install jupyter loguru einops PyGLM ftfy regex tqdm hydra-core exrex
!pip install seaborn adjustText bunch matplotlib-label-lines
!pip install --upgrade gdown
!pip install ./pytti-core/vendor/AdaBins
!pip install ./pytti-core/vendor/CLIP
!pip install ./pytti-core/vendor/GMA
!pip install ./pytti-core/vendor/taming-transformers
!pip install ./pytti-core
!mkdir -p images_out
!mkdir -p videos
from pytti.Notebook import change_tqdm_color
change_tqdm_color()
try:
from adjustText import adjust_text
import pytti, torch
everything_installed = True
except ModuleNotFoundError:
everything_installed = False
force_install = False #@param{type:"boolean"}
if not everything_installed or force_install:
install_everything()
elif everything_installed:
from pytti.Notebook import change_tqdm_color
change_tqdm_color()
###Output
/Users/brendan/src/github.com/wizardhead/pytti-notebook
Cloning into 'pytti-core'...
fatal: remote error:
The unauthenticated git protocol on port 9418 is no longer supported.
Please see https://github.blog/2021-09-01-improving-git-protocol-security-github/ for more information.
Collecting kornia
Downloading kornia-0.6.4-py2.py3-none-any.whl (493 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m493.4/493.4 KB[0m [31m2.2 MB/s[0m eta [36m0:00:00[0ma [36m0:00:01[0m
[?25hCollecting pytorch-lightning
Downloading pytorch_lightning-1.5.10-py3-none-any.whl (527 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m527.7/527.7 KB[0m [31m3.8 MB/s[0m eta [36m0:00:00[0ma [36m0:00:01[0m
[?25hCollecting transformers
Downloading transformers-4.17.0-py3-none-any.whl (3.8 MB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m3.8/3.8 MB[0m [31m6.6 MB/s[0m eta [36m0:00:00[0m00:01[0m00:01[0m
[?25hCollecting torch>=1.8.1
Downloading torch-1.11.0-cp310-none-macosx_11_0_arm64.whl (43.1 MB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m43.1/43.1 MB[0m [31m9.7 MB/s[0m eta [36m0:00:00[0m:00:01[0m00:01[0m
[?25hRequirement already satisfied: packaging in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from kornia) (21.3)
Requirement already satisfied: tensorboard>=2.2.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from pytorch-lightning) (2.8.0)
Collecting typing-extensions
Downloading typing_extensions-4.1.1-py3-none-any.whl (26 kB)
Requirement already satisfied: numpy>=1.17.2 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from pytorch-lightning) (1.22.3)
Collecting tqdm>=4.41.0
Downloading tqdm-4.63.0-py2.py3-none-any.whl (76 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m76.6/76.6 KB[0m [31m2.6 MB/s[0m eta [36m0:00:00[0m
[?25hCollecting torchmetrics>=0.4.1
Downloading torchmetrics-0.7.2-py3-none-any.whl (397 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m397.2/397.2 KB[0m [31m6.7 MB/s[0m eta [36m0:00:00[0m00:01[0m
[?25hCollecting pyDeprecate==0.3.1
Using cached pyDeprecate-0.3.1-py3-none-any.whl (10 kB)
Collecting setuptools==59.5.0
Downloading setuptools-59.5.0-py3-none-any.whl (952 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m952.4/952.4 KB[0m [31m9.5 MB/s[0m eta [36m0:00:00[0m00:01[0m00:01[0m
[?25hCollecting PyYAML>=5.1
Downloading PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl (173 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m174.0/174.0 KB[0m [31m4.9 MB/s[0m eta [36m0:00:00[0m
[?25hCollecting future>=0.17.1
Using cached future-0.18.2.tar.gz (829 kB)
Preparing metadata (setup.py) ... [?25ldone
[?25hCollecting fsspec[http]!=2021.06.0,>=2021.05.0
Downloading fsspec-2022.2.0-py3-none-any.whl (134 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m134.9/134.9 KB[0m [31m4.0 MB/s[0m eta [36m0:00:00[0m
[?25hCollecting huggingface-hub<1.0,>=0.1.0
Downloading huggingface_hub-0.4.0-py3-none-any.whl (67 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m67.0/67.0 KB[0m [31m2.3 MB/s[0m eta [36m0:00:00[0m
[?25hRequirement already satisfied: requests in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from transformers) (2.27.1)
Collecting tokenizers!=0.11.3,>=0.11.1
Downloading tokenizers-0.11.6-cp310-cp310-macosx_11_0_arm64.whl (3.4 MB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m3.4/3.4 MB[0m [31m9.3 MB/s[0m eta [36m0:00:00[0m:00:01[0m00:01[0m
[?25hCollecting sacremoses
Downloading sacremoses-0.0.49-py3-none-any.whl (895 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m895.2/895.2 KB[0m [31m9.1 MB/s[0m eta [36m0:00:00[0ma [36m0:00:01[0m
[?25hCollecting regex!=2019.12.17
Downloading regex-2022.3.15-cp310-cp310-macosx_11_0_arm64.whl (281 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m281.8/281.8 KB[0m [31m5.8 MB/s[0m eta [36m0:00:00[0m00:01[0m
[?25hCollecting filelock
Downloading filelock-3.6.0-py3-none-any.whl (10.0 kB)
Collecting aiohttp
Downloading aiohttp-3.8.1-cp310-cp310-macosx_11_0_arm64.whl (552 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m552.5/552.5 KB[0m [31m8.3 MB/s[0m eta [36m0:00:00[0ma [36m0:00:01[0m
[?25hRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from packaging->kornia) (3.0.7)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (1.8.1)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (0.6.1)
Requirement already satisfied: absl-py>=0.4 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (1.0.0)
Requirement already satisfied: wheel>=0.26 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (0.37.1)
Requirement already satisfied: protobuf>=3.6.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (3.19.4)
Requirement already satisfied: grpcio>=1.24.3 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (1.44.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (0.4.6)
Requirement already satisfied: markdown>=2.6.8 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (3.3.6)
Requirement already satisfied: google-auth<3,>=1.6.3 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (2.6.2)
Requirement already satisfied: werkzeug>=0.11.15 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from tensorboard>=2.2.0->pytorch-lightning) (2.0.3)
Requirement already satisfied: certifi>=2017.4.17 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests->transformers) (2021.10.8)
Requirement already satisfied: charset-normalizer~=2.0.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests->transformers) (2.0.12)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests->transformers) (1.26.9)
Requirement already satisfied: idna<4,>=2.5 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests->transformers) (3.3)
Requirement already satisfied: six in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from sacremoses->transformers) (1.16.0)
Requirement already satisfied: click in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from sacremoses->transformers) (8.0.3)
Collecting joblib
Downloading joblib-1.1.0-py2.py3-none-any.whl (306 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m307.0/307.0 KB[0m [31m5.5 MB/s[0m eta [36m0:00:00[0ma [36m0:00:01[0m
[?25hRequirement already satisfied: pyasn1-modules>=0.2.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.2.0->pytorch-lightning) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.2.0->pytorch-lightning) (4.8)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.2.0->pytorch-lightning) (5.0.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->pytorch-lightning) (1.3.1)
Collecting aiosignal>=1.1.2
Using cached aiosignal-1.2.0-py3-none-any.whl (8.2 kB)
Collecting multidict<7.0,>=4.5
Downloading multidict-6.0.2-cp310-cp310-macosx_11_0_arm64.whl (29 kB)
Collecting async-timeout<5.0,>=4.0.0a3
Using cached async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting yarl<2.0,>=1.0
Downloading yarl-1.7.2-cp310-cp310-macosx_11_0_arm64.whl (118 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m118.1/118.1 KB[0m [31m2.6 MB/s[0m eta [36m0:00:00[0ma [36m0:00:01[0m
[?25hRequirement already satisfied: attrs>=17.3.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from aiohttp->fsspec[http]!=2021.06.0,>=2021.05.0->pytorch-lightning) (21.4.0)
Collecting frozenlist>=1.1.1
Downloading frozenlist-1.3.0-cp310-cp310-macosx_11_0_arm64.whl (34 kB)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard>=2.2.0->pytorch-lightning) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.0->pytorch-lightning) (3.2.0)
Building wheels for collected packages: future
Building wheel for future (setup.py) ... [?25ldone
[?25h Created wheel for future: filename=future-0.18.2-py3-none-any.whl size=491070 sha256=ac50d85f5275018bb01bbd4ab2ba10161e764c0bdec70cfceee64122ca9216fc
Stored in directory: /Users/brendan/Library/Caches/pip/wheels/22/73/06/557dc4f4ef68179b9d763930d6eec26b88ed7c389b19588a1c
Successfully built future
Installing collected packages: tokenizers, typing-extensions, tqdm, setuptools, regex, PyYAML, pyDeprecate, multidict, joblib, future, fsspec, frozenlist, filelock, async-timeout, yarl, torch, sacremoses, huggingface-hub, aiosignal, transformers, torchmetrics, kornia, aiohttp, pytorch-lightning
Attempting uninstall: setuptools
Found existing installation: setuptools 57.4.0
Uninstalling setuptools-57.4.0:
Successfully uninstalled setuptools-57.4.0
Successfully installed PyYAML-6.0 aiohttp-3.8.1 aiosignal-1.2.0 async-timeout-4.0.2 filelock-3.6.0 frozenlist-1.3.0 fsspec-2022.2.0 future-0.18.2 huggingface-hub-0.4.0 joblib-1.1.0 kornia-0.6.4 multidict-6.0.2 pyDeprecate-0.3.1 pytorch-lightning-1.5.10 regex-2022.3.15 sacremoses-0.0.49 setuptools-59.5.0 tokenizers-0.11.6 torch-1.11.0 torchmetrics-0.7.2 tqdm-4.63.0 transformers-4.17.0 typing-extensions-4.1.1 yarl-1.7.2
Requirement already satisfied: jupyter in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (1.0.0)
Collecting loguru
Downloading loguru-0.6.0-py3-none-any.whl (58 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m58.3/58.3 KB[0m [31m798.0 kB/s[0m eta [36m0:00:00[0m [36m0:00:01[0m
[?25hCollecting einops
Downloading einops-0.4.1-py3-none-any.whl (28 kB)
Collecting PyGLM
Downloading PyGLM-2.5.7-cp310-cp310-macosx_11_0_arm64.whl (1.3 MB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m1.3/1.3 MB[0m [31m6.0 MB/s[0m eta [36m0:00:00[0m00:01[0m00:01[0m
[?25hCollecting ftfy
Downloading ftfy-6.1.1-py3-none-any.whl (53 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m53.1/53.1 KB[0m [31m1.5 MB/s[0m eta [36m0:00:00[0m
[?25hRequirement already satisfied: regex in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (2022.3.15)
Requirement already satisfied: tqdm in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (4.63.0)
Collecting hydra-core
Downloading hydra_core-1.1.1-py3-none-any.whl (145 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m145.8/145.8 KB[0m [31m4.4 MB/s[0m eta [36m0:00:00[0m
[?25hCollecting exrex
Downloading exrex-0.10.5.tar.gz (4.8 kB)
Preparing metadata (setup.py) ... [?25ldone
[?25hRequirement already satisfied: nbconvert in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter) (6.4.1)
Requirement already satisfied: notebook in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter) (6.4.8)
Requirement already satisfied: ipykernel in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter) (6.9.0)
Requirement already satisfied: jupyter-console in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter) (6.4.0)
Requirement already satisfied: qtconsole in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter) (5.2.2)
Requirement already satisfied: ipywidgets in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter) (7.6.5)
Requirement already satisfied: wcwidth>=0.2.5 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ftfy) (0.2.5)
Collecting antlr4-python3-runtime==4.8
Using cached antlr4-python3-runtime-4.8.tar.gz (112 kB)
Preparing metadata (setup.py) ... [?25ldone
[?25hCollecting omegaconf==2.1.*
Using cached omegaconf-2.1.1-py3-none-any.whl (74 kB)
Requirement already satisfied: PyYAML>=5.1.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from omegaconf==2.1.*->hydra-core) (6.0)
Requirement already satisfied: matplotlib-inline<0.2.0,>=0.1.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipykernel->jupyter) (0.1.3)
Requirement already satisfied: appnope in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipykernel->jupyter) (0.1.2)
Requirement already satisfied: jupyter-client<8.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipykernel->jupyter) (7.1.2)
Requirement already satisfied: ipython>=7.23.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipykernel->jupyter) (8.0.1)
Requirement already satisfied: traitlets<6.0,>=5.1.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipykernel->jupyter) (5.1.1)
Requirement already satisfied: nest-asyncio in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipykernel->jupyter) (1.5.4)
Requirement already satisfied: debugpy<2.0,>=1.0.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipykernel->jupyter) (1.5.1)
Requirement already satisfied: tornado<7.0,>=4.2 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipykernel->jupyter) (6.1)
Requirement already satisfied: nbformat>=4.2.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipywidgets->jupyter) (5.1.3)
Requirement already satisfied: ipython-genutils~=0.2.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipywidgets->jupyter) (0.2.0)
Requirement already satisfied: jupyterlab-widgets>=1.0.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipywidgets->jupyter) (1.0.2)
Requirement already satisfied: widgetsnbextension~=3.5.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipywidgets->jupyter) (3.5.2)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter-console->jupyter) (3.0.27)
Requirement already satisfied: pygments in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter-console->jupyter) (2.11.2)
Requirement already satisfied: entrypoints>=0.2.2 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (0.4)
Requirement already satisfied: defusedxml in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (0.7.1)
Requirement already satisfied: nbclient<0.6.0,>=0.5.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (0.5.10)
Requirement already satisfied: jupyter-core in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (4.9.1)
Requirement already satisfied: mistune<2,>=0.8.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (0.8.4)
Requirement already satisfied: pandocfilters>=1.4.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (1.5.0)
Requirement already satisfied: testpath in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (0.5.0)
Requirement already satisfied: bleach in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (4.1.0)
Requirement already satisfied: jinja2>=2.4 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (3.0.3)
Requirement already satisfied: jupyterlab-pygments in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbconvert->jupyter) (0.1.2)
Requirement already satisfied: pyzmq>=17 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from notebook->jupyter) (22.3.0)
Requirement already satisfied: argon2-cffi in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from notebook->jupyter) (21.3.0)
Requirement already satisfied: Send2Trash>=1.8.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from notebook->jupyter) (1.8.0)
Requirement already satisfied: terminado>=0.8.3 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from notebook->jupyter) (0.13.1)
Requirement already satisfied: prometheus-client in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from notebook->jupyter) (0.13.1)
Requirement already satisfied: qtpy in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from qtconsole->jupyter) (2.0.1)
Requirement already satisfied: stack-data in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipython>=7.23.1->ipykernel->jupyter) (0.1.4)
Requirement already satisfied: black in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipython>=7.23.1->ipykernel->jupyter) (22.1.0)
Requirement already satisfied: pickleshare in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipython>=7.23.1->ipykernel->jupyter) (0.7.5)
Requirement already satisfied: pexpect>4.3 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipython>=7.23.1->ipykernel->jupyter) (4.8.0)
Requirement already satisfied: setuptools>=18.5 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipython>=7.23.1->ipykernel->jupyter) (59.5.0)
Requirement already satisfied: decorator in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipython>=7.23.1->ipykernel->jupyter) (5.1.1)
Requirement already satisfied: backcall in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipython>=7.23.1->ipykernel->jupyter) (0.2.0)
Requirement already satisfied: jedi>=0.16 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from ipython>=7.23.1->ipykernel->jupyter) (0.18.1)
Requirement already satisfied: MarkupSafe>=2.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jinja2>=2.4->nbconvert->jupyter) (2.0.1)
Requirement already satisfied: python-dateutil>=2.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jupyter-client<8.0->ipykernel->jupyter) (2.8.2)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from nbformat>=4.2.0->ipywidgets->jupyter) (4.4.0)
Requirement already satisfied: ptyprocess in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from terminado>=0.8.3->notebook->jupyter) (0.7.0)
Requirement already satisfied: argon2-cffi-bindings in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from argon2-cffi->notebook->jupyter) (21.2.0)
Requirement already satisfied: webencodings in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from bleach->nbconvert->jupyter) (0.5.1)
Requirement already satisfied: packaging in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from bleach->nbconvert->jupyter) (21.3)
Requirement already satisfied: six>=1.9.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from bleach->nbconvert->jupyter) (1.16.0)
Requirement already satisfied: parso<0.9.0,>=0.8.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jedi>=0.16->ipython>=7.23.1->ipykernel->jupyter) (0.8.3)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets->jupyter) (0.18.1)
Requirement already satisfied: attrs>=17.4.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets->jupyter) (21.4.0)
Requirement already satisfied: cffi>=1.0.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from argon2-cffi-bindings->argon2-cffi->notebook->jupyter) (1.15.0)
Requirement already satisfied: platformdirs>=2 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from black->ipython>=7.23.1->ipykernel->jupyter) (2.4.1)
Requirement already satisfied: click>=8.0.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from black->ipython>=7.23.1->ipykernel->jupyter) (8.0.3)
Requirement already satisfied: mypy-extensions>=0.4.3 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from black->ipython>=7.23.1->ipykernel->jupyter) (0.4.3)
Requirement already satisfied: pathspec>=0.9.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from black->ipython>=7.23.1->ipykernel->jupyter) (0.9.0)
Requirement already satisfied: tomli>=1.1.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from black->ipython>=7.23.1->ipykernel->jupyter) (2.0.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from packaging->bleach->nbconvert->jupyter) (3.0.7)
Requirement already satisfied: pure-eval in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from stack-data->ipython>=7.23.1->ipykernel->jupyter) (0.2.2)
Requirement already satisfied: executing in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from stack-data->ipython>=7.23.1->ipykernel->jupyter) (0.8.2)
Requirement already satisfied: asttokens in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from stack-data->ipython>=7.23.1->ipykernel->jupyter) (2.0.5)
Requirement already satisfied: pycparser in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from cffi>=1.0.1->argon2-cffi-bindings->argon2-cffi->notebook->jupyter) (2.21)
Building wheels for collected packages: antlr4-python3-runtime, exrex
Building wheel for antlr4-python3-runtime (setup.py) ... [?25ldone
[?25h Created wheel for antlr4-python3-runtime: filename=antlr4_python3_runtime-4.8-py3-none-any.whl size=141230 sha256=e502dcf0a886b6ee8c6cd76a452c292fd1f1e4d7790e5931636c1a3b8f78fb64
Stored in directory: /Users/brendan/Library/Caches/pip/wheels/a7/20/bd/e1477d664f22d99989fd28ee1a43d6633dddb5cb9e801350d5
Building wheel for exrex (setup.py) ... [?25ldone
[?25h Created wheel for exrex: filename=exrex-0.10.5-py3-none-any.whl size=9174 sha256=cc8a34218ce3b9938e16ce2141e6442e9e163db4a6876e78006bff01e896a2ec
Stored in directory: /Users/brendan/Library/Caches/pip/wheels/6b/a1/94/2a02414815fa3ac91db486a558c770451ff6bfc39296469599
Successfully built antlr4-python3-runtime exrex
Installing collected packages: PyGLM, exrex, einops, antlr4-python3-runtime, omegaconf, loguru, ftfy, hydra-core
Successfully installed PyGLM-2.5.7 antlr4-python3-runtime-4.8 einops-0.4.1 exrex-0.10.5 ftfy-6.1.1 hydra-core-1.1.1 loguru-0.6.0 omegaconf-2.1.1
Collecting seaborn
Downloading seaborn-0.11.2-py3-none-any.whl (292 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m292.8/292.8 KB[0m [31m1.3 MB/s[0m eta [36m0:00:00[0m00:01[0m00:01[0m
[?25hCollecting adjustText
Downloading adjustText-0.7.3.tar.gz (7.5 kB)
Preparing metadata (setup.py) ... [?25ldone
[?25hCollecting bunch
Downloading bunch-1.0.1.zip (11 kB)
Preparing metadata (setup.py) ... [?25ldone
[?25hCollecting matplotlib-label-lines
Downloading matplotlib_label_lines-0.5.1-py3-none-any.whl (12 kB)
Collecting matplotlib>=2.2
Downloading matplotlib-3.5.1-cp310-cp310-macosx_11_0_arm64.whl (7.2 MB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m7.2/7.2 MB[0m [31m7.5 MB/s[0m eta [36m0:00:00[0m00:01[0m00:01[0m
[?25hRequirement already satisfied: pandas>=0.23 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from seaborn) (1.4.1)
Collecting scipy>=1.0
Downloading scipy-1.8.0-cp310-cp310-macosx_12_0_arm64.whl (28.7 MB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m28.7/28.7 MB[0m [31m10.0 MB/s[0m eta [36m0:00:00[0m00:01[0m00:01[0m
[?25hRequirement already satisfied: numpy>=1.15 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from seaborn) (1.22.3)
Collecting more-itertools
Downloading more_itertools-8.12.0-py3-none-any.whl (54 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m54.3/54.3 KB[0m [31m1.9 MB/s[0m eta [36m0:00:00[0m
[?25hRequirement already satisfied: python-dateutil>=2.7 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from matplotlib>=2.2->seaborn) (2.8.2)
Collecting fonttools>=4.22.0
Downloading fonttools-4.31.1-py3-none-any.whl (899 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m899.4/899.4 KB[0m [31m9.3 MB/s[0m eta [36m0:00:00[0ma [36m0:00:01[0m
[?25hRequirement already satisfied: pyparsing>=2.2.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from matplotlib>=2.2->seaborn) (3.0.7)
Collecting cycler>=0.10
Downloading cycler-0.11.0-py3-none-any.whl (6.4 kB)
Requirement already satisfied: packaging>=20.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from matplotlib>=2.2->seaborn) (21.3)
Collecting kiwisolver>=1.0.1
Downloading kiwisolver-1.4.0-cp310-cp310-macosx_11_0_arm64.whl (59 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m59.3/59.3 KB[0m [31m1.8 MB/s[0m eta [36m0:00:00[0m
[?25hCollecting pillow>=6.2.0
Downloading Pillow-9.0.1-1-cp310-cp310-macosx_11_0_arm64.whl (2.7 MB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m2.7/2.7 MB[0m [31m9.7 MB/s[0m eta [36m0:00:00[0m:00:01[0m00:01[0m
[?25hRequirement already satisfied: pytz>=2020.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from pandas>=0.23->seaborn) (2022.1)
Requirement already satisfied: six>=1.5 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from python-dateutil>=2.7->matplotlib>=2.2->seaborn) (1.16.0)
Building wheels for collected packages: adjustText, bunch
Building wheel for adjustText (setup.py) ... [?25ldone
[?25h Created wheel for adjustText: filename=adjustText-0.7.3-py3-none-any.whl size=7097 sha256=33567743906d6f48408995eccdd7313750ad4c715441354dcd9d8a37f361c36b
Stored in directory: /Users/brendan/Library/Caches/pip/wheels/08/17/ca/9b56027427d0a46e5696312ad2bc4e8a47620d31a25c473d44
Building wheel for bunch (setup.py) ... [?25ldone
[?25h Created wheel for bunch: filename=bunch-1.0.1-py3-none-any.whl size=7094 sha256=c0b185fa9743047837c14d9afe7a221c42f1ea86cde1a1e1f4366adddcba02c9
Stored in directory: /Users/brendan/Library/Caches/pip/wheels/b8/a9/fe/1ab6d927c80327a67fddb03d620f77b8168c0f6caaac3a5271
Successfully built adjustText bunch
Installing collected packages: bunch, scipy, pillow, more-itertools, kiwisolver, fonttools, cycler, matplotlib, seaborn, matplotlib-label-lines, adjustText
Successfully installed adjustText-0.7.3 bunch-1.0.1 cycler-0.11.0 fonttools-4.31.1 kiwisolver-1.4.0 matplotlib-3.5.1 matplotlib-label-lines-0.5.1 more-itertools-8.12.0 pillow-9.0.1 scipy-1.8.0 seaborn-0.11.2
Collecting gdown
Downloading gdown-4.4.0.tar.gz (14 kB)
Installing build dependencies ... [?25ldone
[?25h Getting requirements to build wheel ... [?25ldone
[?25h Preparing metadata (pyproject.toml) ... [?25ldone
[?25hRequirement already satisfied: requests[socks] in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from gdown) (2.27.1)
Collecting beautifulsoup4
Downloading beautifulsoup4-4.10.0-py3-none-any.whl (97 kB)
[2K [90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━[0m [32m97.4/97.4 KB[0m [31m1.0 MB/s[0m eta [36m0:00:00[0ma [36m0:00:01[0m
[?25hRequirement already satisfied: tqdm in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from gdown) (4.63.0)
Requirement already satisfied: six in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from gdown) (1.16.0)
Requirement already satisfied: filelock in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from gdown) (3.6.0)
Collecting soupsieve>1.2
Downloading soupsieve-2.3.1-py3-none-any.whl (37 kB)
Requirement already satisfied: certifi>=2017.4.17 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests[socks]->gdown) (2021.10.8)
Requirement already satisfied: idna<4,>=2.5 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests[socks]->gdown) (3.3)
Requirement already satisfied: charset-normalizer~=2.0.0 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests[socks]->gdown) (2.0.12)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/brendan/.pyenv/versions/3.10.0/lib/python3.10/site-packages (from requests[socks]->gdown) (1.26.9)
Collecting PySocks!=1.5.7,>=1.5.6
Downloading PySocks-1.7.1-py3-none-any.whl (16 kB)
Building wheels for collected packages: gdown
Building wheel for gdown (pyproject.toml) ... [?25ldone
[?25h Created wheel for gdown: filename=gdown-4.4.0-py3-none-any.whl size=14775 sha256=43e65047990d2dc82ad653ebadd00010e21ca9a5804d653f08216fa4bd5a4a20
Stored in directory: /Users/brendan/Library/Caches/pip/wheels/03/0b/3f/6ddf67a417a5b400b213b0bb772a50276c199a386b12c06bfc
Successfully built gdown
Installing collected packages: soupsieve, PySocks, beautifulsoup4, gdown
Successfully installed PySocks-1.7.1 beautifulsoup4-4.10.0 gdown-4.4.0 soupsieve-2.3.1
[31mERROR: Invalid requirement: './pytti-core/vendor/AdaBins'
Hint: It looks like a path. File './pytti-core/vendor/AdaBins' does not exist.[0m[31m
[0m[31mERROR: Invalid requirement: './pytti-core/vendor/CLIP'
Hint: It looks like a path. File './pytti-core/vendor/CLIP' does not exist.[0m[31m
[0m[31mERROR: Invalid requirement: './pytti-core/vendor/GMA'
Hint: It looks like a path. File './pytti-core/vendor/GMA' does not exist.[0m[31m
[0m[31mERROR: Invalid requirement: './pytti-core/vendor/taming-transformers'
Hint: It looks like a path. File './pytti-core/vendor/taming-transformers' does not exist.[0m[31m
[0m[31mERROR: Invalid requirement: './pytti-core'
Hint: It looks like a path. File './pytti-core' does not exist.[0m[31m
[0m
###Markdown
Step 2: Configure ExperimentEdit the parameters, or load saved parameters, then run the model.* https://pytti-tools.github.io/pytti-book/SceneDSL.html* https://pytti-tools.github.io/pytti-book/Settings.html
###Code
#@title #2.1 Parameters:
#@markdown ---
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color, get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import glob, json, random, re, math
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
#these are used to make the defaults look pretty
model_default = None
random_seed = None
all = math.inf
derive_from_init_aspect_ratio = -1
def define_parameters():
locals_before = locals().copy()
#@markdown ###Prompts:
scenes = "deep space habitation ring made of glass | galactic nebula | wow! space is full of fractal creatures darting around everywhere like fireflies"#@param{type:"string"}
scene_prefix = "astrophotography #pixelart | image credit nasa | space full of cybernetic neon:3_galactic nebula | isometric pixelart by Sachin Teng | "#@param{type:"string"}
scene_suffix = "| satellite image:-1:-.95 | text:-1:-.95 | anime:-1:-.95 | watermark:-1:-.95 | backyard telescope:-1:-.95 | map:-1:-.95"#@param{type:"string"}
interpolation_steps = 0#@param{type:"number"}
steps_per_scene = 60100#@param{type:"raw"}
#@markdown ---
#@markdown ###Image Prompts:
direct_image_prompts = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Initial image:
init_image = ""#@param{type:"string"}
direct_init_weight = ""#@param{type:"string"}
semantic_init_weight = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Image:
#@markdown Use `image_model` to select how the model will encode the image
image_model = "Limited Palette" #@param ["VQGAN", "Limited Palette", "Unlimited Palette"]
#@markdown image_model | description | strengths | weaknesses
#@markdown --- | -- | -- | --
  #@markdown VQGAN | classic VQGAN image | smooth images | limited datasets, slow, VRAM intensive
#@markdown Limited Palette | pytti differentiable palette | fast, VRAM scales with `palettes` | pixel images
#@markdown Unlimited Palette | simple RGB optimization | fast, VRAM efficient | pixel images
#@markdown The output image resolution will be `width` $\times$ `pixel_size` by height $\times$ `pixel_size` pixels.
#@markdown The easiest way to run out of VRAM is to select `image_model` VQGAN without reducing
#@markdown `pixel_size` to $1$.
  #@markdown For `animation_mode: 3D` the minimum resolution is about 450 by 400 pixels.
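  # With the defaults below (width=180, height=112, pixel_size=4) the final render is 720 x 448 pixels.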
width = 180#@param {type:"raw"}
height = 112#@param {type:"raw"}
pixel_size = 4#@param{type:"number"}
smoothing_weight = 0.02#@param{type:"number"}
#@markdown `VQGAN` specific settings:
vqgan_model = "sflckr" #@param ["imagenet", "coco", "wikiart", "sflckr", "openimages"]
#@markdown `Limited Palette` specific settings:
random_initial_palette = False#@param{type:"boolean"}
palette_size = 6#@param{type:"number"}
palettes = 9#@param{type:"number"}
gamma = 1#@param{type:"number"}
hdr_weight = 0.01#@param{type:"number"}
palette_normalization_weight = 0.2#@param{type:"number"}
show_palette = False #@param{type:"boolean"}
target_palette = ""#@param{type:"string"}
lock_palette = False #@param{type:"boolean"}
#@markdown ---
#@markdown ###Animation:
animation_mode = "3D" #@param ["off","2D", "3D", "Video Source"]
sampling_mode = "bicubic" #@param ["bilinear","nearest","bicubic"]
infill_mode = "wrap" #@param ["mirror","wrap","black","smear"]
pre_animation_steps = 100#@param{type:"number"}
steps_per_frame = 50#@param{type:"number"}
frames_per_second = 12#@param{type:"number"}
#@markdown ---
#@markdown ###Stabilization Weights:
direct_stabilization_weight = ""#@param{type:"string"}
semantic_stabilization_weight = ""#@param{type:"string"}
depth_stabilization_weight = ""#@param{type:"string"}
edge_stabilization_weight = ""#@param{type:"string"}
#@markdown `flow_stabilization_weight` is used for `animation_mode: 3D` and `Video Source`
flow_stabilization_weight = ""#@param{type:"string"}
#@markdown ---
#@markdown ###Video Tracking:
#@markdown Only for `animation_mode: Video Source`.
video_path = ""#@param{type:"string"}
frame_stride = 1#@param{type:"number"}
reencode_each_frame = True #@param{type:"boolean"}
flow_long_term_samples = 1#@param{type:"number"}
#@markdown ---
#@markdown ###Image Motion:
translate_x = "-1700*sin(radians(1.5))" #@param{type:"string"}
translate_y = "0" #@param{type:"string"}
#@markdown `..._3d` is only used in 3D mode.
translate_z_3d = "(50+10*t)*sin(t/10*pi)**2" #@param{type:"string"}
#@markdown `rotate_3d` *must* be a `[w,x,y,z]` rotation (unit) quaternion. Use `rotate_3d: [1,0,0,0]` for no rotation.
#@markdown [Learn more about rotation quaternions here](https://eater.net/quaternions).
rotate_3d = "[cos(radians(1.5)), 0, -sin(radians(1.5))/sqrt(2), sin(radians(1.5))/sqrt(2)]"#@param{type:"string"}
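  # For reference: a unit quaternion for a rotation by angle theta about a unit axis (x, y, z)
  # is [cos(theta/2), x*sin(theta/2), y*sin(theta/2), z*sin(theta/2)]; the default above encodes
  # a 3 degree rotation about the axis (0, -1/sqrt(2), 1/sqrt(2)).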
#@markdown `..._2d` is only used in 2D mode.
rotate_2d = "5" #@param{type:"string"}
zoom_x_2d = "0" #@param{type:"string"}
zoom_y_2d = "0" #@param{type:"string"}
#@markdown 3D camera (only used in 3D mode):
lock_camera = True#@param{type:"boolean"}
field_of_view = 60#@param{type:"number"}
near_plane = 1#@param{type:"number"}
far_plane = 10000#@param{type:"number"}
#@markdown ---
#@markdown ###Output:
file_namespace = "default"#@param{type:"string"}
if file_namespace == '':
file_namespace = 'out'
allow_overwrite = False#@param{type:"boolean"}
base_name = file_namespace
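  # The first frame of run 0 is saved as "<namespace>_1.png" and of run 2 as "<namespace>(2)_1.png";
  # the regex below recovers the highest existing run index so previous runs are detected.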
if not allow_overwrite and path_exists(f'images_out/{file_namespace}'):
_, i = get_last_file(f'images_out/{file_namespace}',
f'^(?P<pre>{re.escape(file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
if i == 0:
print(f"WARNING: file_namespace {file_namespace} already has images from run 0")
elif i is not None:
print(f"WARNING: file_namespace {file_namespace} already has images from runs 0 through {i}")
elif glob.glob(f'images_out/{file_namespace}/{base_name}_*.png'):
print(f"WARNING: file_namespace {file_namespace} has images which will be overwritten")
try:
del i
del _
except NameError:
pass
del base_name
display_every = steps_per_frame #@param{type:"raw"}
clear_every = 0 #@param{type:"raw"}
display_scale = 1#@param{type:"number"}
save_every = steps_per_frame #@param{type:"raw"}
backups = 2**(flow_long_term_samples+1)+1#this is used for video transfer, so don't lower it if that's what you're doing#@param {type:"raw"}
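  # With the default flow_long_term_samples = 1 this works out to 2**(1+1) + 1 = 5 backups.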
show_graphs = False #@param{type:"boolean"}
approximate_vram_usage = False#@param{type:"boolean"}
#@markdown ---
#@markdown ###Model:
#@markdown Quality settings from Dribnet's CLIPIT (https://github.com/dribnet/clipit).
#@markdown Selecting too many will use up all your VRAM and slow down the model.
  #@markdown I usually use ViTB32, ViTB16, and RN50 if I get an A100, otherwise I just use ViTB32.
#@markdown quality | CLIP models
#@markdown --- | --
#@markdown draft | ViTB32
#@markdown normal | ViTB32, ViTB16
#@markdown high | ViTB32, ViTB16, RN50
#@markdown best | ViTB32, ViTB16, RN50x4
ViTB32 = True #@param{type:"boolean"}
ViTB16 = False #@param{type:"boolean"}
RN50 = False #@param{type:"boolean"}
RN50x4 = False #@param{type:"boolean"}
ViTL14 = False #@param{type:"boolean"}
RN101 = False #@param{type:"boolean"}
RN50x16 = False #@param{type:"boolean"}
RN50x64 = False #@param{type:"boolean"}
#@markdown the default learning rate is `0.1` for all the VQGAN models
#@markdown except openimages, which is `0.15`. For the palette modes the
#@markdown default is `0.02`.
learning_rate = model_default#@param{type:"raw"}
reset_lr_each_frame = True#@param{type:"boolean"}
seed = random_seed #@param{type:"raw"}
#@markdown **Cutouts**:
#@markdown [Cutouts are how CLIP sees the image.](https://twitter.com/remi_durant/status/1460607677801897990)
cutouts = 40#@param{type:"number"}
cut_pow = 2#@param {type:"number"}
cutout_border = .25#@param {type:"number"}
gradient_accumulation_steps = 1 #@param {type:"number"}
  #@markdown NOTE: prompt masks (`prompt:weight_[mask.png]`) will not work right on '`wrap`' or '`mirror`' mode.
border_mode = "clamp" #@param ["clamp","mirror","wrap","black","smear"]
models_parent_dir = '.'
if seed is None:
seed = random.randint(-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff)
locals_after = locals().copy()
for k in locals_before.keys():
del locals_after[k]
del locals_after['locals_before']
return locals_after
params = Bunch(define_parameters())
print("SETTINGS:")
print(json.dumps(params))
#@title 2.2 Load settings (optional)
#@markdown copy the `SETTINGS:` output from the **Parameters** cell (triple click to select the whole
#@markdown line from `{'scenes'...` to `}`) and paste them in a note to save them for later.
#@markdown Paste them here in the future to load those settings again. Running this cell with blank settings won't do anything.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import *
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
import json, random
try:
from bunch import Bunch
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
settings = ""#@param{type:"string"}
#@markdown Check `random_seed` to overwrite the seed from the settings with a random one for some variation.
random_seed = False #@param{type:"boolean"}
if settings != '':
params = load_settings(settings, random_seed)
from pytti.workhorse import TB_LOGDIR
%load_ext tensorboard
%tensorboard --logdir $TB_LOGDIR
###Output
_____no_output_____
###Markdown
It is common for users to experience issues starting their first run. In particular, you may see an error saying something like "Access Denied" and showing you some URL links. This is caused by the google drive link for one of the models getting "hugged to death". You can still access the model, but google won't let you do it programmatically. Please follow these steps to get around the issue:1. Visit either of the two URLs you see in your browser to download the file `AdaBins_nyu.pt` locally2. Create a new folder in colab named `pretrained` (check the left sidebar for a file browser)3. Upload `AdaBins_nyu.pt` to the `pretrained` folder. You should be able to just drag-and-drop the file onto the folder.4. Run the following code cell after the upload has completed to tell PyTTI where to find AdaBinsYou should now be able to run image generation without issues.
###Code
%%sh
ADABINS_SRC=./pretrained/AdaBins_nyu.pt
ADABINS_DIR=~/.cache/adabins
ADABINS_TGT=$ADABINS_DIR/AdaBins_nyu.pt
if [ -f "$ADABINS_SRC" ]; then
mkdir -p $ADABINS_DIR/
ln $ADABINS_SRC $ADABINS_TGT
fi
#@title 2.3 Run it!
from pytti.workhorse import _main as render_frames
from omegaconf import OmegaConf
cfg = OmegaConf.create(dict(params))
# function wraps step 2.3 of the original p5 notebook
render_frames(cfg)
###Output
_____no_output_____
###Markdown
Step 3: Render videoYou can download from the notebook, but it's faster to download from your drive.
###Code
#@title 3.1 Render video
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
import re
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest#@param{type:"raw"}
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
tqdm.write(f'Generating video from {params.file_namespace}/{base_name}_*.png')
all_frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
all_frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
print(f'found {len(all_frames)} frames matching images_out/{params.file_namespace}/{base_name}_*.png')
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
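# Stream the PNG frames to ffmpeg over stdin: '-f image2pipe' and '-vcodec png' describe the
# piped input, '-r' sets the frame rate, and the output is encoded with libx264 as yuv420p
# at '-crf 1' (near lossless) using the 'veryslow' preset for better compression.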
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f"videos/{base_name}.mp4"], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.1 Render video (concatenate all runs)
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
change_tqdm_color()
from tqdm.notebook import tqdm
import numpy as np
from os.path import exists as path_exists
from subprocess import Popen, PIPE
from PIL import Image, ImageFile
from os.path import splitext as split_file
import glob
import re
from pytti.Notebook import get_last_file
ImageFile.LOAD_TRUNCATED_IMAGES = True
try:
params
except NameError:
raise RuntimeError("ERROR: no parameters. Please run parameters (step 2.1).")
if not path_exists(f"images_out/{params.file_namespace}"):
if path_exists(f"/content/drive/MyDrive"):
raise RuntimeError(f"ERROR: file_namespace: {params.file_namespace} does not exist.")
else:
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: file_namespace: {params.file_namespace} does not exist.")
#@markdown The first run executed in `file_namespace` is number $0$, the second is number $1$, etc.
latest = -1
run_number = latest
if run_number == -1:
_, i = get_last_file(f'images_out/{params.file_namespace}',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?_1\\.png)$')
run_number = i
all_frames = []
for i in range(run_number+1):
base_name = params.file_namespace if i == 0 else (params.file_namespace+f"({i})")
frames = glob.glob(f'images_out/{params.file_namespace}/{base_name}_*.png')
frames.sort(key = lambda s: int(split_file(s)[0].split('_')[-1]))
all_frames.extend(frames)
start_frame = 0#@param{type:"number"}
all_frames = all_frames[start_frame:]
fps = params.frames_per_second#@param{type:"raw"}
total_frames = len(all_frames)
if total_frames == 0:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: no frames to render in images_out/{params.file_namespace}")
frames = []
for filename in tqdm(all_frames):
frames.append(Image.open(filename))
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f"videos/{base_name}.mp4"], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video complete.")
#@title 3.2 Download the last exported video
from os.path import exists as path_exists
import re
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
try:
from pytti.Notebook import get_last_file
except ModuleNotFoundError:
if drive_mounted:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('ERROR: please run setup (step 1.3).')
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1.3).')
try:
params
except NameError:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError("ERROR: please run parameters (step 2.1).")
from google.colab import files
try:
base_name = params.file_namespace if run_number == 0 else (params.file_namespace+f"({run_number})")
filename = f'{base_name}.mp4'
except NameError:
filename, i = get_last_file(f'videos',
f'^(?P<pre>{re.escape(params.file_namespace)}\\(?)(?P<index>\\d*)(?P<post>\\)?\\.mp4)$')
if path_exists(f'videos/{filename}'):
files.download(f"videos/{filename}")
else:
if path_exists(f"/content/drive/MyDrive"):
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"ERROR: video videos/{filename} does not exist.")
else:
#THIS IS NOT AN ERROR. This is the code that would
#make an error if something were wrong.
raise RuntimeError(f"WARNING: Drive is not mounted.\nERROR: video videos/{filename} does not exist.")
###Output
_____no_output_____
###Markdown
Batch SettingsBe Advised: google may penalize you for sustained colab GPU utilization, even if you are a PRO+ subscriber. Tread lightly with batch runs, you don't wanna end up in GPU jail. FYI: the batch setting feature below may not work at present. We recommend using the CLI for batch jobs, see usage instructions at https://github.com/pytti-tools/pytti-core . The code below will probably be removed in the near future. Batch SettingsWARNING: If you use google colab (even with pro and pro+) GPUs for long enough google will throttle your account. Be careful with batch runs if you don't want to get kicked.
###Code
#@title batch settings
# ngl... this probably doesn't work right now.
from os.path import exists as path_exists
if path_exists(gdrive_fpath):
%cd {gdrive_fpath}
drive_mounted = True
else:
drive_mounted = False
try:
from pytti.Notebook import change_tqdm_color, save_batch
except ModuleNotFoundError:
if drive_mounted:
raise RuntimeError('ERROR: please run setup (step 1).')
else:
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1).')
change_tqdm_color()
try:
import exrex, random, glob
except ModuleNotFoundError:
if drive_mounted:
raise RuntimeError('ERROR: please run setup (step 1).')
else:
raise RuntimeError('WARNING: drive is not mounted.\nERROR: please run setup (step 1).')
from numpy import arange
import itertools
def all_matches(s):
return list(exrex.generate(s))
def dict_product(dictionary):
return [dict(zip(dictionary, x)) for x in itertools.product(*dictionary.values())]
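# e.g. dict_product({'a': [1, 2], 'b': [3]}) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 3}]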
#these are used to make the defaults look pretty
model_default = None
random_seed = None
def define_parameters():
locals_before = locals().copy()
scenes = ["list","your","runs"] #@param{type:"raw"}
scene_prefix = ["all "," permutations "," are run "] #@param{type:"raw"}
scene_suffix = [" that", " makes", " 27" ] #@param{type:"raw"}
interpolation_steps = [0] #@param{type:"raw"}
steps_per_scene = [300] #@param{type:"raw"}
direct_image_prompts = [""] #@param{type:"raw"}
init_image = [""] #@param{type:"raw"}
direct_init_weight = [""] #@param{type:"raw"}
semantic_init_weight = [""] #@param{type:"raw"}
image_model = ["Limited Palette"] #@param{type:"raw"}
width = [180] #@param{type:"raw"}
height = [112] #@param{type:"raw"}
pixel_size = [4] #@param{type:"raw"}
smoothing_weight = [0.05] #@param{type:"raw"}
vqgan_model = ["sflckr"] #@param{type:"raw"}
random_initial_palette = [False] #@param{type:"raw"}
palette_size = [9] #@param{type:"raw"}
palettes = [8] #@param{type:"raw"}
gamma = [1] #@param{type:"raw"}
hdr_weight = [1.0] #@param{type:"raw"}
palette_normalization_weight = [1.0] #@param{type:"raw"}
show_palette = [False] #@param{type:"raw"}
target_palette = [""] #@param{type:"raw"}
lock_palette = [False] #@param{type:"raw"}
animation_mode = ["off"] #@param{type:"raw"}
sampling_mode = ["bicubic"] #@param{type:"raw"}
infill_mode = ["wrap"] #@param{type:"raw"}
pre_animation_steps = [100] #@param{type:"raw"}
steps_per_frame = [50] #@param{type:"raw"}
frames_per_second = [12] #@param{type:"raw"}
direct_stabilization_weight = [""] #@param{type:"raw"}
semantic_stabilization_weight = [""] #@param{type:"raw"}
depth_stabilization_weight = [""] #@param{type:"raw"}
edge_stabilization_weight = [""] #@param{type:"raw"}
flow_stabilization_weight = [""] #@param{type:"raw"}
video_path = [""] #@param{type:"raw"}
frame_stride = [1] #@param{type:"raw"}
reencode_each_frame = [True] #@param{type:"raw"}
flow_long_term_samples = [0] #@param{type:"raw"}
translate_x = ["0"] #@param{type:"raw"}
translate_y = ["0"] #@param{type:"raw"}
translate_z_3d = ["0"] #@param{type:"raw"}
rotate_3d = ["[1,0,0,0]"] #@param{type:"raw"}
rotate_2d = ["0"] #@param{type:"raw"}
zoom_x_2d = ["0"] #@param{type:"raw"}
zoom_y_2d = ["0"] #@param{type:"raw"}
lock_camera = [True] #@param{type:"raw"}
field_of_view = [60] #@param{type:"raw"}
near_plane = [1] #@param{type:"raw"}
far_plane = [10000] #@param{type:"raw"}
file_namespace = ["Basic Batch"] #@param{type:"raw"}
allow_overwrite = [False]
display_every = [50] #@param{type:"raw"}
clear_every = [0] #@param{type:"raw"}
display_scale = [1] #@param{type:"raw"}
save_every = [50] #@param{type:"raw"}
backups = [2] #@param{type:"raw"}
show_graphs = [False] #@param{type:"raw"}
approximate_vram_usage = [False] #@param{type:"raw"}
ViTB32 = [True] #@param{type:"raw"}
ViTB16 = [False] #@param{type:"raw"}
RN50 = [False] #@param{type:"raw"}
RN50x4 = [False] #@param{type:"raw"}
learning_rate = [None] #@param{type:"raw"}
reset_lr_each_frame = [True] #@param{type:"raw"}
seed = [None] #@param{type:"raw"}
cutouts = [40] #@param{type:"raw"}
cut_pow = [2] #@param{type:"raw"}
cutout_border = [0.25] #@param{type:"raw"}
border_mode = ["clamp"] #@param{type:"raw"}
locals_after = locals().copy()
for k in locals_before.keys():
del locals_after[k]
del locals_after['locals_before']
return locals_after
param_dict = define_parameters()
batch_list = dict_product(param_dict)
namespace = batch_list[0]['file_namespace']
if glob.glob(f'images_out/{namespace}/*.png'):
  print(f"WARNING: images_out/{namespace} contains images. Batch indices may not match filenames unless restoring.")
# @title Licensed under the MIT License
# Copyleft (c) 2021 Henry Rachootin
# Copyright (c) 2022 David Marx
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
###Output
_____no_output_____ |
Advanced_Lane_Detection.ipynb | ###Markdown
The full pipeline for advanced lane recognition by Alok Rao
###Code
import numpy as np
import os
import cv2
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import glob as glob
from moviepy.editor import VideoFileClip
from IPython.display import HTML
###Output
_____no_output_____
###Markdown
Parameter Class to load all parameters
###Code
class Parameters():
def BinaryImageThreshold(self, sobelThreshold, sChannelThreshold, redChannelThreshold, sobelKernelSize):
self.sobelThreshold = sobelThreshold
self.sChannelThreshold = sChannelThreshold
self.redChannelThreshold = redChannelThreshold
self.sobelKernelSize = sobelKernelSize
def FindWindowLanesParameters(self, nWindows, minPix):
self.nWindows = nWindows
self.minPix = minPix
###Output
_____no_output_____
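###Markdown
A minimal usage sketch of the parameter container above. The values mirror the ones used later in the video pipeline; note that `GetBinaryImage` below looks this object up as a module-level `parameters` variable.
###Code
# Instantiate and configure the shared parameter object
parameters = Parameters()
parameters.BinaryImageThreshold(sobelThreshold=(70, 160), sChannelThreshold=(110, 140), redChannelThreshold=(215, 255), sobelKernelSize=3)
parameters.FindWindowLanesParameters(nWindows=20, minPix=50)
print(parameters.sobelThreshold, parameters.nWindows)
###Output
_____no_output_____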
###Markdown
Camera calibration
###Code
def GetCameraCorrection(imagePath):
objPoints = []
imgPoints = []
objP = np.zeros((9*6,3),np.float32)
objP[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
#first find the corners
for filename in imagePath:
img=mpimg.imread(filename)
gray = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
if ret == True:
objPoints.append(objP)
imgPoints.append(corners)
# Returns camera calibration
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objPoints, imgPoints,
gray.shape[::-1], None,
None)
return mtx, dist
imageLocation = glob.glob('camera_cal/*.jpg')
# Calibrate camera and return calibration data
mtx, dist = GetCameraCorrection(imageLocation)
###Output
_____no_output_____
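###Markdown
A quick visual check of the calibration; the chessboard filename below is only an example of the files matched by the glob above.
###Code
# Undistort one of the calibration images (hypothetical filename inside camera_cal/)
test_img = mpimg.imread('camera_cal/calibration1.jpg')
undistorted = cv2.undistort(test_img, mtx, dist, None, mtx)
plt.imshow(undistorted)
plt.show()
###Output
_____no_output_____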
###Markdown
Filter image to get rough lanes
###Code
def GetBinaryImage(image, mtx, dst):
undistortedImage = cv2.undistort(image, mtx, dst, None, mtx)
sobelThreshold = parameters.sobelThreshold
sChannelThreshold = parameters.sChannelThreshold
redChannelThreshold = parameters.redChannelThreshold
sobelKernelSize = parameters.sobelKernelSize
redImage = undistortedImage[:,:,0]
hsvImage = cv2.cvtColor(undistortedImage, cv2.COLOR_RGB2HSV).astype(np.float)
# Convert to HLS colorspace
hlsImage = cv2.cvtColor(undistortedImage, cv2.COLOR_RGB2HLS).astype(np.float)
sImage = hlsImage[:,:,2]
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
vImage = hsvImage[:,:,2]
#Applying Sobel Filter, creating Binary image
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize = sobelKernelSize) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobelx = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
sxbinary = np.zeros_like(scaled_sobelx)
sxbinary[(scaled_sobelx >= sobelThreshold[0]) & (scaled_sobelx <= sobelThreshold[1])] = 1
sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize = sobelKernelSize) # Take the derivative in y
abs_sobely = np.absolute(sobely) # Absolute y derivative to accentuate horizontal lines
scaled_sobely = np.uint8(255*abs_sobely/np.max(abs_sobely))
sybinary = np.zeros_like(scaled_sobely)
sybinary[(scaled_sobely >= sobelThreshold[0]) & (scaled_sobely <= sobelThreshold[1])] = 1
#Applying thresholding to S channel
sBinary = np.zeros_like(sImage)
sBinary[(sImage >= sChannelThreshold[0]) & (sImage <= sChannelThreshold[1])] = 1
#Applying threshold to Red channel
rBinary = np.zeros_like(redImage)
rBinary[(redImage>=redChannelThreshold[0]) & (redImage<=redChannelThreshold[1])] = 1
#Applying threshold to V channel
vBinary = np.zeros_like(vImage)
vBinary[(vImage>=230) & (vImage<=255)] = 1
#Stacking for debugging
#combinedBinary = np.dstack(( np.zeros_like(sxbinary), sxbinary, sBinary)) * 255
# Combine the binary thresholds
combinedBinary = np.zeros_like(sxbinary)
combinedBinary[(vBinary == 1) | (sxbinary == 1) | (rBinary == 1)] = 1
#combinedBinary[(vBinary == 1)] = 1
return combinedBinary
###Output
_____no_output_____
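###Markdown
A sketch of how the filter can be previewed on a single frame. It assumes a frame exists at the (hypothetical) path below and that a global `parameters` object has been configured as in the earlier example.
###Code
# Preview the combined binary mask for one frame
test_frame = mpimg.imread('test_images/test5.jpg')
binary_preview = GetBinaryImage(test_frame, mtx, dist)
plt.imshow(binary_preview, cmap='gray')
plt.show()
###Output
_____no_output_____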
###Markdown
Convert image to a top-down perspective
###Code
def PerspectiveTransform(image, mtx, dst):
#Perspective transform to obtain a top down image
#First obtain binary image
binaryImage = GetBinaryImage(image, mtx, dst)
#get image size
imageSize = (binaryImage.shape[1], binaryImage.shape[0])
#get 4 reference points
source = np.float32([[585,455],[705,455],[1130,720],[190,720]])
#4 destination points for the top-down image
offset = 200 # offset for dst points
dst = np.float32([
[offset, 0],
[imageSize[0]-offset, 0],
[imageSize[0]-offset, imageSize[1]],
[offset, imageSize[1]]
])
# Use cv2.getPerspectiveTransform() to get M, the transform matrix
M = cv2.getPerspectiveTransform(source, dst)
# Use cv2.warpPerspective() to warp the image to a top-down view
topDown = cv2.warpPerspective(binaryImage, M, imageSize)
return topDown, M
###Output
_____no_output_____
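###Markdown
Previewing the top-down view on the same (assumed) test frame.
###Code
# Warp the binary mask of the test frame to a bird's-eye view
top_down, M_preview = PerspectiveTransform(test_frame, mtx, dist)
plt.imshow(top_down, cmap='gray')
plt.show()
###Output
_____no_output_____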
###Markdown
Find the lane lines on the perspective image
###Code
# Define conversions in x and y from pixel space to meters
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/700 # meters per pixel in x dimension
def findLines(image, nwindows=9, margin=110, minpix=50):
"""
Find the polynomial representation of the lines in the `image` using:
- `nwindows` as the number of windows.
- `margin` as the windows margin.
- `minpix` as minimum number of pixes found to recenter the window.
- `ym_per_pix` meters per pixel on Y.
- `xm_per_pix` meters per pixels on X.
Returns (left_fit, right_fit, left_lane_inds, right_lane_inds, out_img, nonzerox, nonzeroy)
"""
# Make a binary and transform image
binary_warped, M = PerspectiveTransform(image, mtx, dist)
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]/2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# Set height of windows
window_height = np.int(binary_warped.shape[0]/nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated for each window
leftx_current = leftx_base
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),(win_xright_high,win_y_high),(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit a second order polynomial to each
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Fit a second order polynomial to each
left_fit_m = np.polyfit(lefty*ym_per_pix, leftx*xm_per_pix, 2)
right_fit_m = np.polyfit(righty*ym_per_pix, rightx*xm_per_pix, 2)
return (left_fit, right_fit, left_fit_m, right_fit_m, left_lane_inds, right_lane_inds, out_img, nonzerox, nonzeroy, M)
def calculateCurvature(yRange, left_fit_cr):
"""
Returns the radius of curvature of the polynomial `left_fit_cr` evaluated at `yRange`.
"""
return ((1 + (2*left_fit_cr[0]*yRange*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
def drawLine(img, left_fit, right_fit, M):
"""
Draw the lane lines on the image `img` using the poly `left_fit` and `right_fit`.
"""
yMax = img.shape[0]
ploty = np.linspace(0, yMax - 1, yMax)
color_warp = np.zeros_like(img).astype(np.uint8)
# Calculate points.
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, np.linalg.inv(M), (img.shape[1], img.shape[0]))
return cv2.addWeighted(img, 1, newwarp, 0.3, 0)
from moviepy.editor import VideoFileClip
class Lane():
def __init__(self):
self.left_fit = None
self.right_fit = None
self.left_fit_m = None
self.right_fit_m = None
self.leftCurvature = None
self.rightCurvature = None
def calculateLanes(img):
"""
Calculates the lane on image `img`.
"""
left_fit, right_fit, left_fit_m, right_fit_m, _, _, _, _, _,M = findLines(img)
# Calculate curvature
yRange = 719
leftCurvature = calculateCurvature(yRange, left_fit_m)
rightCurvature = calculateCurvature(yRange, right_fit_m)
# Calculate vehicle center
xMax = img.shape[1]*xm_per_pix
yMax = img.shape[0]*ym_per_pix
vehicleCenter = xMax / 2
lineLeft = left_fit_m[0]*yMax**2 + left_fit_m[1]*yMax + left_fit_m[2]
lineRight = right_fit_m[0]*yMax**2 + right_fit_m[1]*yMax + right_fit_m[2]
lineMiddle = lineLeft + (lineRight - lineLeft)/2
diffFromVehicle = lineMiddle - vehicleCenter
return (left_fit, right_fit, left_fit_m, right_fit_m, leftCurvature, rightCurvature, diffFromVehicle, M)
def displayLanes(img, left_fit, right_fit, left_fit_m, right_fit_m, leftCurvature, rightCurvature, diffFromVehicle,M):
"""
Display the lanes information on the image.
"""
output = drawLine(img, left_fit, right_fit,M)
if diffFromVehicle > 0:
message = '{:.2f} m right'.format(diffFromVehicle)
else:
message = '{:.2f} m left'.format(-diffFromVehicle)
# Draw info
font = cv2.FONT_HERSHEY_SIMPLEX
fontColor = (255, 255, 255)
cv2.putText(output, 'Left curvature: {:.0f} m'.format(leftCurvature), (50, 50), font, 1, fontColor, 2)
cv2.putText(output, 'Right curvature: {:.0f} m'.format(rightCurvature), (50, 120), font, 1, fontColor, 2)
cv2.putText(output, 'Vehicle is {} of center'.format(message), (50, 190), font, 1, fontColor, 2)
return output
def videoPipeline(inputVideo, outputVideo):
"""
Process the `inputVideo` frame by frame to find the lane lines, draw curvature and vehicle position information and
generate `outputVideo`
"""
myclip = VideoFileClip(inputVideo)
parameters = Parameters()
parameters.BinaryImageThreshold((70,160), (110,140), (215,255), 3)
parameters.FindWindowLanesParameters(20,50)
leftLane = Lane()
rightLane = Lane()
def processImage(img):
left_fit, right_fit, left_fit_m, right_fit_m, leftCurvature, rightCurvature, diffFromVehicle,M = calculateLanes(img)
if leftCurvature > 10000:
left_fit = leftLane.left_fit
left_fit_m = leftLane.left_fit_m
leftCurvature = leftLane.leftCurvature
else:
leftLane.left_fit = left_fit
leftLane.left_fit_m = left_fit_m
leftLane.leftCurvature = leftCurvature
if rightCurvature > 10000:
right_fit = rightLane.right_fit
right_fit_m = rightLane.right_fit_m
rightCurvature = rightLane.rightCurvature
else:
rightLane.right_fit = right_fit
rightLane.right_fit_m = right_fit_m
rightLane.rightCurvature = rightCurvature
return displayLanes(img, left_fit, right_fit, left_fit_m, right_fit_m, leftCurvature, rightCurvature, diffFromVehicle,M)
clip = myclip.fl_image(processImage)
clip.write_videofile(outputVideo, audio=False)
# Project video
videoPipeline('project_video.mp4', 'Advanced_Lane_Detection.mp4')
###Output
[MoviePy] >>>> Building video Advanced_Lane_Detection.mp4
[MoviePy] Writing video Advanced_Lane_Detection.mp4
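###Markdown
The same per-frame path can be exercised on a single image outside of MoviePy. This assumes a frame exists at the (hypothetical) path below and that a global `parameters` object has been configured as in the earlier example.
###Code
# Run the lane calculation and overlay the result on one frame
frame = mpimg.imread('test_images/test5.jpg')
results = calculateLanes(frame)
annotated = displayLanes(frame, *results)
plt.imshow(annotated)
plt.show()
###Output
_____no_output_____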
###Markdown
Advanced Lane Finding ProjectThe goals / steps of this project are the following:* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.* Apply a distortion correction to raw images.* Use color transforms, gradients, etc., to create a thresholded binary image.* Apply a perspective transform to rectify binary image ("birds-eye view").* Detect lane pixels and fit to find the lane boundary.* Determine the curvature of the lane and vehicle position with respect to center.* Warp the detected lane boundaries back onto the original image.* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.--- Camera Calibration
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('./camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
plt.imshow(img)
plt.show
###Output
_____no_output_____
###Markdown
Distortion Restoration
###Code
import pickle
# Test undistortion on an image
img = cv2.imread('camera_cal/calibration2.jpg')
img_size = (img.shape[1], img.shape[0])
# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints,
imgpoints,
img_size,
None, None)
dst = cv2.undistort(img, mtx, dist, None, mtx)
cv2.imwrite('camera_cal/test_undist.jpg', dst)
# Save the camera calibration result for later use
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "camera_cal/wide_dist_pickle.p", "wb" ) )
dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
# Visualize undistortion
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Undistorted Image', fontsize=30)
def undistort_image(image):
return cv2.undistort(image, mtx, dist, None, mtx)
###Output
_____no_output_____
###Markdown
Colour and Gradient Threshold
###Code
def abs_thresh(img, thresh=(0, 255)):
binary = np.zeros_like(img)
binary[(img >= thresh[0]) & (img <= thresh[1])] = 1
return binary
def sobel_thresh(img,
orient='x',
sobel_kernel=3,
thresh=(0, 255)):
# Calculate derivatives of given orientation
if orient == 'x':
sobel = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
elif orient == 'y':
sobel = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
else:
raise ValueError('orientation can only be x or y')
# Take the absolute values of derivatives
abs_sobel = np.absolute(sobel)
# Normalization and convert to np.uint8
scaled_sobel = np.uint8(255 * abs_sobel/np.max(abs_sobel))
# Thresholding
grad_binary = np.zeros_like(scaled_sobel)
grad_binary[(scaled_sobel >= thresh[0]) &
(scaled_sobel <= thresh[1])] = 1
return grad_binary
def mag_thresh(img, sobel_kernel=3, mag_thresh=(0, 255)):
# Calculate gradients in x and y
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# Calculate the magnitude
mag_sobelxy = np.sqrt(np.square(sobelx) + np.square(sobely))
# Normalization and convert to type = np.uint8
scaled_sobel = np.uint8(255 * mag_sobelxy/np.max(mag_sobelxy))
# Thresholding
mag_binary = np.zeros_like(scaled_sobel)
mag_binary[(scaled_sobel >= mag_thresh[0]) &
(scaled_sobel <= mag_thresh[1])] = 1
return mag_binary
def dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)):
# Calculate gradients in x and y
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# Calculate direction
abs_sobelx = np.absolute(sobelx)
abs_sobely = np.absolute(sobely)
gradient_dir = np.arctan2(abs_sobely, abs_sobelx)
# Thresholding
dir_binary = np.zeros_like(gradient_dir)
dir_binary[(gradient_dir >= thresh[0]) &
(gradient_dir <= thresh[1])] = 1
return dir_binary
###Output
_____no_output_____
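###Markdown
A minimal sketch combining the helpers above on a single test frame (hypothetical filename); the kernel size and thresholds are illustrative, not tuned.
###Code
# Build gradient, magnitude and direction masks and combine them
gray = cv2.cvtColor(mpimg.imread('test_images/test5.jpg'), cv2.COLOR_RGB2GRAY)
gradx = sobel_thresh(gray, orient='x', sobel_kernel=5, thresh=(30, 255))
mag_binary = mag_thresh(gray, sobel_kernel=5, mag_thresh=(30, 255))
dir_binary = dir_threshold(gray, sobel_kernel=5, thresh=(0.7, 1.3))
combined = np.zeros_like(gray)
combined[(gradx == 1) | ((mag_binary == 1) & (dir_binary == 1))] = 1
plt.imshow(combined, cmap='gray')
###Output
_____no_output_____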
###Markdown
Perspective Transform
###Code
def drawQuad(image, points, color=[255, 0, 0], thickness=4):
p1, p2, p3, p4 = points
cv2.line(image, tuple(p1), tuple(p2), color, thickness)
cv2.line(image, tuple(p2), tuple(p3), color, thickness)
cv2.line(image, tuple(p3), tuple(p4), color, thickness)
cv2.line(image, tuple(p4), tuple(p1), color, thickness)
def perspective_transform(image, debug=False, size_top=70, size_bottom=370):
height, width = image.shape[0:2]
output_size = height/2
src = np.float32([[(width/2) - size_top, height*0.65],
[(width/2) + size_top, height*0.65],
[(width/2) + size_bottom, height-50],
[(width/2) - size_bottom, height-50]])
dst = np.float32([[(width/2) - output_size, (height/2) - output_size],
[(width/2) + output_size, (height/2) - output_size],
[(width/2) + output_size, (height/2) + output_size],
[(width/2) - output_size, (height/2) + output_size]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(image, M, (width, height), flags=cv2.INTER_LINEAR)
if debug:
drawQuad(image, src, [255, 0, 0])
drawQuad(image, dst, [255, 255, 0])
plt.imshow(image)
plt.show()
return warped
###Output
_____no_output_____
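###Markdown
A quick sanity check of the warp on an undistorted test image; the filename below is an assumption about the repository layout.
###Code
# Undistort then warp a sample frame to the top-down view
warped = perspective_transform(undistort_image(mpimg.imread('test_images/straight_lines1.jpg')))
plt.imshow(warped)
###Output
_____no_output_____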
###Markdown
Lane-line Points Masking
###Code
def detect_edges(image, debug=False):
# Choose a Sobel kernel size
ksize = 5 # Choose a larger odd number to smooth gradient measurements
# Apply each of the thresholding functions
red_channel = image[:, :, 0]
equ = cv2.equalizeHist(red_channel)
red_binary = abs_thresh(equ, thresh=(250, 255))
gradx = sobel_thresh(red_channel, orient='x', sobel_kernel=ksize, thresh=(30, 255))
# Computed so the debug visualization below has all four maps available
grady = sobel_thresh(red_channel, orient='y', sobel_kernel=ksize, thresh=(30, 255))
mag_binary = mag_thresh(red_channel, sobel_kernel=ksize, mag_thresh=(30, 255))
dir_binary = dir_threshold(red_channel, sobel_kernel=ksize, thresh=(0.7, 1.3))
combined = np.zeros_like(red_channel)
combined[(red_binary == 1) | (gradx == 1)] = 1
if debug:
# Plot the result
f, ((a1, a2), (b1, b2), (c1, c2), (d1, d2)) = plt.subplots(4, 2, figsize=(24, 32))
f.tight_layout()
a1.imshow(red_channel, cmap='gray')
a1.set_title('Red Channel', fontsize=50)
a2.imshow(combined, cmap='gray')
a2.set_title('Output', fontsize=50)
b1.imshow(equ, cmap='gray')
b1.set_title('Equalized', fontsize=50)
b2.imshow(red_binary, cmap='gray')
b2.set_title('Red Binary', fontsize=50)
c1.imshow(gradx, cmap='gray')
c1.set_title('Gradient X', fontsize=50)
c2.imshow(grady, cmap='gray')
c2.set_title('Gradient Y', fontsize=50)
d1.imshow(mag_binary, cmap='gray')
d1.set_title('Gradient Magnitude', fontsize=50)
d2.imshow(dir_binary, cmap='gray')
d2.set_title('Gradient Direction', fontsize=50)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
return combined
###Output
_____no_output_____
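###Markdown
Previewing the binary lane mask on the warped frame from the previous cell (same filename assumption).
###Code
# detect_edges expects an RGB image, so feed it the warped colour frame
binary = detect_edges(warped)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____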
###Markdown
Sliding Window Histogram
###Code
def fit_polynomials(binary_warped, debug=False):
# Assuming you have created a warped binary image called "binary_warped"
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# Choose the number of sliding windows
nwindows = 9
# Set height of windows
window_height = np.int(binary_warped.shape[0]/nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated for each window
leftx_current = leftx_base
rightx_current = rightx_base
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,
(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),
(0,255,0), 2)
cv2.rectangle(out_img,
(win_xright_low,win_y_low),
(win_xright_high,win_y_high),
(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window
good_left_inds = ((nonzeroy >= win_y_low) &
(nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) &
(nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) &
(nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) &
(nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit a second order polynomial to each
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
if debug:
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
cv2.imwrite('output_images/test.jpg',
cv2.cvtColor(np.float32(out_img), cv2.COLOR_RGB2BGR))
plt.imshow(out_img)
plt.plot(left_fitx, ploty, color='yellow')
plt.plot(right_fitx, ploty, color='yellow')
plt.xlim(0, 1280)
plt.ylim(720, 0)
plt.show()
return ploty, left_fitx, right_fitx, left_fit, right_fit
def fast_fit_polynomials(binary_warped, left_fit, right_fit):
# Assume you now have a new warped binary image
# from the next frame of video (also called "binary_warped")
# It's now much easier to find line pixels!
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
margin = 100
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] - margin)) &
(nonzerox < (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] - margin)) &
(nonzerox < (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
# Fit a second order polynomial to each
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
return ploty, left_fitx, right_fitx, left_fit, right_fit
###Output
_____no_output_____
###Markdown
Curvature Calculation
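For a second-order fit $x = Ay^2 + By + C$, the radius of curvature at a given $y$ is $R = \big(1 + (2Ay + B)^2\big)^{3/2} / |2A|$; the code below evaluates this at the bottom of the image after re-fitting the curves in meter units.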
###Code
def get_curvature(ploty, left_fitx, right_fitx):
y_eval = np.max(ploty)
# Define conversions in x and y from pixel space to meters
ym_per_pix = 20 / 720 # meters per pixel in y dimension
xm_per_pix = 3.7 / 700 # meters per pixel in x dimension
# Fit new polynomials to x,y in world space
left_fit_cr = np.polyfit(ploty*ym_per_pix, left_fitx*xm_per_pix, 2)
right_fit_cr = np.polyfit(ploty*ym_per_pix, right_fitx*xm_per_pix, 2)
# Calculate the new radii of curvature
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
# Now our radius of curvature is in meters
# print(left_curverad, 'm', right_curverad, 'm')
return (left_curverad+right_curverad)/2
def get_perspective_rectangles(image):
size_top=70
size_bottom=370
height, width = image.shape[0:2]
output_size = height/2
src = np.float32([[(width/2) - size_top, height*0.65],
[(width/2) + size_top, height*0.65],
[(width/2) + size_bottom, height-50],
[(width/2) - size_bottom, height-50]])
dst = np.float32([[(width/2) - output_size, (height/2) - output_size],
[(width/2) + output_size, (height/2) - output_size],
[(width/2) + output_size, (height/2) + output_size],
[(width/2) - output_size, (height/2) + output_size]])
return src, dst
###Output
_____no_output_____
###Markdown
Lane Annotation
###Code
def render_lane(image, ploty, left_fitx, right_fitx):
src, dst = get_perspective_rectangles(image)
Minv = cv2.getPerspectiveTransform(dst, src)
# Create an image to draw the lines on
warp_zero = np.zeros_like(image[:,:,0]).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (image.shape[1], image.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(image, 1, newwarp, 0.3, 0)
return result
###Output
_____no_output_____
###Markdown
Process Pipeline
###Code
global_left_fit = None
global_right_fit = None
def process_image(input_image):
global global_left_fit
global global_right_fit
# step 1: undistort image
image_undistort = undistort_image(input_image)
# step 2: perspective transform
image_transformed = perspective_transform(image_undistort)
# step 3: detect binary lane markings
image_binary = detect_edges(image_transformed)
# step 4: fit polynomials
if global_left_fit is not None:
ploty, left_fitx, right_fitx, left_fit, right_fit = fast_fit_polynomials(image_binary, global_left_fit, global_right_fit)
else:
ploty, left_fitx, right_fitx, left_fit, right_fit = fit_polynomials(image_binary)
global_left_fit = left_fit
global_right_fit = right_fit
# step 5: draw lane
output_lane = render_lane(image_undistort, ploty, left_fitx, right_fitx)
# step 6: print curvature
curv = get_curvature(ploty, left_fitx, right_fitx)
output_curvature = cv2.putText(output_lane,
"Curvature: " + str(int(curv)) + "m",
(900, 80),
cv2.FONT_HERSHEY_SIMPLEX,
1, [0, 0, 0], 2)
# step 7: print road position
xm_per_pix = 3.7/700
left_lane_pos = left_fitx[len(left_fitx)-1]
right_lane_pos = right_fitx[len(right_fitx)-1]
road_pos = (((left_lane_pos + right_lane_pos) / 2) - 640) * xm_per_pix
output_road_pos = cv2.putText(output_lane,
"Offset: {0:.2f}m".format(road_pos),
(900, 120),
cv2.FONT_HERSHEY_SIMPLEX,
1, [0, 0, 0], 2)
# output from processing step
output_image = output_road_pos
# function should always output color images
if len(output_image.shape) == 2:
return cv2.cvtColor(np.float32(output_image), cv2.COLOR_GRAY2RGB)
else:
return output_image
###Output
_____no_output_____
###Markdown
Test on Images
###Code
test_image = mpimg.imread('test_images/test5.jpg')
test_output = process_image(test_image)
# plt.imshow(test_image)
# plt.show()
plt.imshow(test_output)
# o = cv2.cvtColor(test_output, cv2.COLOR_RGB2BGR)
# cv2.imwrite('output_images/test.jpg', cv2.cvtColor(test_output, cv2.COLOR_RGB2BGR))
###Output
_____no_output_____
###Markdown
Video Annotation
###Code
from moviepy.editor import VideoFileClip
from IPython.display import HTML
project_output_file = "project_output_1.mp4"
project_video = VideoFileClip("harder_challenge_video.mp4")
project_output = project_video.fl_image(process_image)
%time project_output.write_videofile(project_output_file, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_output_file))
###Output
_____no_output_____
###Markdown
Advanced Lane Finding ProjectThe goals / steps of this project are the following:* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.* Apply a distortion correction to raw images.* Use color transforms, gradients, etc., to create a thresholded binary image.* Apply a perspective transform to rectify binary image ("birds-eye view").* Detect lane pixels and fit to find the lane boundary.* Determine the curvature of the lane and vehicle position with respect to center.* Warp the detected lane boundaries back onto the original image.* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
###Code
%matplotlib qt
from helper import *
#Calibrating camera
ret, mtx, dist, rvecs, tvecs = calibration()
# Tuning Warp Images for src and dst using straght_line image
straight = cv2.imread('test_images/straight_lines2.jpg')
xsize, ysize = (straight.shape[1], straight.shape[0])
# Pts for polylines use int32 type
src = np.int32([[xsize//2 - 50, 450 ],
[xsize//2 + 53, 450],
[xsize//2 + 465,ysize-10],
[xsize//2 - 430,ysize-10]])
cv2.polylines(straight, [src], True, (0,0,255), 3)
cv2.imshow("Straight",straight)
# Assume length of road cover is 30 meters and lanes are 3.7 meters apart
src = src.astype(np.float32)
dst = np.float32([[300, 0],
[1000, 0],
[1000, 720],
[300, 720]])
def filter_image(original_img):
#Undistort Image
undist = cv2.undistort(original_img, mtx, dist, None, mtx)
#Create Threshold
hls_img = hls_select(undist, thresh=(100,255))
mag_img = mag_thresh(undist, mag_thresh=(50, 255))
dir_img = dir_threshold(undist, thresh=(np.pi/4,2*np.pi/4))
sobelx_img = abs_sobel_thresh(undist, orient='x', thresh_min=50, thresh_max=100)
sobely_img = abs_sobel_thresh(undist, orient='y', thresh_min=50, thresh_max=100)
#Combine Threshold
combined = np.zeros_like(dir_img)
combined[((sobelx_img == 255) & (sobely_img == 255)) | ((mag_img == 255) & (dir_img == 255)) | (hls_img == 255)] = 255
return combined
def find_lane_pixels(binary_warped):
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Set height of windows - based on nwindows above and image shape
window_height = np.int(binary_warped.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
left_line.line_base_pos = leftx_base
right_line.line_base_pos = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
### TO-DO: Find the four below boundaries of the window ###
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
### Identify the nonzero pixels in x and y within the window ###
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
### If you found > minpix pixels, recenter next window ###
### (`right` or `leftx_current`) on their mean position ###
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds])) # Remove this when you add your function
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
left_line.allx , left_line.ally = leftx, lefty
right_line.allx, right_line.ally = rightx, righty
return leftx, lefty, rightx, righty, out_img
def fit_polynomial(binary_warped):
# Find our lane pixels first
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
### Fit a second order polynomial to each using `np.polyfit` ###
# For Pixel
left_fit= np.polyfit(lefty, leftx, 2)
right_fit= np.polyfit(righty, rightx, 2)
left_line.pix_current_fit = left_fit
right_line.pix_current_fit = right_fit
# For Actual Road, scale it with ym and xm per pixel
leftnewfit = np.polyfit(lefty*ym_per_pix, leftx*xm_per_pix, 2)
rightnewfit = np.polyfit(righty*ym_per_pix, rightx*xm_per_pix, 2)
left_line.diffs = left_line.current_fit - leftnewfit
right_line.diffs = right_line.current_fit - rightnewfit
left_line.current_fit = leftnewfit
right_line.current_fit = rightnewfit
# Generate x and y values for plotting
Line.ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
try:
# left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
# right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
left_line.pix_fitx = left_fit[0]*Line.ploty**2 + left_fit[1]*Line.ploty + left_fit[2]
right_line.pix_fitx = right_fit[0]*Line.ploty**2 + right_fit[1]*Line.ploty + right_fit[2]
left_line.detected = True
right_line.detected = True
except TypeError:
# Avoids an error if `left_fit` and `right_fit` are still None or incorrect
print('The function failed to fit a line!')
left_line.pix_fitx = 1*Line.ploty**2 + 1*Line.ploty
right_line.pix_fitx = 1*Line.ploty**2 + 1*Line.ploty
left_line.detected = False
right_line.detected = False
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Plots the left and right polynomials on the lane lines for pixel plotting
# plt.imshow(out_img)
# plt.plot(left_line.pix_fitx, Line.ploty, color='yellow')
# plt.plot(right_line.pix_fitx, Line.ploty, color='yellow')
# plt.xlim(0, 1280)
# plt.ylim(720, 0)
return out_img
def fit_poly(img_shape, leftx, lefty, rightx, righty):
### Fit a second order polynomial to each with np.polyfit() ###
# For Pixel
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# For Actual Road
leftnewfit = np.polyfit(lefty*ym_per_pix, leftx*xm_per_pix, 2)
rightnewfit = np.polyfit(righty*ym_per_pix, rightx*xm_per_pix, 2)
left_line.diffs = left_line.current_fit - leftnewfit
right_line.diffs = right_line.current_fit - rightnewfit
left_line.current_fit = leftnewfit
right_line.current_fit = rightnewfit
# Generate x and y values for plotting
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
### Calc both polynomials using ploty, left_fit and right_fit ###
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
return left_fitx, right_fitx, ploty
def search_around_poly(binary_warped):
# HYPERPARAMETER
# Choose the width of the margin around the previous polynomial to search
# The quiz grader expects 100 here, but feel free to tune on your own!
margin = 30
# Grab activated pixels
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
left_fit = left_line.pix_current_fit
right_fit = right_line.pix_current_fit
### Set the area of search based on activated x-values ###
### within the +/- margin of our polynomial function ###
### Hint: consider the window areas for the similarly named variables ###
### in the previous quiz, but change the windows to our new search area ###
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
left_line.allx, left_line.ally = leftx, lefty
right_line.allx, right_line.ally = rightx, righty
# Fit new polynomials
left_line.pix_fitx, right_line.pix_fitx, Line.ploty = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty)
left_line.line_base_pos = left_line.pix_fitx[719]
right_line.line_base_pos = right_line.pix_fitx[719]
## Visualization ##
# Create an image to draw on and an image to show the selection window
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
window_img = np.zeros_like(out_img)
# Color in left and right line pixels
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
left_line_window1 = np.array([np.transpose(np.vstack([left_line.pix_fitx-margin, Line.ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_line.pix_fitx+margin,
Line.ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
right_line_window1 = np.array([np.transpose(np.vstack([right_line.pix_fitx-margin, Line.ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_line.pix_fitx+margin,
Line.ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Draw the lane onto the warped blank image
cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))
cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))
result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
# Plot the polynomial lines onto the image
# plt.imshow(out_img)
# plt.plot(left_line.pix_fitx, Line.ploty, color='yellow')
# plt.plot(right_line.pix_fitx, Line.ploty, color='yellow')
# plt.xlim(0, 1280)
# plt.ylim(720, 0)
## End visualization steps ##
return result
# Create an image to draw the lines on
def project_lane(original, warped):
warp_zero = np.zeros_like(warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_line.pix_fitx, Line.ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_line.pix_fitx, Line.ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
# newwarp = cv2.warpPerspective(color_warp, Minv, (image.shape[1], image.shape[0]))
newwarp = warper(color_warp, dst, src)
# Combine the result with the original image
result = cv2.addWeighted(original, 1, newwarp, 0.3, 0)
# cv2.imshow("project lane", result)
return result
def measure_curvature(left_fit, right_fit):
'''
Calculates the radius of curvature of the fitted polynomials in meters.
'''
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = 720
# Implement the calculation of R_curve (radius of curvature) #####
left_radius = ((1 + (2*left_fit[0]*y_eval*ym_per_pix + left_fit[1])**2)**1.5) / np.absolute(2*left_fit[0])
right_radius = ((1 + (2*right_fit[0]*y_eval*ym_per_pix + right_fit[1])**2)**1.5) / np.absolute(2*right_fit[0])
return left_radius, right_radius
#Pipeline for Lane Detection
def process_image(img):
combined = filter_image(img)
warped = warper(combined, src, dst)
if left_line.detected is False or right_line.detected is False:
outimg = fit_polynomial(warped)
#left_pix_current_fit, right_pix_current_fit_y = fit_polynomial(warped)
else:
outimg = search_around_poly(warped)
#left_pix_current_fit, right_pix_current_fit_y = search_around_poly(warped, left_current_fit, right_current_fit)
left_line.radius_of_curvature, right_line.radius_of_curvature = measure_curvature(left_line.current_fit, right_line.current_fit)
if np.any(abs(right_line.diffs) > 0.05) or np.any(abs(left_line.diffs) > 0.05):
left_line.detected = False
right_line.detected = False
#project_lane()
projected = project_lane(img, warped)
cv2.putText(projected, "Radius of Curvature: " +
str(np.round((left_line.radius_of_curvature + right_line.radius_of_curvature)/2, 3)) + " m",
(20,50), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255,0,0), 2, cv2.LINE_AA)
centerdiff = (left_line.line_base_pos + right_line.line_base_pos)/2 - img.shape[1]/2
closerside = ""
if centerdiff > 0:
closerside = "left"
else:
closerside = "right"
cv2.putText(projected, "Vehicle is " +
str(abs(np.round(((left_line.line_base_pos + right_line.line_base_pos)/2 - warped.shape[1]/2) * xm_per_pix, 3))) + "m " +
closerside + " of center",
(20,120), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255,0,0), 2, cv2.LINE_AA)
return projected
#Testing Single Image
file = 'test_images/test5.jpg'
original = cv2.imread(file)
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/700 # meters per pixel in x dimension
left_line = Line()
right_line = Line()
out = process_image(original)
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/700 # meters per pixel in x dimension
left_line = Line()
right_line = Line()
white_output = 'output_images/harder_challenge_video_output.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("harder_challenge_video.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
t: 0%| | 0/1199 [00:00<?, ?it/s, now=None] |
docs/notebooks/Data_structures.ipynb | ###Markdown
Data structuresIn this notebook, we will explore some of the major data structures used in SLEAP and how they can be manipulated when generating predictions from trained models.A quick overview of the data structures before we start:- `Point`/`PredictedPoint` → Contains the `x` and `y` coordinates (and `score` for predictions) of a landmark.- `Instance`/`PredictedInstance` → Contains a set of `Point`/`PredictedPoint`s. This represents a single individual within a frame and may also contain an associated `Track`.- `Skeleton` → Defines the nodes and edges that define the set of unique landmark types that each point represents, e.g., "head", "tail", etc. This *does not contain positions* -- those are stored in individual `Point`s.- `LabeledFrame` → Contains a set of `Instance`/`PredictedInstance`s for a single frame.- `Labels` → Contains a set of `LabeledFrame`s and the associated metadata for the videos and other information related to the project or predictions. 1. Setup SLEAP and dataWe'll start by installing SLEAP and downloading some data and models to play around with.If you get a dependency error in subsequent cells, just click **Runtime** → **Restart runtime** to reload the packages.
###Code
# This should take care of all the dependencies on colab:
!pip uninstall -y opencv-python opencv-contrib-python && pip install sleap
# But to do it locally, we'd recommend the conda package (available on Windows + Linux):
# conda create -n sleap -c sleap -c conda-forge -c nvidia sleap
# Test video:
!wget https://storage.googleapis.com/sleap-data/reference/flies13/190719_090330_wt_18159206_rig1.2%4015000-17560.mp4
# Test video labels (from predictions/not necessary for inference benchmarking):
!wget https://storage.googleapis.com/sleap-data/reference/flies13/190719_090330_wt_18159206_rig1.2%4015000-17560.slp
# Bottom-up model:
# !wget https://storage.googleapis.com/sleap-data/reference/flies13/bu.210506_230852.multi_instance.n%3D1800.zip
# Top-down model (two-stage):
!wget https://storage.googleapis.com/sleap-data/reference/flies13/centroid.fast.210504_182918.centroid.n%3D1800.zip
!wget https://storage.googleapis.com/sleap-data/reference/flies13/td_fast.210505_012601.centered_instance.n%3D1800.zip
!ls -lah
import sleap
# This prevents TensorFlow from allocating all the GPU memory, which leads to issues on
# some GPUs/platforms:
sleap.disable_preallocation()
# This would hide GPUs from the TensorFlow altogether:
# sleap.use_cpu_only()
# Print some info:
sleap.versions()
sleap.system_summary()
###Output
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
SLEAP: 1.2.2
TensorFlow: 2.8.0
Numpy: 1.21.5
Python: 3.7.13
OS: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
GPUs: 1/1 available
Device: /physical_device:GPU:0
Available: True
Initalized: False
Memory growth: True
###Markdown
2. Data structures and inference SLEAP can read videos in a variety of different formats through the `sleap.load_video` high level API. Once loaded, the `sleap.Video` object allows you to access individual frames as if it were a standard numpy array.**Note:** The actual frames are not loaded until you access them so we don't blow up our memory when using long videos.
###Code
# Videos can be represented agnostic to the backend format
video = sleap.load_video("[email protected]")
# sleap.Video objects have a numpy-like interface:
print(video.shape)
# And we can load images in the video using array indexing:
imgs = video[:4]
print(imgs.shape, imgs.dtype)
###Output
(2560, 1024, 1024, 1)
(4, 1024, 1024, 1) uint8
###Markdown
The high level interface for loading models (`sleap.load_model()`) takes model folders or zipped folders as input. These are outputs from our training procedure and need to contain a `"best_model.h5"` and `"training_config.json"`. `best_model.h5` is an HDF5-serialized tf.keras.Model that was checkpointed during training. It includes the architecture as well as the weights, so they're standalone and don't need SLEAP -- BUT they do not contain the inference methods.`training_config.json` is a serialized `sleap.TrainingJobConfig` that contains metadata like what channels of the model correspond to which landmarks, etc.Top-down models have two stages: centroid and centered instance confidence maps, which we train and save out separately, so loading them together links them up into a single inference model.
###Code
# Top-down
predictor = sleap.load_model([
"centroid.fast.210504_182918.centroid.n=1800.zip",
"td_fast.210505_012601.centered_instance.n=1800.zip"
])
# Bottom-up
# predictor = sleap.load_model("bu.210506_230852.multi_instance.n=1800.zip")
###Output
_____no_output_____
###Markdown
The high level predictor creates all the SLEAP data structures after doing inference. For example:
###Code
labels = predictor.predict(video)
labels
###Output
_____no_output_____
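###Markdown
The returned `Labels` object can also be written straight to disk with `Labels.save()`; the filename here is just an example.
###Code
labels.save("predictions.slp")
###Output
_____no_output_____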
###Markdown
Labels contain not just the predicted data, but all the other associated data structures and metadata:
###Code
labels.videos
labels.skeletons
###Output
_____no_output_____
###Markdown
Individual labeled frames are accessible through a list-like interface:
###Code
labeled_frame = labels[0] # shortcut for labels.labeled_frames[0]
labeled_frame
###Output
_____no_output_____
###Markdown
Convenient methods allow for easy inspection:
###Code
labels[0].plot(scale=0.5)
###Output
_____no_output_____
###Markdown
The labeled frame is itself a container for instances:
###Code
labeled_frame.instances
instance = labeled_frame[0] # shortcut for labeled_frame.instances[0]
instance
###Output
_____no_output_____
###Markdown
Finally, instances are containers for points:
###Code
instance.points
###Output
_____no_output_____
###Markdown
These can be converted into concrete arrays:
###Code
pts = instance.numpy()
print(pts)
###Output
[[234.24438477 430.52001953]
[271.58944702 436.14611816]
[308.0289917 438.57119751]
[321.81674194 440.08728027]
[322.01965332 436.77008057]
[246.14302063 450.56182861]
[242.26322937 413.94976807]
[285.78167725 459.91564941]
[272.27996826 406.71759033]
[ nan nan]
[317.59976196 430.60525513]
[242.10380554 441.94561768]
[245.32002258 420.93609619]]
###Markdown
Images can be embedded together with the predictions in the same format:
###Code
labels = sleap.Labels(labels.labeled_frames[:4]) # crop to the first few labels for this example
labels.save("labels_with_images.pkg.slp", with_images=True, embed_all_labeled=True)
###Output
_____no_output_____
###Markdown
Let's delete the source data:
###Code
!rm "[email protected]"
###Output
_____no_output_____
###Markdown
And check out what happens when we load in some labels with embedded images:
###Code
labels = sleap.load_file("labels_with_images.pkg.slp")
labels
labels[0].plot(scale=0.5)
###Output
_____no_output_____
###Markdown
Data structuresIn this notebook, we will explore some of the major data structures used in SLEAP and how they can be manipulated when generating predictions from trained models.A quick overview of the data structures before we start:- `Point`/`PredictedPoint` → Contains the `x` and `y` coordinates (and `score` for predictions) of a landmark.- `Instance`/`PredictedInstance` → Contains a set of `Point`/`PredictedPoint`s. This represent a single individual within a frame and may also contain an associated `Track`.- `Skeleton` → Defines the nodes and edges that define the set of unique landmark types that each point represents, e.g., "head", "tail", etc. This *does not contain positions* -- those are stored in individual `Point`s.- `LabeledFrame` → Contains a set of `Instance`/`PredictedInstance`s for a single frame.- `Labels` → Contains a set of `LabeledFrame`s and the associated metadata for the videos and other information related to the project or predictions. 1. Setup SLEAP and dataWe'll start by installing SLEAP and downloading some data and models to play around with.If you get a dependency error in subsequent cells, just click **Runtime** → **Restart runtime** to reload the packages.
###Code
# This should take care of all the dependencies on colab:
!pip uninstall -y opencv-python opencv-contrib-python && pip install sleap
# But to do it locally, we'd recommend the conda package (available on Windows + Linux):
# conda create -n sleap -c sleap -c conda-forge -c nvidia sleap
# Test video:
!wget https://storage.googleapis.com/sleap-data/reference/flies13/190719_090330_wt_18159206_rig1.2%4015000-17560.mp4
# Test video labels (from predictions/not necessary for inference benchmarking):
!wget https://storage.googleapis.com/sleap-data/reference/flies13/190719_090330_wt_18159206_rig1.2%4015000-17560.slp
# Bottom-up model:
# !wget https://storage.googleapis.com/sleap-data/reference/flies13/bu.210506_230852.multi_instance.n%3D1800.zip
# Top-down model (two-stage):
!wget https://storage.googleapis.com/sleap-data/reference/flies13/centroid.fast.210504_182918.centroid.n%3D1800.zip
!wget https://storage.googleapis.com/sleap-data/reference/flies13/td_fast.210505_012601.centered_instance.n%3D1800.zip
!ls -lah
import sleap
# This prevents TensorFlow from allocating all the GPU memory, which leads to issues on
# some GPUs/platforms:
sleap.disable_preallocation()
# This would hide GPUs from the TensorFlow altogether:
# sleap.use_cpu_only()
# Print some info:
sleap.versions()
sleap.system_summary()
###Output
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
SLEAP: 1.2.2
TensorFlow: 2.8.0
Numpy: 1.21.5
Python: 3.7.13
OS: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
GPUs: 1/1 available
Device: /physical_device:GPU:0
Available: True
Initalized: False
Memory growth: True
###Markdown
2. Data structures and inference SLEAP can read videos in a variety of different formats through the `sleap.load_video` high level API. Once loaded, the `sleap.Video` object allows you to access individual frames as if the it were a standard numpy array.**Note:** The actual frames are not loaded until you access them so we don't blow up our memory when using long videos.
###Code
# Videos can be represented agnostic to the backend format
video = sleap.load_video("[email protected]")
# sleap.Video objects have a numpy-like interface:
print(video.shape)
# And we can load images in the video using array indexing:
imgs = video[:4]
print(imgs.shape, imgs.dtype)
###Output
(2560, 1024, 1024, 1)
(4, 1024, 1024, 1) uint8
###Markdown
The high-level interface for loading models (`sleap.load_model()`) takes model folders or zipped folders as input. These are outputs from our training procedure and need to contain a `"best_model.h5"` and a `"training_config.json"`. `best_model.h5` is an HDF5-serialized tf.keras.Model that was checkpointed during training. It includes the architecture as well as the weights, so these files are standalone and don't need SLEAP -- BUT they do not contain the inference methods. `training_config.json` is a serialized `sleap.TrainingJobConfig` that contains metadata like which channels of the model correspond to which landmarks, etc. Top-down models have two stages: centroid and centered instance confidence maps, which we train and save out separately, so loading them together links them up into a single inference model.
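As a side note (not needed for SLEAP inference), the checkpointed Keras model can be inspected directly with TensorFlow after extracting a model zip; the folder path below is a hypothetical placeholder:

```python
import tensorflow as tf

# Architecture + weights only; SLEAP's inference methods are not part of this file.
keras_model = tf.keras.models.load_model("extracted_model_folder/best_model.h5", compile=False)
keras_model.summary()
```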
###Code
# Top-down
predictor = sleap.load_model([
"centroid.fast.210504_182918.centroid.n=1800.zip",
"td_fast.210505_012601.centered_instance.n=1800.zip"
])
# Bottom-up
# predictor = sleap.load_model("bu.210506_230852.multi_instance.n=1800.zip")
###Output
_____no_output_____
###Markdown
The high level predictor creates all the SLEAP data structures after doing inference. For example:
###Code
labels = predictor.predict(video)
labels
###Output
_____no_output_____
###Markdown
Labels contain not just the predicted data, but all the other associated data structures and metadata:
###Code
labels.videos
labels.skeletons
###Output
_____no_output_____
###Markdown
Individual labeled frames are accessible through a list-like interface:
###Code
labeled_frame = labels[0] # shortcut for labels.labeled_frames[0]
labeled_frame
###Output
_____no_output_____
###Markdown
Convenient methods allow for easy inspection:
###Code
labels[0].plot(scale=0.5)
###Output
_____no_output_____
###Markdown
The labeled frame is itself a container for instances:
###Code
labeled_frame.instances
instance = labeled_frame[0] # shortcut for labeled_frame.instances[0]
instance
###Output
_____no_output_____
###Markdown
Finally, instances are containers for points:
###Code
instance.points
###Output
_____no_output_____
###Markdown
These can be converted into concrete arrays:
###Code
pts = instance.numpy()
print(pts)
###Output
[[234.24438477 430.52001953]
[271.58944702 436.14611816]
[308.0289917 438.57119751]
[321.81674194 440.08728027]
[322.01965332 436.77008057]
[246.14302063 450.56182861]
[242.26322937 413.94976807]
[285.78167725 459.91564941]
[272.27996826 406.71759033]
[ nan nan]
[317.59976196 430.60525513]
[242.10380554 441.94561768]
[245.32002258 420.93609619]]
###Markdown
Images can be embedded together with the predictions in the same format:
###Code
labels = sleap.Labels(labels.labeled_frames[:4]) # crop to the first few labels for this example
labels.save("labels_with_images.pkg.slp", with_images=True, embed_all_labeled=True)
###Output
_____no_output_____
###Markdown
Let's delete the source data:
###Code
!rm "[email protected]"
###Output
_____no_output_____
###Markdown
And check out what happens when we load in some labels with embedded images:
###Code
labels = sleap.load_file("labels_with_images.pkg.slp")
labels
labels[0].plot(scale=0.5)
###Output
_____no_output_____ |
build/lib/bacillusme/analysis/ion_stress.ipynb | ###Markdown
Ion stress response
###Code
from __future__ import print_function, division, absolute_import
import sys
import qminospy
from qminospy.me2 import ME_NLP
# python imports
from copy import copy
import re
from os.path import join
from collections import defaultdict
import pickle
# third party imports
import pandas as pd  # aliased as `pd`, which is how it is used below (e.g. pd.read_csv)
import cobra
from tqdm import tqdm
import numpy as np
import scipy
import matplotlib.pyplot as plt
# COBRAme
import cobrame
from cobrame.util import building, mu, me_model_interface
from cobrame.io.json import save_json_me_model, save_reduced_json_me_model
# ECOLIme
import ecolime
from ecolime import (transcription, translation, flat_files, generics, formulas, compartments)
from ecolime.util.helper_functions import *
%load_ext autoreload
%autoreload 2
print(cobra.__file__)
print(cobrame.__file__)
print(ecolime.__file__)
gene_dictionary = pd.read_csv('gene_name_dictionary.csv',index_col=1)
ions = ['na1_e','ca2_e','zn2_e','k_e','mg2_e','mn2_e']
# ions = ['mg2_e']
###Output
_____no_output_____
###Markdown
Load
###Code
eco_directory = join(flat_files.ecoli_files_dir, 'iJO1366.json')
ijo_directory = join(flat_files.ecoli_files_dir, 'iYO844.json')
uni_directory = join(flat_files.ecoli_files_dir, 'universal_model.json')
eco = cobra.io.load_json_model(eco_directory)
bsub = cobra.io.load_json_model(ijo_directory)
uni = cobra.io.load_json_model(uni_directory)
bsub.optimize()
base = bsub.solution.x_dict
base_mu = bsub.solution.f
###Output
_____no_output_____
###Markdown
M-model simulations
###Code
import itertools
marker = itertools.cycle(('v', 's', '^', 'o', '*'))
ion_rates = -np.arange(0,10,0.1)*1e-6
for ion in ['na1_e']:
base_flux = base['EX_'+ion]
gr = []
for rate in tqdm(ion_rates):
ex = bsub.reactions.get_by_id('EX_'+ion)
ex.lower_bound = rate
ex.upper_bound = rate
bsub.optimize()
gr.append(bsub.solution.f)
plt.plot(-ion_rates,gr,label=ion,marker=next(marker),markersize=8)
plt.legend()
###Output
100%|██████████| 100/100 [00:09<00:00, 11.05it/s]
###Markdown
ME-model simulations
###Code
with open('../me_models/solution.pickle', 'rb') as solution:
me = pickle.load(solution)
for ion in ions:
print(ion, me.solution.x_dict['EX_'+ion])
###Output
na1_e 0.0
ca2_e 0.0
zn2_e -1.833244939576966e-07
k_e -1.8080572856545358e-07
mg2_e -0.0013002379702828882
mn2_e -2.3718242480067265e-06
###Markdown
Add those reactions that account for osmosis
###Code
# Add a copy of transport reactions that do not need a transporter
for ion in ions:
uptake_rxns = get_transport_reactions(me,ion,comps=['e','c'],verbose=0)
osm_rxns = []
print('\n',ion)
for rxn in uptake_rxns:
stoich = rxn.stoichiometric_data.stoichiometry
direction = '_FWD' if 'FWD' in rxn.id else '_REV'
osm_id = rxn.id.split(direction)[0]+'_osm'
ion_position = stoich[ion] < 0
ub = ion_position * 1000
lb = (not ion_position) * -1000
if not hasattr(me.reactions,osm_id):
osm_rxn = cobrame.MEReaction(osm_id)
me.add_reaction(osm_rxn)
osm_rxn.add_metabolites(stoich)
osm_rxn.lower_bound=lb
osm_rxn.upper_bound=ub
osm_rxns.append(osm_rxn)
print(osm_rxn.id,osm_rxn.lower_bound,osm_rxn.upper_bound,osm_rxn.reaction)
###Output
na1_e
GLUt4_osm 0 1000 glu__L_e + na1_e <=> glu__L_c + na1_c
BILEt4_osm 0 1000 bilea_e + na1_e <=> bilea_c + na1_c
MALt4_osm 0 1000 mal__L_e + na1_e <=> mal__L_c + na1_c
PIt7_osm 0 1000 3.0 na1_e + pi_e <=> 3.0 na1_c + pi_c
ca2_e
CAt4_osm -1000 0 ca2_c + h_e <=> ca2_e + h_c
CITt14_osm 0 1000 ca2_e + cit_e + h_e <=> ca2_c + cit_c + h_c
zn2_e
ZNabc_osm 0 1000 atp_c + h2o_c + zn2_e <=> adp_c + h_c + pi_c + zn2_c
CITt15_osm 0 1000 cit_e + h_e + zn2_e <=> cit_c + h_c + zn2_c
k_e
Kt2r_osm 0 1000 h_e + k_e <=> h_c + k_c
CD2t4_osm 0 1000 cd2_c + h_e + k_e <=> cd2_e + h_c + k_c
ZN2t4_osm 0 1000 h_e + k_e + zn2_c <=> h_c + k_c + zn2_e
Kt1_osm 0 1000 k_e <=> k_c
mg2_e
MGt5_osm -1000 0 mg2_c <=> mg2_e
CITt10_osm 0 1000 cit_e + h_e + mg2_e <=> cit_c + h_c + mg2_c
ICITt10_osm 0 1000 h_e + icit_e + mg2_e <=> h_c + icit_c + mg2_c
mn2_e
MNt2_osm 0 1000 h_e + mn2_e <=> h_c + mn2_c
MNabc_osm 0 1000 atp_c + h2o_c + mn2_e <=> adp_c + h_c + mn2_c + pi_c
CITt11_osm 0 1000 cit_e + h_e + mn2_e <=> cit_c + h_c + mn2_c
###Markdown
Add ion uptake and exit separately
###Code
for ion in ions:
old_ion = me.metabolites.get_by_id(ion)
ion_base = ion.split('_')[0]
# Close old exchange
me.reactions.get_by_id('EX_{}'.format(ion)).lower_bound = 0
me.reactions.get_by_id('EX_{}'.format(ion)).upper_bound = 0
# Create new in/out metabolites
ion_in = cobrame.Metabolite(id='{}_in'.format(ion_base))
ion_out = cobrame.Metabolite(id='{}_out'.format(ion_base))
# Ion uptake (creation, all open)
rxn = cobrame.MEReaction(id='EX_{}_in'.format(ion_base))
rxn.add_metabolites({
ion_in:-1.0
})
me.add_reaction(rxn)
rxn.lower_bound = -1000
rxn.upper_bound = 0
# Ion exit
rxn = cobrame.MEReaction(id='DM_{}_out'.format(ion_base))
rxn.add_metabolites({
ion_out:-1.0
})
rxn.lower_bound = 0
rxn.upper_bound = 1000
me.add_reaction(rxn)
# Replace old met
uptake_rxns = get_transport_reactions(me,ion,comps=['e','c'],verbose=0)
exit_rxns = get_transport_reactions(me,ion,comps=['c','e'],verbose=0)
for rxn in uptake_rxns:
coeff = rxn.pop(old_ion)
rxn.add_metabolites({ion_in:coeff})
for rxn in exit_rxns:
coeff = rxn.pop(old_ion)
rxn.add_metabolites({ion_out:coeff})
#print('\n', ion)
_=get_reactions_of_met(me,ion_in.id)
_=get_reactions_of_met(me,ion_out.id)
def single_flux_response(me,rate,ion,mu_fix=False,verbosity=0):
ion_base = ion.split('_')[0]
me.reactions.get_by_id('EX_{}_in'.format(ion_base)).lower_bound = rate
me.reactions.get_by_id('EX_{}_in'.format(ion_base)).upper_bound = rate
solve_me_model(me, max_mu = 0.5, min_mu = .05, using_soplex=False,
precision = 1e-6,verbosity=verbosity,mu_fix=mu_fix)
try:
x_dict = me.solution.x_dict
except:
x_dict = {'status':0}
return rate, x_dict
###Output
_____no_output_____
###Markdown
Small fluxes
###Code
# Calculation at several ion uptake rates
ion_rates_dict = {}
ion_fractions = np.arange(0,2,0.2)
for ion in ions:
base_flux = me.solution.x_dict['EX_'+ion]
if base_flux:
ion_rates_dict[ion] = ion_fractions*base_flux
else:
ion_rates_dict[ion] = ion_fractions*-0.2e-7
# ion_rates_dict[ion] = ion_fractions*-0.2e-7
print('Ions to include: {}'.format(ions))
print('Rates to use: {}'.format(ion_rates_dict))
ion_result_macrodict = dict()
import multiprocessing as mp
NP = min([len(ion_fractions),10])
# Parallel processing
pbar = tqdm(total=len(ions)*len(ion_fractions))
for ion in ions:
flux_dict = {}
ion_rates = ion_rates_dict[ion]
pbar.set_description('Calculating {} ({} threads)'.format(ion,NP))
def collect_result(result):
pbar.update(1)
flux_dict[result[0]] = result[1]
pool = mp.Pool(NP)
for rate in ion_rates:
pool.apply_async(single_flux_response, args=(me,rate,ion), callback=collect_result)
pool.close()
pool.join()
flux_responses_me = pd.DataFrame.from_dict(flux_dict)
flux_responses_me = flux_responses_me[sorted(flux_responses_me.columns)]
ion_result_macrodict[ion] = flux_responses_me
# Write
for ion in ions:
ion_result_macrodict[ion].to_csv('{}_flux_responses.csv'.format(ion))
# Read
for ion in ions:
ion_result_macrodict[ion] = pd.read_csv('{}_flux_responses.csv'.format(ion),index_col=0)
import itertools
marker = itertools.cycle(('v', 's', '^', 'o', '*'))
fig,axes = plt.subplots(round(len(ions)/3),3,figsize=(13,round(len(ions))))
axes = axes.flatten()
plt.figure(figsize=(5,4))
for idx,ion in enumerate(ions):
ion_base = ion.split('_')[0]
flux_responses_me = ion_result_macrodict[ion]
fluxes = (-flux_responses_me.loc['EX_{}_in'.format(ion_base)])
axes[idx].plot(fluxes,flux_responses_me.loc['biomass_dilution'],
label = ion,marker = next(marker),markersize=8)
axes[idx].set_xlabel('Ion uptake')
axes[idx].set_ylabel('Growth rate')
axes[idx].set_title(ion)
fig.tight_layout()
#plt.legend()
#plt.tight_layout()
###Output
_____no_output_____
###Markdown
It appears that increased availability of ions tends to favor growth for Na and Ca. Zn does not change much. Potassium seems to greatly decrease growth. Is this due to the transporter expression?
###Code
plt.figure(figsize=(10,5))
marker = itertools.cycle(('v', 's', '^', 'o', '*'))
for idx,ion in enumerate(ions):
ion_base = ion.split('_')[0]
plt.subplot(2,3,idx+1)
flux_responses_me = pd.DataFrame.from_dict(ion_result_macrodict[ion])
flux_responses_me = flux_responses_me[sorted(flux_responses_me.columns)]
uptake_rxns = get_transport_reactions(me,ion.replace('e','c'),comps=['in','c'],verbose=0)
exit_rxns = get_transport_reactions(me,ion.replace('e','c'),comps=['c','out'],verbose=0)
transport_rxns = uptake_rxns + exit_rxns
for rxn in exit_rxns:
if not hasattr(rxn,'complex_data'):
continue
complex_id = rxn.complex_data.complex.id
formation_id = 'formation_{}'.format(complex_id)
plt.plot(-flux_responses_me.loc['EX_{}_in'.format(ion_base)],
flux_responses_me.loc[formation_id]/flux_responses_me.loc['biomass_dilution'],
label = complex_id,marker = next(marker),markersize=8)
plt.xlabel('Ion uptake')
plt.title(ion)
#plt.legend()
plt.tight_layout()
ion = 'k_c'
flux_responses_me = pd.DataFrame.from_dict(ion_result_macrodict[ion])
flux_responses_me = flux_responses_me[sorted(flux_responses_me.columns)]
transport_rxns = get_reactions_of_met(me,ion.replace('_c','_e'),verbose=0)
for rxn in transport_rxns:
if not hasattr(rxn,'complex_data'):
continue
complex_id = rxn.complex_data.complex.id
formation_id = 'formation_{}'.format(complex_id)
plt.plot(-flux_responses_me.loc['EX_{}_osm'.format(ion)],flux_responses_me.loc[formation_id],
label = formation_id)
plt.xlabel('Ion uptake')
plt.legend(bbox_to_anchor=(1, 1))
###Output
_____no_output_____
###Markdown
Big fluxes
###Code
# Calculation at several ion uptake rates
ion_rates_dict = {}
ion_fractions = -np.arange(0,2,0.2)
for ion in ions:
ion_rates_dict[ion] = ion_fractions
# ion_rates_dict[ion] = ion_fractions*-0.2e-7
print('Ions to include: {}'.format(ions))
print('Rates to use: {}'.format(ion_rates_dict))
ion_result_macrodict = dict()
import multiprocessing as mp
NP = min([len(ion_fractions),10])
# Parallel processing
pbar = tqdm(total=len(ions)*len(ion_fractions))
for ion in ions:
flux_dict = {}
ion_rates = ion_rates_dict[ion]
pbar.set_description('Calculating {} ({} threads)'.format(ion,NP))
def collect_result(result):
pbar.update(1)
flux_dict[result[0]] = result[1]
pool = mp.Pool(NP)
for rate in ion_rates:
pool.apply_async(single_flux_response, args=(me,rate,ion), callback=collect_result)
pool.close()
pool.join()
flux_responses_me = pd.DataFrame.from_dict(flux_dict)
flux_responses_me = flux_responses_me[sorted(flux_responses_me.columns)]
ion_result_macrodict[ion] = flux_responses_me
# Write
for ion in ions:
ion_result_macrodict[ion].to_csv('{}_big_flux_responses.csv'.format(ion))
# Read
for ion in ions:
ion_result_macrodict[ion] = pd.read_csv('{}_big_flux_responses.csv'.format(ion),index_col=0)
import itertools
marker = itertools.cycle(('v', 's', '^', 'o', '*'))
fig,axes = plt.subplots(round(len(ions)/3),3,figsize=(13,round(len(ions))))
axes = axes.flatten()
plt.figure(figsize=(5,4))
for idx,ion in enumerate(ions):
ion_base = ion.split('_')[0]
try:
flux_responses_me = ion_result_macrodict[ion]
fluxes = (-flux_responses_me.loc['EX_{}_in'.format(ion_base)])
axes[idx].plot(fluxes,flux_responses_me.loc['biomass_dilution'],
label = ion,marker = next(marker),markersize=8)
except:
pass
axes[idx].set_xlabel('Ion uptake')
axes[idx].set_ylabel('Growth rate')
axes[idx].set_title(ion)
fig.tight_layout()
#plt.legend()
#plt.tight_layout()
rxn
# Visualize protein expression profiles
plt.figure(figsize=(15,4))
import itertools
marker = itertools.cycle(('v', 's', '^', 'o', '*'))
flux_responses_me[abs(flux_responses_me)<1e-20] = 0
plt.figure(figsize=(12,4))
plt.subplots_adjust(wspace=0.3)
plt.subplot(1,2,1)
genes = ['ktrB','ktrA']
for gene_name,locus_id in gene_dictionary.loc[genes]['locus_id'].items():
expression = flux_responses_me.loc['translation_'+locus_id]
expression /= np.max(expression)
plt.plot(-flux_responses_me.loc['EX_k_in_osm'],expression,
label=gene_name,marker = next(marker),markersize=8)
plt.legend()
plt.xlabel('Potassium uptake')
plt.ylabel('Protein expression')
plt.title('Protein: K+ transporter KtrAB')
plt.subplot(1,2,2)
genes = ['ktrB','ktrA']
for gene_name,locus_id in gene_dictionary.loc[genes]['locus_id'].items():
expression = flux_responses_me.loc['translation_'+locus_id]
expression /= np.max(expression)
plt.plot(-flux_responses_me.loc['EX_k_c_osm'],expression,
label=gene_name,marker = next(marker),markersize=8)
plt.legend()
plt.xlabel('Potassium uptake')
plt.ylabel('Protein expression')
plt.title(genes)
###Output
_____no_output_____ |
dvc-3-automate-experiments.ipynb | ###Markdown
Install and init DVC. Prerequisites: - DVC and requirements.txt packages installed (if not - check README.md file for instructions)- A project repository is a Git repo. Checkout branch `tutorial`: ```bash git checkout -b dvc-tutorial``` Initialize DVC. References: - https://dvc.org/doc/get-started/initialize ```bash dvc init``` Commit changes: ```bash git add . && git commit -m "Initialize DVC"``` Build automated pipelines. Create `data_load` stage. First create the `data` directory: ```bash mkdir -p data``` then create the data_load pipeline stage: ```bash dvc run -n data_load \ -d src/data_load.py \ -o data/iris.csv \ -o data/classes.json \ -p data_load \ python src/data_load.py \ --config=params.yaml```
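As a quick sanity check (a minimal sketch, assuming a standard DVC 1.x layout), you can list what `dvc init` and the first `dvc run` created before committing:

```bash
git status -s   # typically shows .dvc/, dvc.yaml, dvc.lock and a data/.gitignore entry
cat dvc.yaml    # the stage definition written by `dvc run`
```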
###Code
%%bash
du -sh data/*
# Note: we use `tree -I ...` pattern to not list those files that match the wild-card pattern.
!tree -I dvc-venv
###Output
[01;34m.[00m
├── README.md
├── dvc-3-automate-experiments.ipynb
├── params.yaml
├── requirements.txt
└── [01;34msrc[00m
├── __init__.py
├── data_load.py
├── evaluate.py
├── featurization.py
├── split_dataset.py
└── train.py
1 directory, 10 files
###Markdown
dvc.yaml
###Code
!cat dvc.yaml
###Output
stages:
data_load:
cmd: python src/data_load.py --config=params.yaml
deps:
- src/data_load.py
params:
- data_load
outs:
- data/classes.json
- data/iris.csv
###Markdown
params.yaml
###Code
!cat params.yaml
###Output
data_load:
raw_data_path: data/iris.csv
classes_names_path: data/classes.json
featurize:
features_path: data/iris_featurized.csv
target_column: target
data_split:
test_size: 0.2
train_path: data/train.csv
test_path: data/test.csv
train:
model_path: data/model.joblib
evaluate:
metrics_file: data/metrics.json
confusion_matrix: data/cm.csv
###Markdown
Reproduce a pipeline
###Code
!dvc repro
###Output
Stage 'data_load' is cached - skipping run, checking out outputs core[39m>
[0m
###Markdown
Change params.yaml and reproduce Add a new line into `data_load` section: `dummy_param: dummy_value`
###Code
!dvc repro
###Output
Running stage 'data_load' with command: core[39m>
python src/data_load.py --config=params.yaml
Updating lock file 'dvc.lock' core[39m>
To track the changes with git, run:
git add dvc.lock
[0m
###Markdown
Build end-to-end Machine Learning pipeline. Stages: - extract features - split dataset - train - evaluate Add feature extraction stage: ```bash dvc run -n feature_extraction \ -d src/featurization.py \ -d data/iris.csv \ -o data/iris_featurized.csv \ -p data_load,featurize \ python src/featurization.py \ --config=params.yaml```
###Code
!ls
!cat dvc.yaml
import pandas as pd
features = pd.read_csv('data/iris_featurized.csv')
features.head()
###Output
_____no_output_____
###Markdown
Commit changes ```bash Check Git statusgit status -s``` ```bash Commit changes git add .git commit -m "Add stage features_extraction"``` Add split train/test stage ```bashdvc run -n split_dataset \ -d src/split_dataset.py \ -d data/iris_featurized.csv \ -o data/train.csv \ -o data/test.csv \ -p featurize,data_split \ python src/split_dataset.py \ --config=params.yaml``` ```bash Commit changesgit add .git commit -m "Add stage split_dataset"```
###Code
!cat dvc.yaml
###Output
stages:
data_load:
cmd: python src/data_load.py --config=params.yaml
deps:
- src/data_load.py
params:
- data_load
outs:
- data/classes.json
- data/iris.csv
feature_extraction:
cmd: python src/featurization.py --config=params.yaml
deps:
- data/iris.csv
- src/featurization.py
params:
- data_load
- featurize
outs:
- data/iris_featurized.csv
split_dataset:
cmd: python src/split_dataset.py --config=params.yaml
deps:
- data/iris_featurized.csv
- src/split_dataset.py
params:
- data_split
- featurize
outs:
- data/test.csv
- data/train.csv
###Markdown
Add train stage ```bashdvc run -n train \ -d src/train.py \ -d data/train.csv \ -o data/model.joblib \ -p data_split,train \ python src/train.py \ --config=params.yaml``` ```bash Commit changesgit add .git commit -m "Add stage train"```
###Code
!cat dvc.yaml
###Output
stages:
data_load:
cmd: python src/data_load.py --config=params.yaml
deps:
- src/data_load.py
params:
- data_load
outs:
- data/classes.json
- data/iris.csv
feature_extraction:
cmd: python src/featurization.py --config=params.yaml
deps:
- data/iris.csv
- src/featurization.py
params:
- data_load
- featurize
outs:
- data/iris_featurized.csv
split_dataset:
cmd: python src/split_dataset.py --config=params.yaml
deps:
- data/iris_featurized.csv
- src/split_dataset.py
params:
- data_split
- featurize
outs:
- data/test.csv
- data/train.csv
train:
cmd: python src/train.py --config=params.yaml
deps:
- data/train.csv
- src/train.py
params:
- data_split
- train
outs:
- data/model.joblib
###Markdown
Add evaluate stage ```bashdvc run -n evaluate \ -d src/evaluate.py \ -d data/test.csv \ -d data/model.joblib \ -d data/classes.json \ -m data/metrics.json \ --plots data/cm.csv \ -p data_load,data_split,train,evaluate \ python src/evaluate.py \ --config=params.yaml``` ```bash Commit changesgit add .git commit -m "Add stage evaluate"```
###Code
!cat dvc.yaml
###Output
stages:
data_load:
cmd: python src/data_load.py --config=params.yaml
deps:
- src/data_load.py
params:
- data_load
outs:
- data/classes.json
- data/iris.csv
feature_extraction:
cmd: python src/featurization.py --config=params.yaml
deps:
- data/iris.csv
- src/featurization.py
params:
- data_load
- featurize
outs:
- data/iris_featurized.csv
split_dataset:
cmd: python src/split_dataset.py --config=params.yaml
deps:
- data/iris_featurized.csv
- src/split_dataset.py
params:
- data_split
- featurize
outs:
- data/test.csv
- data/train.csv
train:
cmd: python src/train.py --config=params.yaml
deps:
- data/train.csv
- src/train.py
params:
- data_split
- train
outs:
- data/model.joblib
evaluate:
cmd: python src/evaluate.py --config=params.yaml
deps:
- data/classes.json
- data/model.joblib
- data/test.csv
- src/evaluate.py
params:
- data_load
- data_split
- evaluate
- train
metrics:
- data/metrics.json
plots:
- data/cm.csv
###Markdown
Experimenting with reproducible pipelines. How to reproduce experiments? > The most exciting part of DVC is reproducibility.>> Reproducibility is the time you are getting benefits out of DVC instead of spending time defining the ML pipelines.> DVC tracks all the dependencies, which helps you iterate on ML models faster without thinking what was affected by your last change.>> In order to track all the dependencies, DVC finds and reads ALL the DVC-files in a repository and builds a dependency graph (DAG) based on these files.> This is one of the differences between DVC reproducibility and traditional Makefile-like build automation tools (Make, Maven, Ant, Rakefile etc). It was designed in such a way to localize specification of DAG nodes. If you run repro on any created DVC-file from our repository, nothing happens because nothing was changed in the defined pipeline. (c) dvc.org https://dvc.org/doc/tutorial/reproducibility
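Two optional commands are useful at this point (a minimal sketch; `dvc dag` requires a reasonably recent DVC release): `dvc dag` prints the dependency graph that DVC built from `dvc.yaml`, and `dvc status` lists the stages whose dependencies changed and would therefore be re-executed by `dvc repro`.

```bash
dvc dag      # visualize the pipeline graph
dvc status   # show which stages are out of date
```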
###Code
# Nothing to reproduce
!dvc repro
###Output
Stage 'data_load' didn't change, skipping core[39m>
Stage 'feature_extraction' didn't change, skipping
Stage 'split_dataset' didn't change, skipping
Stage 'train' didn't change, skipping
Stage 'evaluate' didn't change, skipping
Data and pipelines are up to date.
[0m
###Markdown
Experiment 1: Add features. Create a new experiment branch. Before editing the src/featurization.py file, please create and checkout a new branch __exp1-ratio-features__: ```bash git checkout -b exp1-ratio-features && git branch``` Update featurization.py: in file __featurization.py__, in function `get_features()`, after the line ```python features = dataset.copy()```add the lines:```python features['sepal_length_to_sepal_width'] = features['sepal_length'] / features['sepal_width'] features['petal_length_to_petal_width'] = features['petal_length'] / features['petal_width']``` Reproduce pipeline
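For reference, a sketch of how `get_features()` might look after this edit (the surrounding body is assumed; only the two ratio lines come from the tutorial itself):

```python
def get_features(dataset):
    # Assumed original body: start from a copy of the raw dataset
    features = dataset.copy()
    # Added ratio features
    features['sepal_length_to_sepal_width'] = features['sepal_length'] / features['sepal_width']
    features['petal_length_to_petal_width'] = features['petal_length'] / features['petal_width']
    return features
```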
###Code
!dvc repro
# Check features used in this pipeline
import pandas as pd
features = pd.read_csv('data/iris_featurized.csv')
features.head()
!git status
# Get difference with metric from previous pipeline
!dvc metrics diff --all
###Output
Path Metric Old New Change core[39m>
data/metrics.json f1_score 0.15385 0.15385 0.0
[0m
###Markdown
Commit the experiment changes: ```bash git add . && git commit -m "Experiment with new features" && git tag -a "exp1_ratio_features" -m "Experiment with new features"``` Experiment 2: Tune Logistic Regression. Create a new experiment branch: ```bash git checkout -b exp2-tuning-logreg && git branch```
###Code
# Nothing to reproduce since code was checked out by `git checkout`
# and data files were checked out by `dvc checkout`
!dvc repro
###Output
Stage 'data_load' didn't change, skipping core[39m>
Stage 'feature_extraction' didn't change, skipping
Stage 'split_dataset' didn't change, skipping
Stage 'train' didn't change, skipping
Stage 'evaluate' didn't change, skipping
Data and pipelines are up to date.
[0m
###Markdown
Tuning parameters: in file __train.py__, replace the LogisticRegression params with:```python clf = LogisticRegression(C=0.01, solver='lbfgs', multi_class='multinomial', max_iter=100)```__Note__: here we changed the logistic regression hyperparameter C to 0.01. https://dvc.org/doc/tutorials/get-started/experiments#tuning-parameters Reproduce pipelines
###Code
# Re-run pipeline
!dvc repro
# Get difference with metric from previous pipeline
!cat data/metrics.json
!dvc metrics show
!dvc metrics diff --all
###Output
Path Metric Old New Change core[39m>
data/metrics.json f1_score 0.15385 0.93056 0.77671
[0m
###Markdown
Commit changes: ```bash git add . && git commit -m "Tune model. LogisticRegression. C=0.01" && git tag -a "exp2_tuning_logreg" -m "Tune model. LogisticRegression. C=0.01"``` Experiment 3: Use SVM. Create a new experiment branch: ```bash git checkout -b exp3-svm``` Update train.py: in file __train.py__, replace the line```python clf = LogisticRegression(C=0.1, solver='newton-cg', multi_class='multinomial', max_iter=100)```with the line```python clf = SVC(C=0.01, kernel='linear', gamma='scale', degree=5)``` Reproduce pipeline
###Code
!dvc repro
!dvc metrics show
!git status
###Output
On branch exp3-svm
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
[31mmodified: dvc.lock[m
[31mmodified: src/train.py[m
no changes added to commit (use "git add" and/or "git commit -a")
###Markdown
Commit changes: ```bash git add . && git commit -m "Experiment 3 with SVM estimator" && git tag -a "exp3_svm" -m "Experiment 3 with SVM estimator"``` Merge the best experiment into the `dvc-tutorial` branch: ```bash git checkout dvc-tutorial && git merge exp3_svm``` Compare experiments. Compare params
###Code
# Get params diffs
!dvc params diff
# Compare parameters with a specific commit, a tag or any revision
!dvc params diff --all
!dvc params diff --show-json --all
!dvc params diff --show-md --all
# To see the difference between two specific commits, both need to be specified:
!git log
!dvc params diff 7619688214cc3b9fe3d3b59674c07c12fc134b47 HEAD^
###Output
[0m core[39m>
###Markdown
Show metrics
###Code
# this pipeline metrics
!dvc metrics show
# show metrics of all committed pipelines (all branches and tags)
!dvc metrics show -a -T
###Output
dvc-tutorial: core[39m>
data/metrics.json:
f1_score: 0.9665831244778613
exp1-ratio-features:
data/metrics.json:
f1_score: 0.15384615384615383
exp2-tuning-logreg:
data/metrics.json:
f1_score: 0.9305555555555555
exp3-svm:
data/metrics.json:
f1_score: 0.9665831244778613
exp1_ratio_features:
data/metrics.json:
f1_score: 0.15384615384615383
exp2_tuning_logreg:
data/metrics.json:
f1_score: 0.9305555555555555
exp3_svm:
data/metrics.json:
f1_score: 0.9665831244778613
[0m
###Markdown
Compare metrics (get differences)
###Code
!dvc metrics diff
# --all - list all metrics, even those without changes
!dvc metrics diff --all
###Output
Path Metric Old New Change core[39m>
data/metrics.json f1_score 0.96658 0.96658 0.0
[0m
###Markdown
* To compare the metrics of the current commit with those of another commit, you need to specify the other (old) commit:
###Code
# Compare old and new branches
!dvc metrics diff exp1-ratio-features exp3-svm
# Equivalent to `!dvc metrics diff exp1-ratio-features dvc-tutorial`, because dvc-tutorial - current branch
!dvc metrics diff exp1-ratio-features
!dvc metrics diff exp1-ratio-features --show-md
###Output
| Path | Metric | Old | New | Change | core[39m>
|-------------------|----------|---------|---------|----------|
| data/metrics.json | f1_score | 0.15385 | 0.96658 | 0.81274 |
[0m
###Markdown
Build Plots
###Code
from IPython.display import IFrame
###Output
_____no_output_____
###Markdown
Show
###Code
!dvc plots show --template confusion "data/cm.csv" -x actual -y predicted -o data/plots-show.html
IFrame(src='data/plots-show.html', width=800, height=500)
###Output
_____no_output_____
###Markdown
Diff
###Code
# Build metrics plots for all 3 experiments
!dvc plots diff -t confusion -o data/plots-diff.html exp1-ratio-features exp3-svm -x predicted
IFrame(src='data/plots-diff.html', width=800, height=500)
###Output
_____no_output_____ |
docs/contribute/benchmarks_latest_results/Prod3b/CTAN_Zd20_AzSouth_NSB1x_baseline_pointsource/DL3/benchmarks_DL3_IRFs_and_sensitivity.ipynb | ###Markdown
Instrument Response Functions (IRFs) and sensitivity **Author(s):** - Dr. Michele Peresano (CEA-Saclay/IRFU/DAp/LEPCHE), 2020- Alice Donini (INFN Sezione di Trieste and Universita degli Studi di Udine), 2020- Gaia Verna (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France), 2020based on [pyirf](https://github.com/cta-observatory/pyirf/blob/master/docs/notebooks/) .**Description:**This notebook contains DL3 and benchmarks for the _protopipe_ pipeline. Latest performance results cannot be shown on this public documentation and are therefore hosted at [this RedMine page](https://forge.in2p3.fr/projects/benchmarks-reference-analysis/wiki/Protopipe_performance_data) .Note that:- a more general set of benchmarks is being defined in cta-benchmarks/ctaplot,- follow [this](https://www.overleaf.com/16933164ghbhvjtchknf) document by adding new benchmarks or proposing new ones.**Requirements:**To run this notebook you will need a set of DL2 files produced on the grid with a performance script such as ``make_performance_EventDisplay.py`` .The MC production to be used and the appropriate set of files to use for this notebook can be found [here](https://forge.in2p3.fr/projects/step-by-step-reference-mars-analysis/wikiThe-MC-sample ).The DL2 data format required to run the notebook is the current one used by _protopipe_ , but it will converge to the one from _ctapipe_.**Development and testing:** As with any other part of _protopipe_ and being part of the official repository, this notebook can be further developed by any interested contributor. The execution of this notebook is not currently automatic, it must be done locally by the user - preferably _before_ pushing a pull-request. **IMPORTANT:** Please, if you wish to contribute to this notebook, before pushing anything to your branch (better even before opening the PR) clear all the output and remove any local directory paths that you used for testing (leave empty strings).**TODO:** - ... Table of contents* [Optimized cuts](Optimized-cuts) - [Direction cut](Direction-cut) - [Gamma/Hadron separation](Gamma/Hadron-separation)* [Differential sensitivity from cuts optimization](Differential-sensitivity-from-cuts-optimization)* [Sensitivity against requirements](Sensitivity-against-requirements)* [Sensitivity comparison between pipelines](Sensitivity-comparison-between-pipelines)* [IRFs](IRFs) - [Effective area](Effective-area) - [Point Spread Function](Point-Spread-Function) + [Angular resolution](Angular-resolution) - [Energy dispersion](Energy-dispersion) + [Energy resolution](Energy-resolution) - [Background rate](Background-rate) Imports[back to top](Table-of-contents)
###Code
# From the standard library
import os
from pathlib import Path
# From pyirf
import pyirf
from pyirf.binning import bin_center
from pyirf.utils import cone_solid_angle
# From other 3rd-party libraries
import numpy as np
import astropy.units as u
from astropy.io import fits
from astropy.table import QTable, Table, Column
import uproot
import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter
%matplotlib inline
plt.rcParams['figure.figsize'] = (9, 6)
###Output
_____no_output_____
###Markdown
Input data[back to top](Table-of-contents)
###Code
# First we check if a _plots_ folder exists already.
# If not, we create it.
Path("./plots").mkdir(parents=True, exist_ok=True)
###Output
_____no_output_____
###Markdown
Protopipe[back to top](Table-of-contents)
###Code
#Path to the performance folder
parent_dir = "" # path to 'analyses' folder
analysisName = ""
infile = ""
production = infile.split("protopipe_")[1].split("_Time")[0]
protopipe_file = Path(parent_dir, analysisName, "data/DL3", infile)
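# Optional sanity check (a minimal sketch): list the HDUs stored in the DL3 file
# so that a missing table (e.g. SENSITIVITY or RAD_MAX) is caught before plotting.
with fits.open(protopipe_file) as hdus:
    hdus.info()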
###Output
_____no_output_____
###Markdown
ASWG performance[back to top](Table-of-contents)
###Code
parent_dir_aswg = ""
# MARS performance (available here: https://forge.in2p3.fr/projects/step-by-step-reference-mars-analysis/wiki)
indir_CTAMARS = ""
infile_CTAMARS = "SubarrayLaPalma_4L15M_south_IFAE_50hours_20190630.root"
MARS_performance = uproot.open(Path(parent_dir_aswg, indir_CTAMARS, infile_CTAMARS))
MARS_label = "CTAMARS (2019)"
# ED performance (available here: https://forge.in2p3.fr/projects/cta_analysis-and-simulations/wiki/Prod3b_based_instrument_response_functions)
indir_ED = ""
infile_ED = "CTA-Performance-North-20deg-S-50h_20181203.root"
ED_performance = uproot.open(Path(parent_dir_aswg, indir_ED, infile_ED))
ED_label = "EventDisplay (2018)"
###Output
_____no_output_____
###Markdown
Requirements[back to top](Table-of-contents)
###Code
indir = './requirements'
site = 'North'
obs_time = '50h'
# Full array
infiles = dict(sens=f'/{site}-{obs_time}.dat')
requirements = dict()
for key in infiles.keys():
requirements[key] = Table.read(indir + infiles[key], format='ascii')
requirements['sens'].add_column(Column(data=(10**requirements['sens']['col1']), name='ENERGY'))
requirements['sens'].add_column(Column(data=requirements['sens']['col2'], name='SENSITIVITY'))
###Output
_____no_output_____
###Markdown
Optimized cuts[back to top](Table-of-contents) Direction[back to top](Table-of-contents)
###Code
# protopipe
rad_max = QTable.read(protopipe_file, hdu='RAD_MAX')[0]
plt.errorbar(
0.5 * (rad_max['ENERG_LO'] + rad_max['ENERG_HI'])[1:-1].to_value(u.TeV),
rad_max['RAD_MAX'].T[1:-1, 0].to_value(u.deg),
xerr=0.5 * (rad_max['ENERG_HI'] - rad_max['ENERG_LO'])[1:-1].to_value(u.TeV),
ls='',
label='protopipe',
color='DarkOrange'
)
# ED
theta_cut_ed, edges = ED_performance['ThetaCut;1'].to_numpy()
plt.errorbar(
bin_center(10**edges),
theta_cut_ed,
xerr=np.diff(10**edges),
ls='',
label='EventDisplay',
color='DarkGreen'
)
# MARS
theta_cut_ed = np.sqrt(MARS_performance['Theta2Cut;1'].to_numpy()[0])
edges = MARS_performance['Theta2Cut;1'].to_numpy()[1]
plt.errorbar(
bin_center(10**edges),
theta_cut_ed,
xerr=np.diff(10**edges),
ls='',
label='MARS',
color='DarkBlue'
)
plt.legend()
plt.ylabel('Direction cut [deg]')
plt.xlabel('Reconstructed energy [TeV]')
plt.xscale('log')
plt.title(production)
plt.grid()
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
Gamma/Hadron separation[back to top](Table-of-contents)
###Code
# protopipe
gh_cut = QTable.read(protopipe_file, hdu='GH_CUTS')[1:-1]
plt.errorbar(
0.5 * (gh_cut['low'] + gh_cut['high']).to_value(u.TeV),
gh_cut['cut'],
xerr=0.5 * (gh_cut['high'] - gh_cut['low']).to_value(u.TeV),
ls='',
label='protopipe',
color='DarkOrange'
)
plt.legend()
plt.ylabel('gamma/hadron cut')
plt.xlabel('Reconstructed energy [TeV]')
plt.xscale('log')
plt.title(production)
plt.grid()
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
Differential sensitivity from cuts optimization[back to top](Table-of-contents)
###Code
# [1:-1] removes under/overflow bins
sensitivity_protopipe = QTable.read(protopipe_file, hdu='SENSITIVITY')[1:-1]
# make it print nice
sensitivity_protopipe['reco_energy_low'].info.format = '.3g'
sensitivity_protopipe['reco_energy_high'].info.format = '.3g'
sensitivity_protopipe['reco_energy_center'].info.format = '.3g'
sensitivity_protopipe['relative_sensitivity'].info.format = '.2g'
sensitivity_protopipe['flux_sensitivity'].info.format = '.3g'
for k in filter(lambda k: k.startswith('n_'), sensitivity_protopipe.colnames):
sensitivity_protopipe[k].info.format = '.1f'
sensitivity_protopipe
###Output
_____no_output_____
###Markdown
Sensitivity against requirements[back to top](Table-of-contents)
###Code
plt.figure(figsize=(12,8))
unit = u.Unit('erg cm-2 s-1')
# protopipe
e = sensitivity_protopipe['reco_energy_center']
w = (sensitivity_protopipe['reco_energy_high'] - sensitivity_protopipe['reco_energy_low'])
s = (e**2 * sensitivity_protopipe['flux_sensitivity'])
plt.errorbar(
e.to_value(u.TeV),
s.to_value(unit),
xerr=w.to_value(u.TeV) / 2,
ls='',
label='protopipe',
color='DarkOrange'
)
# Add requirements
plt.plot(requirements['sens']['ENERGY'],
requirements['sens']['SENSITIVITY'],
color='black',
ls='--',
lw=2,
label='Requirements'
)
# Style settings
plt.title(f'Minimal Flux Satisfying Requirements for {obs_time} - {site} site')
plt.xscale("log")
plt.yscale("log")
plt.ylabel(rf"$(E^2 \cdot \mathrm{{Flux Sensitivity}}) /$ ({unit.to_string('latex')})")
plt.xlabel("Reco Energy [TeV]")
plt.grid(which="both")
plt.legend()
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
Sensitivity comparison between pipelines[back to top](Table-of-contents)
###Code
plt.figure(figsize=(12,8))
fig, (ax_sens, ax_ratio) = plt.subplots(
2, 1,
gridspec_kw={'height_ratios': [4, 1]},
sharex=True,
)
unit = u.Unit('erg cm-2 s-1')
# Add requirements
ax_sens.plot(requirements['sens']['ENERGY'],
requirements['sens']['SENSITIVITY'],
color='black',
ls='--',
lw=2,
label='Requirements'
)
# protopipe
e = sensitivity_protopipe['reco_energy_center']
w = (sensitivity_protopipe['reco_energy_high'] - sensitivity_protopipe['reco_energy_low'])
s_p = (e**2 * sensitivity_protopipe['flux_sensitivity'])
ax_sens.errorbar(
e.to_value(u.TeV),
s_p.to_value(unit),
xerr=w.to_value(u.TeV) / 2,
ls='',
label='protopipe',
color='DarkOrange'
)
# ED
s_ED, edges = ED_performance["DiffSens"].to_numpy()
yerr = ED_performance["DiffSens"].errors()
bins = 10**edges
x = bin_center(bins)
width = np.diff(bins)
ax_sens.errorbar(
x,
s_ED,
xerr=width/2,
yerr=yerr,
label=ED_label,
ls='',
color='DarkGreen'
)
# MARS
s_MARS, edges = MARS_performance["DiffSens"].to_numpy()
yerr = MARS_performance["DiffSens"].errors()
bins = 10**edges
x = bin_center(bins)
width = np.diff(bins)
ax_sens.errorbar(
x,
s_MARS,
xerr=width/2,
yerr=yerr,
label=MARS_label,
ls='',
color='DarkBlue'
)
ax_ratio.errorbar(
e.to_value(u.TeV),
s_p.to_value(unit) / s_ED,
xerr=w.to_value(u.TeV)/2,
ls='',
label = "",
color='DarkGreen'
)
ax_ratio.errorbar(
e.to_value(u.TeV),
s_p.to_value(unit) / s_MARS,
xerr=w.to_value(u.TeV)/2,
ls='',
label = "",
color='DarkBlue'
)
ax_ratio.axhline(1, color = 'DarkOrange')
ax_ratio.set_yscale('log')
ax_ratio.set_xlabel("Reconstructed energy [TeV]")
ax_ratio.set_ylabel('Ratio')
ax_ratio.grid()
ax_ratio.yaxis.set_major_formatter(ScalarFormatter())
ax_ratio.set_ylim(0.5, 2.0)
ax_ratio.set_yticks([0.5, 2/3, 1, 3/2, 2])
ax_ratio.set_yticks([], minor=True)
# Style settings
ax_sens.set_title(f'Minimal Flux Satisfying Requirements for 50 hours \n {production}')
ax_sens.set_xscale("log")
ax_sens.set_yscale("log")
ax_sens.set_ylabel(rf"$E^2 \cdot \mathrm{{Flux Sensitivity}} $ [{unit.to_string('latex')}]")
ax_sens.grid(which="both")
ax_sens.legend()
fig.tight_layout(h_pad=0)
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
IRFs[back to top](Table-of-contents) Effective area[back to top](Table-of-contents)
###Code
# protopipe
# uncomment the other strings to see effective areas
# for the different cut levels. Left out here for better
# visibility of the final effective areas.
suffix =''
#'_NO_CUTS'
#'_ONLY_GH'
#'_ONLY_THETA'
area = QTable.read(protopipe_file, hdu='EFFECTIVE_AREA' + suffix)[0]
plt.errorbar(
0.5 * (area['ENERG_LO'] + area['ENERG_HI']).to_value(u.TeV)[1:-1],
area['EFFAREA'].to_value(u.m**2).T[1:-1, 0],
xerr=0.5 * (area['ENERG_LO'] - area['ENERG_HI']).to_value(u.TeV)[1:-1],
ls='',
label='protopipe ' + suffix,
color='DarkOrange'
)
# ED
y, edges = ED_performance["EffectiveAreaEtrue"].to_numpy()
yerr = ED_performance["EffectiveAreaEtrue"].errors()
x = bin_center(10**edges)
xerr = 0.5 * np.diff(10**edges)
plt.errorbar(x,
y,
xerr=xerr,
yerr=yerr,
ls='',
label=ED_label,
color='DarkGreen'
)
# MARS
y, edges = MARS_performance["EffectiveAreaEtrue"].to_numpy()
yerr = MARS_performance["EffectiveAreaEtrue"].errors()
x = bin_center(10**edges)
xerr = 0.5 * np.diff(10**edges)
plt.errorbar(x,
y,
xerr=xerr,
yerr=yerr,
ls='',
label=MARS_label,
color='DarkBlue'
)
# Style settings
plt.xscale("log")
plt.yscale("log")
plt.xlabel("True energy [TeV]")
plt.ylabel("Effective collection area [m²]")
plt.title(production)
plt.grid(which="both")
plt.legend()
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
Point Spread Function[back to top](Table-of-contents)
###Code
psf_table = QTable.read(protopipe_file, hdu='PSF')[0]
# select the only fov offset bin
psf = psf_table['RPSF'].T[:, 0, :].to_value(1 / u.sr)
offset_bins = np.append(psf_table['RAD_LO'], psf_table['RAD_HI'][-1])
phi_bins = np.linspace(0, 2 * np.pi, 100)
# Let's make a nice 2d representation of the radially symmetric PSF
r, phi = np.meshgrid(offset_bins.to_value(u.deg), phi_bins)
# look at a single energy bin
# repeat values for each phi bin
center = 0.5 * (psf_table['ENERG_LO'] + psf_table['ENERG_HI'])
fig = plt.figure(figsize=(15, 5))
plt.suptitle(production)
axs = [fig.add_subplot(1, 3, i, projection='polar') for i in range(1, 4)]
for bin_id, ax in zip([10, 20, 30], axs):
image = np.tile(psf[bin_id], (len(phi_bins) - 1, 1))
ax.set_title(f'PSF @ {center[bin_id]:.2f} TeV')
ax.pcolormesh(phi, r, image)
ax.set_ylim(0, 0.25)
ax.set_aspect(1)
fig.tight_layout()
None # to remove clutter by mpl objects
# Profile
center = 0.5 * (offset_bins[1:] + offset_bins[:-1])
xerr = 0.5 * (offset_bins[1:] - offset_bins[:-1])
for bin_id in [10, 20, 30]:
plt.errorbar(
center.to_value(u.deg),
psf[bin_id],
xerr=xerr.to_value(u.deg),
ls='',
label=f'Energy Bin {bin_id}'
)
#plt.yscale('log')
plt.legend()
plt.xlim(0, 0.25)
plt.ylabel('PSF PDF [sr⁻¹]')
plt.xlabel('Distance from True Source [deg]')
plt.title(production)
plt.grid()
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
Angular resolution[back to top](Table-of-contents) NOTE: MARS and EventDisplay Angular Resolution are plotted as a function of Reco Energy, protopipe ones as a function of True Energy
###Code
# protopipe
ang_res = QTable.read(protopipe_file, hdu='ANGULAR_RESOLUTION')[1:-1]
plt.errorbar(
0.5 * (ang_res['reco_energy_low'] + ang_res['reco_energy_high']).to_value(u.TeV),
ang_res['angular_resolution'].to_value(u.deg),
xerr=0.5 * (ang_res['reco_energy_high'] - ang_res['reco_energy_low']).to_value(u.TeV),
ls='',
label='protopipe',
color='DarkOrange'
)
# ED
y, edges = ED_performance["AngRes"].to_numpy()
yerr = ED_performance["AngRes"].errors()
x = bin_center(10**edges)
xerr = 0.5 * np.diff(10**edges)
plt.errorbar(x,
y,
xerr=xerr,
yerr=yerr,
ls='',
label=ED_label,
color='DarkGreen')
# MARS
y, edges = MARS_performance["AngRes"].to_numpy()
yerr = MARS_performance["AngRes"].errors()
x = bin_center(10**edges)
xerr = 0.5 * np.diff(10**edges)
plt.errorbar(x,
y,
xerr=xerr,
yerr=yerr,
ls='',
label=MARS_label,
color='DarkBlue')
# Style settings
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Reconstructed energy [TeV]")
plt.ylabel("Angular Resolution [deg]")
plt.title(production)
plt.grid(which="both")
plt.legend(loc="best")
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
Energy dispersion[back to top](Table-of-contents)
###Code
from matplotlib.colors import LogNorm
edisp = QTable.read(protopipe_file, hdu='ENERGY_DISPERSION')[0]
e_bins = edisp['ENERG_LO'][1:]
migra_bins = edisp['MIGRA_LO'][1:]
plt.title(production)
plt.pcolormesh(e_bins.to_value(u.TeV),
migra_bins,
edisp['MATRIX'].T[1:-1, 1:-1, 0].T,
cmap='inferno',
norm=LogNorm())
plt.xscale('log')
plt.yscale('log')
plt.grid()
plt.colorbar(label='PDF Value')
plt.xlabel("True energy [TeV]")
plt.ylabel("Reconstructed energy / True energy")
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
Energy resolution[back to top](Table-of-contents)
###Code
# protopipe
bias_resolution = QTable.read(protopipe_file, hdu='ENERGY_BIAS_RESOLUTION')[1:-1]
plt.errorbar(
0.5 * (bias_resolution['reco_energy_low'] + bias_resolution['reco_energy_high']).to_value(u.TeV),
bias_resolution['resolution'],
xerr=0.5 * (bias_resolution['reco_energy_high'] - bias_resolution['reco_energy_low']).to_value(u.TeV),
ls='',
label='protopipe',
color='DarkOrange'
)
plt.xscale('log')
# ED
y, edges = ED_performance["ERes"].to_numpy()
yerr = ED_performance["ERes"].errors()
x = bin_center(10**edges)
xerr = np.diff(10**edges) / 2
plt.errorbar(x,
y,
xerr=xerr,
yerr=yerr,
ls='',
label=ED_label,
color='DarkGreen'
)
# MARS
y, edges = MARS_performance["ERes"].to_numpy()
yerr = MARS_performance["ERes"].errors()
x = bin_center(10**edges)
xerr = np.diff(10**edges) / 2
plt.errorbar(x,
y,
xerr=xerr,
yerr=yerr,
ls='',
label=MARS_label,
color='DarkBlue'
)
# Style settings
plt.xlabel("Reconstructed energy [TeV]")
plt.ylabel("Energy resolution")
plt.grid(which="both")
plt.legend(loc="best")
plt.title(production)
None # to remove clutter by mpl objects
###Output
_____no_output_____
###Markdown
Background rate[back to top](Table-of-contents)
###Code
from pyirf.utils import cone_solid_angle
# protopipe
bg_rate = QTable.read(protopipe_file, hdu='BACKGROUND')[0]
reco_bins = np.append(bg_rate['ENERG_LO'], bg_rate['ENERG_HI'][-1])
# first fov bin, [0, 1] deg
fov_bin = 0
rate_bin = bg_rate['BKG'].T[:, fov_bin]
# interpolate theta cut for given e reco bin
e_center_bg = 0.5 * (bg_rate['ENERG_LO'] + bg_rate['ENERG_HI'])
e_center_theta = 0.5 * (rad_max['ENERG_LO'] + rad_max['ENERG_HI'])
theta_cut = np.interp(e_center_bg, e_center_theta, rad_max['RAD_MAX'].T[:, 0])
# undo normalization
rate_bin *= cone_solid_angle(theta_cut)
rate_bin *= np.diff(reco_bins)
plt.errorbar(
0.5 * (bg_rate['ENERG_LO'] + bg_rate['ENERG_HI']).to_value(u.TeV)[1:-1],
rate_bin.to_value(1 / u.s)[1:-1],
xerr=np.diff(reco_bins).to_value(u.TeV)[1:-1] / 2,
ls='',
label='protopipe',
color='DarkOrange'
)
# ED
y, edges = ED_performance["BGRate"].to_numpy()
yerr = ED_performance["BGRate"].errors()
x = bin_center(10**edges)
xerr = np.diff(10**edges) / 2
plt.errorbar(x,
y,
xerr=xerr,
yerr=yerr,
ls='',
label=ED_label,
color="DarkGreen")
# MARS
y, edges = MARS_performance["BGRate"].to_numpy()
yerr = MARS_performance["BGRate"].errors()
x = bin_center(10**edges)
xerr = np.diff(10**edges) / 2
plt.errorbar(x,
y,
xerr=xerr,
yerr=yerr,
ls='',
label=MARS_label,
color="DarkBlue")
# Style settings
plt.xscale("log")
plt.xlabel("Reconstructed energy [TeV]")
plt.ylabel("Background rate / (s⁻¹ TeV⁻¹) ")
plt.grid(which="both")
plt.legend(loc="best")
plt.title(production)
plt.yscale('log')
None # to remove clutter by mpl objects
###Output
_____no_output_____ |
IMDB_Dataset.ipynb | ###Markdown
###Code
# NOTE: PLEASE MAKE SURE YOU ARE RUNNING THIS IN A PYTHON3 ENVIRONMENT
import tensorflow as tf
print(tf.__version__)
# Eager execution is needed to iterate over the tf.data dataset below.
# It is the default in TF 2.x; the call below is only required (and only available) on TF 1.x.
#!pip install tensorflow==2.0.0-beta0
if not tf.executing_eagerly():
    tf.enable_eager_execution()
# !pip install -q tensorflow-datasets
import tensorflow_datasets as tfds
imdb, info = tfds.load("imdb_reviews", with_info=True, as_supervised=True)
import numpy as np
train_data, test_data = imdb['train'], imdb['test']
training_sentences = []
training_labels = []
testing_sentences = []
testing_labels = []
# str(s.numpy()) is needed in Python3 instead of just s.numpy()
for s,l in train_data:
training_sentences.append(str(s.numpy()))
training_labels.append(l.numpy())
for s,l in test_data:
testing_sentences.append(str(s.numpy()))
testing_labels.append(l.numpy())
training_labels_final = np.array(training_labels)
testing_labels_final = np.array(testing_labels)
testing_labels
vocab_size = 10000
embedding_dim = 16
max_length = 120
trunc_type='post'
oov_tok = "<OOV>"
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(training_sentences)
padded = pad_sequences(sequences,maxlen=max_length, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences,maxlen=max_length)
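# Quick sanity check (illustrative): both splits should now have shape (num_reviews, max_length)
print(padded.shape, testing_padded.shape)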
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
print(decode_review(padded[1]))
print(training_sentences[1])
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
#tf.keras.layers.Flatten(),
tf.keras.layers.Dense(6, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 10
model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))
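# Quick hold-out evaluation (illustrative; exact numbers vary between runs)
loss, accuracy = model.evaluate(testing_padded, testing_labels_final)
print('Test loss:', loss, 'Test accuracy:', accuracy)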
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
import io
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word_num in range(1, vocab_size):
word = reverse_word_index[word_num]
embeddings = weights[word_num]
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
try:
from google.colab import files
except ImportError:
pass
else:
files.download('vecs.tsv')
files.download('meta.tsv')
sentence = "I really think this is amazing. honest."
sequence = tokenizer.texts_to_sequences([sentence])  # wrap in a list so the whole sentence is encoded, not its individual characters
print(sequence)
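# A minimal scoring sketch (assuming the model trained above): pad the encoded sentence to
# max_length and predict; values near 1 indicate positive sentiment, near 0 negative.
padded_sequence = pad_sequences(sequence, maxlen=max_length, truncating=trunc_type)
print(model.predict(padded_sequence))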
###Output
_____no_output_____
###Markdown
New Section
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____ |
DataAnalysisIntroductionLab1.ipynb | ###Markdown
Data Analysis with Python IntroductionWelcome!In this section, you will learn how to approach data acquisition in various ways, and obtain necessary insights from a dataset. By the end of this lab, you will successfully load the data into Jupyter Notebook, and gain some fundamental insights via Pandas Library. Table of Contents Data Acquisition Basic Insight of DatasetEstimated Time Needed: 10 min Data AcquisitionThere are various formats for a dataset, .csv, .json, .xlsx etc. The dataset can be stored in different places, on your local machine or sometimes online.In this section, you will learn how to load a dataset into our Jupyter Notebook.In our case, the Automobile Dataset is an online source, and it is in CSV (comma separated value) format. Let's use this dataset as an example to practice data reading. data source: https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data data type: csvThe Pandas Library is a useful tool that enables us to read various datasets into a data frame; our Jupyter notebook platforms have a built-in Pandas Library so that all we need to do is import Pandas without installing.
###Code
# import pandas library
import pandas as pd
###Output
_____no_output_____
###Markdown
Read DataWe use the pandas.read_csv() function to read the csv file. In the brackets, we put the file path along with a quotation mark, so that pandas will read the file into a data frame from that address. The file path can be either a URL or your local file address.Because the data does not include headers, we can add the argument header = None inside the read_csv() method, so that pandas will not automatically set the first row as a header.You can also assign the dataset to any variable you create. This dataset is hosted on IBM Cloud Object Storage.
###Code
# Import pandas library
import pandas as pd
# Read the online file by the URL provides above, and assign it to variable "df"
other_path = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/auto.csv"
df = pd.read_csv(other_path, header=None)
###Output
_____no_output_____
###Markdown
After reading the dataset, we can use the dataframe.head(n) method to check the top n rows of the dataframe, where n is an integer. Contrary to dataframe.head(n), dataframe.tail(n) will show you the bottom n rows of the dataframe.
###Code
# show the first 5 rows using dataframe.head() method
print("The first 5 rows of the dataframe")
df.head(5)
###Output
The first 5 rows of the dataframe
###Markdown
Question 1: check the bottom 10 rows of data frame "df".
###Code
# Write your code below and press Shift+Enter to execute
print("The last 10 rows of dataframe \n")
df.tail(10)
###Output
The last 10 rows of dataframe
###Markdown
Question 1 Answer: Run the code below for the solution! Double-click here for the solution.<!-- The answer is below:print("The last 10 rows of the dataframe\n")df.tail(10)--> Add HeadersTake a look at our dataset; pandas automatically sets the header to integers starting from 0.To better describe our data, we can introduce a header; this information is available at: https://archive.ics.uci.edu/ml/datasets/AutomobileThus, we have to add headers manually.Firstly, we create a list "headers" that includes all column names in order.Then, we use dataframe.columns = headers to replace the headers with the list we created.
###Code
# create headers list
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
print("headers\n", headers)
###Output
headers
['symboling', 'normalized-losses', 'make', 'fuel-type', 'aspiration', 'num-of-doors', 'body-style', 'drive-wheels', 'engine-location', 'wheel-base', 'length', 'width', 'height', 'curb-weight', 'engine-type', 'num-of-cylinders', 'engine-size', 'fuel-system', 'bore', 'stroke', 'compression-ratio', 'horsepower', 'peak-rpm', 'city-mpg', 'highway-mpg', 'price']
###Markdown
We replace headers and recheck our data frame
###Code
df.columns = headers
df.head(10)
###Output
_____no_output_____
###Markdown
we can drop missing values along the column "price" as follows
###Code
# Note: dropna() returns a new data frame; assign the result (or pass inplace=True) to keep the change
df.dropna(subset=["price"], axis=0)
###Output
_____no_output_____
###Markdown
Now, we have successfully read the raw dataset and added the correct headers to the data frame. Question 2: Find the names of the columns of the dataframe
###Code
# Write your code below and press Shift+Enter to execute
print(df.columns)
###Output
Index(['symboling', 'normalized-losses', 'make', 'fuel-type', 'aspiration',
'num-of-doors', 'body-style', 'drive-wheels', 'engine-location',
'wheel-base', 'length', 'width', 'height', 'curb-weight', 'engine-type',
'num-of-cylinders', 'engine-size', 'fuel-system', 'bore', 'stroke',
'compression-ratio', 'horsepower', 'peak-rpm', 'city-mpg',
'highway-mpg', 'price'],
dtype='object')
###Markdown
Double-click here for the solution.<!-- The answer is below:print(df.columns)--> Save DatasetCorrespondingly, Pandas enables us to save the dataset to csv by using the dataframe.to_csv() method; you can add the file path and name along with quotation marks in the brackets. For example, if you would like to save the dataframe df as automobile.csv to your local machine, you may use the syntax below:
###Code
df.to_csv("automobile.csv", index=False)
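# Similarly (a sketch; the filename is just an example), the other writers listed in the
# table below can be used, e.g. JSON output:
df.to_json("automobile.json")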
###Output
_____no_output_____
###Markdown
We can also read and save other file formats; we can use functions similar to **`pd.read_csv()`** and **`df.to_csv()`** for other data formats. The functions are listed in the following table: Read/Save Other Data Formats| Data Format | Read | Save || ------------- |:--------------:| ----------------:|| csv | `pd.read_csv()` |`df.to_csv()` || json | `pd.read_json()` |`df.to_json()` || excel | `pd.read_excel()`|`df.to_excel()` || hdf | `pd.read_hdf()` |`df.to_hdf()` || sql | `pd.read_sql()` |`df.to_sql()` || ... | ... | ... | Basic Insight of DatasetAfter reading data into a Pandas dataframe, it is time for us to explore the dataset.There are several ways to obtain essential insights into the data to help us better understand our dataset. Data TypesData has a variety of types.The main types stored in Pandas dataframes are object, float, int, bool and datetime64. In order to better learn about each attribute, it is always good for us to know the data type of each column. In Pandas:
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
returns a Series with the data type of each column.
###Code
# check the data type of data frame "df" by .dtypes
print(df.dtypes)
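# For illustration only (a sketch): a numeric-looking object column can be converted without
# modifying df; type conversion is covered in detail in a later module.
pd.to_numeric(df["price"], errors="coerce").dtype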
###Output
symboling int64
normalized-losses object
make object
fuel-type object
aspiration object
num-of-doors object
body-style object
drive-wheels object
engine-location object
wheel-base float64
length float64
width float64
height float64
curb-weight int64
engine-type object
num-of-cylinders object
engine-size int64
fuel-system object
bore object
stroke object
compression-ratio float64
horsepower object
peak-rpm object
city-mpg int64
highway-mpg int64
price object
dtype: object
###Markdown
As a result, as shown above, it is clear that the data types of "symboling" and "curb-weight" are int64, "normalized-losses" is object, and "wheel-base" is float64, etc.These data types can be changed; we will learn how to accomplish this in a later module. DescribeIf we would like to get a statistical summary of each column, such as the count, column mean value, column standard deviation, etc., we use the describe method:
###Code
# General syntax: dataframe.describe()  (applied to our data frame "df" below)
###Output
_____no_output_____
###Markdown
This method will provide various summary statistics, excluding NaN (Not a Number) values.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
This shows the statistical summary of all numeric-typed (int, float) columns.For example, the attribute "symboling" has 205 counts, the mean value of this column is 0.83, the standard deviation is 1.25, the minimum value is -2, 25th percentile is 0, 50th percentile is 1, 75th percentile is 2, and the maximum value is 3.However, what if we would also like to check all the columns, including those that are of type object?You can add the argument include = "all" inside the brackets. Let's try it again.
###Code
# describe all the columns in "df"
df.describe(include = "all")
###Output
_____no_output_____
###Markdown
Now, it provides the statistical summary of all the columns, including object-typed attributes.We can now see how many unique values there are, the most frequent value, and the frequency of the top value in the object-typed columns.Some values in the table above show as "NaN"; this is because those numbers are not available for a particular column type. Question 3: You can select the columns of a data frame by indicating the name of each column, for example, you can select three columns as follows: dataframe[['column 1', 'column 2', 'column 3']]Where "column" is the name of the column, you can apply the method ".describe()" to get the statistics of those columns as follows: dataframe[['column 1', 'column 2', 'column 3']].describe()Apply the method ".describe()" to the columns 'length' and 'compression-ratio'.
###Code
# Write your code below and press Shift+Enter to execute
df[['length', 'compression-ratio']].describe()
###Output
_____no_output_____
###Markdown
Double-click here for the solution.<!-- The answer is below:df[['length', 'compression-ratio']].describe()--> InfoAnother method you can use to check your dataset is:
###Code
# General syntax: dataframe.info()  (applied to our data frame "df" below)
###Output
_____no_output_____
###Markdown
It provides a concise summary of your DataFrame.
###Code
# look at the info of "df"
df.info()
###Output
_____no_output_____ |
Practical Statistics/Statistics/Confidence Interval/Confidence Intervals - Part I.ipynb | ###Markdown
Confidence Intervals - Part IFirst let's read in the necessary libraries and the dataset. You also have the full and reduced versions of the data available. The reduced version is an example of what you would actually get in practice, as it is the sample, while the full data is an example of everyone in your population.
###Code
import pandas as pd
import numpy as np
np.random.seed(42)
coffee_full = pd.read_csv('coffee_dataset.csv')
coffee_red = coffee_full.sample(200) #this is the only data you might actually get in the real world.
coffee_red.head()
###Output
_____no_output_____
###Markdown
`1.` What is the proportion of coffee drinkers in the sample? What is the proportion of individuals that don't drink coffee?
###Code
do = coffee_red[coffee_red.drinks_coffee == True]['drinks_coffee'].sum()
dont = coffee_red[coffee_red.drinks_coffee == False]['drinks_coffee'].count()
print(do/coffee_red.shape[0], dont/coffee_red.shape[0])
###Output
0.595 0.405
###Markdown
`2.` Of the individuals who drink coffee, what is the average height? Of the individuals who do not drink coffee, what is the average height?
###Code
do = coffee_red[coffee_red.drinks_coffee == True]['height'].mean()
dont = coffee_red[coffee_red.drinks_coffee == False]['height'].mean()
print(do, dont)
coffee_full[coffee_full.drinks_coffee == True]['drinks_coffee'].sum() / coffee_full.shape[0]
###Output
_____no_output_____
###Markdown
`3.` Simulate 200 "new" individuals from your original sample of 200. What are the proportion of coffee drinkers in your bootstrap sample? How about individuals that don't drink coffee?
###Code
sample_200 = coffee_red.sample(200, replace=True)
do = sample_200[sample_200.drinks_coffee == True]['drinks_coffee'].sum()
dont = sample_200[sample_200.drinks_coffee == False]['drinks_coffee'].count()
print(do/sample_200.shape[0], dont/sample_200.shape[0])
###Output
0.65 0.35
###Markdown
`4.` Now simulate your bootstrap sample 10,000 times and take the mean height of the non-coffee drinkers in each sample. Each bootstrap sample should be from the very first sample of 200 data points. Plot the distribution, and pull the values necessary for a 95% confidence interval. What do you notice about the sampling distribution of the mean in this example?
###Code
dos = []
donts = []
for _ in range(0,10000):
sample_200 = coffee_red.sample(200, replace=True)
do = sample_200[sample_200.drinks_coffee == True]['height'].mean()
dont = sample_200[sample_200.drinks_coffee == False]['height'].mean()
dos.append(do)
donts.append(dont)
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(dos)
plt.hist(donts)
plt.legend(['Do','Don`t']);
###Output
_____no_output_____
###Markdown
Confidence Interval (95%)
###Code
np.percentile(dos, 2.5), np.percentile(dos, 97.5)
np.percentile(donts, 2.5), np.percentile(donts, 97.5)
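# The percentile interval above can be cross-checked against a normal approximation
# (illustrative): for a roughly normal bootstrap distribution the two agree closely.
print(np.mean(donts) - 1.96 * np.std(donts), np.mean(donts) + 1.96 * np.std(donts))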
###Output
_____no_output_____
###Markdown
`5.` Did your interval capture the actual average height of non-coffee drinkers in the population? Look at the average in the population and the two bounds provided by your 95% confidence interval, and then answer the final quiz question below.
###Code
do_f = coffee_full[coffee_full.drinks_coffee == True]['height'].mean()
dont_f = coffee_full[coffee_full.drinks_coffee == False]['height'].mean()
print(do_f, dont_f)
###Output
68.4002102555 66.4434077621
|
legacy_tutorials/aqua/optimization/max_cut_and_tsp.ipynb | ###Markdown
 _*Qiskit Aqua: Experimenting with Max-Cut problem and Traveling Salesman problem with variational quantum eigensolver*_ The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.*** ContributorsAntonio Mezzacapo[1], Jay Gambetta[1], Kristan Temme[1], Ramis Movassagh[1], Albert Frisch[1], Takashi Imamichi[1], Giacomo Nannicini[1], Richard Chen[1], Marco Pistoia[1], Stephen Wood[1] Affiliation- [1]IBMQ IntroductionMany problems in quantitative fields such as finance and engineering are optimization problems. Optimization problems lie at the core of complex decision-making and definition of strategies. Optimization (or combinatorial optimization) means searching for an optimal solution in a finite or countably infinite set of potential solutions. Optimality is defined with respect to some criterion function, which is to be minimized or maximized. This is typically called cost function or objective function. **Typical optimization problems**Minimization: cost, distance, length of a traversal, weight, processing time, material, energy consumption, number of objectsMaximization: profit, value, output, return, yield, utility, efficiency, capacity, number of objects We consider here max-cut problems of practical interest in many fields, and show how they can be mapped onto quantum computers. Weighted Max-CutMax-Cut is an NP-complete problem, with applications in clustering, network science, and statistical physics. To grasp how practical applications are mapped into given Max-Cut instances, consider a system of many people that can interact and influence each other. Individuals can be represented by vertices of a graph, and their interactions seen as pairwise connections between vertices of the graph, or edges. With this representation in mind, it is easy to model typical marketing problems. For example, suppose that it is assumed that individuals will influence each other's buying decisions, and knowledge is given about how strong they will influence each other. The influence can be modeled by weights assigned on each edge of the graph. It is possible then to predict the outcome of a marketing strategy in which products are offered for free to some individuals, and then ask which is the optimal subset of individuals that should get the free products, in order to maximize revenues.The formal definition of this problem is the following:Consider an $n$-node undirected graph *G = (V, E)* where *|V| = n* with edge weights $w_{ij}>0$, $w_{ij}=w_{ji}$, for $(i, j)\in E$. A cut is defined as a partition of the original set V into two subsets. The cost function to be optimized is in this case the sum of weights of edges connecting points in the two different subsets, *crossing* the cut. By assigning $x_i=0$ or $x_i=1$ to each node $i$, one tries to maximize the global profit function (here and in the following summations run over indices 0,1,...n-1)$$\tilde{C}(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j).$$In our simple marketing model, $w_{ij}$ represents the probability that the person $j$ will buy a product after $i$ gets a free one. Note that the weights $w_{ij}$ can in principle be greater than $1$, corresponding to the case where the individual $j$ will buy more than one product. Maximizing the total buying probability corresponds to maximizing the total future revenues. In the case where the profit probability will be greater than the cost of the initial free samples, the strategy is a convenient one. 
An extension to this model has the nodes themselves carry weights, which can be regarded, in our marketing model, as the likelihood that a person granted with a free sample of the product will buy it again in the future. With this additional information in our model, the objective function to maximize becomes $$C(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j)+\sum_i w_i x_i. $$ In order to find a solution to this problem on a quantum computer, one needs first to map it to an Ising Hamiltonian. This can be done with the assignment $x_i\rightarrow (1-Z_i)/2$, where $Z_i$ is the Pauli Z operator that has eigenvalues $\pm 1$. Doing this we find that $$C(\textbf{Z}) = \sum_{i,j} \frac{w_{ij}}{4} (1-Z_i)(1+Z_j) + \sum_i \frac{w_i}{2} (1-Z_i) = -\frac{1}{2}\left( \sum_{i<j} w_{ij} Z_i Z_j +\sum_i w_i Z_i\right)+\mathrm{const},$$where const = $\sum_{i<j}w_{ij}/2+\sum_i w_i/2 $. In other terms, the weighted Max-Cut problem is equivalent to minimizing the Ising Hamiltonian $$ H = \sum_i w_i Z_i + \sum_{i<j} w_{ij} Z_iZ_j.$$Aqua can generate the Ising Hamiltonian for the first profit function $\tilde{C}$. Approximate Universal Quantum Computing for Optimization ProblemsThere has been a considerable amount of interest in recent times about the use of quantum computers to find a solution to combinatorial problems. It is important to say that, given the classical nature of combinatorial problems, exponential speedup in using quantum computers compared to the best classical algorithms is not guaranteed. However, due to the nature and importance of the target problems, it is worth investigating heuristic approaches on a quantum computer that could indeed speed up some problem instances. Here we demonstrate an approach that is based on the Quantum Approximate Optimization Algorithm by Farhi, Goldstone, and Gutman (2014). We frame the algorithm in the context of *approximate quantum computing*, given its heuristic nature. The Algorithm works as follows:1. Choose the $w_i$ and $w_{ij}$ in the target Ising problem. In principle, even higher powers of Z are allowed.2. Choose the depth of the quantum circuit $m$. Note that the depth can be modified adaptively.3. Choose a set of controls $\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$, built using a quantum circuit made of C-Phase gates and single-qubit Y rotations, parameterized by the components of $\boldsymbol\theta$. 4. Evaluate $C(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H|~\psi(\boldsymbol\theta)\rangle = \sum_i w_i \langle\psi(\boldsymbol\theta)~|Z_i|~\psi(\boldsymbol\theta)\rangle+ \sum_{i<j} w_{ij} \langle\psi(\boldsymbol\theta)~|Z_iZ_j|~\psi(\boldsymbol\theta)\rangle$ by sampling the outcome of the circuit in the Z-basis and adding the expectation values of the individual Ising terms together. In general, different control points around $\boldsymbol\theta$ have to be estimated, depending on the classical optimizer chosen. 5. Use a classical optimizer to choose a new set of controls.6. Continue until $C(\boldsymbol\theta)$ reaches a minimum, close enough to the solution $\boldsymbol\theta^*$.7. Use the last $\boldsymbol\theta$ to generate a final set of samples from the distribution $|\langle z_i~|\psi(\boldsymbol\theta)\rangle|^2\;\forall i$ to obtain the answer. It is our belief the difficulty of finding good heuristic algorithms will come down to the choice of an appropriate trial wavefunction. 
For example, one could consider a trial function whose entanglement best aligns with the target problem, or simply make the amount of entanglement a variable. In this tutorial, we will consider a simple trial function of the form$$|\psi(\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$where $U_\mathrm{entangler}$ is a collection of C-Phase gates (fully entangling gates), and $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, where $n$ is the number of qubits and $m$ is the depth of the quantum circuit. The motivation for this choice is that for these classical problems this choice allows us to search over the space of quantum states that have only real coefficients, still exploiting the entanglement to potentially converge faster to the solution.One advantage of using this sampling method compared to adiabatic approaches is that the target Ising Hamiltonian does not have to be implemented directly on hardware, allowing this algorithm not to be limited to the connectivity of the device. Furthermore, higher-order terms in the cost function, such as $Z_iZ_jZ_k$, can also be sampled efficiently, whereas in adiabatic or annealing approaches they are generally impractical to deal with. References:- A. Lucas, Frontiers in Physics 2, 5 (2014)- E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028 (2014)- D. Wecker, M. B. Hastings, M. Troyer Phys. Rev. A 94, 022309 (2016)- E. Farhi, J. Goldstone, S. Gutmann, H. Neven e-print arXiv 1703.06199 (2017)
###Code
# useful additional packages
import matplotlib.pyplot as plt
import matplotlib.axes as axes
%matplotlib inline
import numpy as np
import networkx as nx
from qiskit import BasicAer
from qiskit.tools.visualization import plot_histogram
from qiskit.circuit.library import TwoLocal
from qiskit.optimization.applications.ising import max_cut, tsp
from qiskit.aqua.algorithms import VQE, NumPyMinimumEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua import QuantumInstance
from qiskit.optimization.applications.ising.common import sample_most_likely
# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
###Output
_____no_output_____
###Markdown
[Optional] Setup token to run the experiment on a real deviceIf you would like to run the experiment on a real device, you need to setup your account first.Note: If you do not store your token yet, use `IBMQ.save_account('MY_API_TOKEN')` to store it first.
###Code
from qiskit import IBMQ
# provider = IBMQ.load_account()
###Output
_____no_output_____
###Markdown
Max-Cut problem
###Code
# Generating a graph of 4 nodes
n=4 # Number of nodes in graph
G=nx.Graph()
G.add_nodes_from(np.arange(0,n,1))
elist=[(0,1,1.0),(0,2,1.0),(0,3,1.0),(1,2,1.0),(2,3,1.0)]
# tuple is (i,j,weight) where (i,j) is the edge
G.add_weighted_edges_from(elist)
colors = ['r' for node in G.nodes()]
pos = nx.spring_layout(G)
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
# Computing the weight matrix from the random graph
w = np.zeros([n,n])
for i in range(n):
for j in range(n):
temp = G.get_edge_data(i,j,default=0)
if temp != 0:
w[i,j] = temp['weight']
print(w)
###Output
[[0. 1. 1. 1.]
[1. 0. 1. 0.]
[1. 1. 0. 1.]
[1. 0. 1. 0.]]
###Markdown
Brute force approachTry all possible $2^n$ combinations. For $n = 4$, as in this example, one deals with only 16 combinations, but for n = 1000, one has 1.071509e+301 combinations, which is impractical to deal with using a brute force approach.
###Code
best_cost_brute = 0
for b in range(2**n):
x = [int(t) for t in reversed(list(bin(b)[2:].zfill(n)))]
cost = 0
for i in range(n):
for j in range(n):
cost = cost + w[i,j]*x[i]*(1-x[j])
if best_cost_brute < cost:
best_cost_brute = cost
xbest_brute = x
print('case = ' + str(x)+ ' cost = ' + str(cost))
colors = ['r' if xbest_brute[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, pos=pos)
print('\nBest solution = ' + str(xbest_brute) + ' cost = ' + str(best_cost_brute))
###Output
case = [0, 0, 0, 0] cost = 0.0
case = [1, 0, 0, 0] cost = 3.0
case = [0, 1, 0, 0] cost = 2.0
case = [1, 1, 0, 0] cost = 3.0
case = [0, 0, 1, 0] cost = 3.0
case = [1, 0, 1, 0] cost = 4.0
case = [0, 1, 1, 0] cost = 3.0
case = [1, 1, 1, 0] cost = 2.0
case = [0, 0, 0, 1] cost = 2.0
case = [1, 0, 0, 1] cost = 3.0
case = [0, 1, 0, 1] cost = 4.0
case = [1, 1, 0, 1] cost = 3.0
case = [0, 0, 1, 1] cost = 3.0
case = [1, 0, 1, 1] cost = 2.0
case = [0, 1, 1, 1] cost = 3.0
case = [1, 1, 1, 1] cost = 0.0
Best solution = [1, 0, 1, 0] cost = 4.0
###Markdown
Mapping to the Ising problem
###Code
qubitOp, offset = max_cut.get_operator(w)
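# For inspection (illustrative): the Max-Cut instance is encoded in a 4-qubit Ising operator
print('Number of qubits:', qubitOp.num_qubits, ' offset:', offset)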
###Output
_____no_output_____
###Markdown
[Optional] Using DOcplex for mapping to the Ising problemUsing ```docplex.get_operator``` is a different way to create an Ising Hamiltonian of Max-Cut. ```docplex.get_operator``` can create a corresponding Ising Hamiltonian from an optimization model of Max-Cut. An example of using ```docplex.get_operator``` is shown below.
###Code
from docplex.mp.model import Model
from qiskit.optimization.applications.ising import docplex
# Create an instance of a model and variables.
mdl = Model(name='max_cut')
x = {i: mdl.binary_var(name='x_{0}'.format(i)) for i in range(n)}
# Object function
max_cut_func = mdl.sum(w[i,j]* x[i] * ( 1 - x[j] ) for i in range(n) for j in range(n))
mdl.maximize(max_cut_func)
# No constraints for Max-Cut problems.
qubitOp_docplex, offset_docplex = docplex.get_operator(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = NumPyMinimumEigensolver(qubitOp)
result = ee.run()
x = sample_most_likely(result.eigenstate)
print('energy:', result.eigenvalue.real)
print('max-cut objective:', result.eigenvalue.real + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
###Output
energy: -1.5
max-cut objective: -4.0
solution: [0 1 0 1]
solution objective: 4.0
###Markdown
Running it on quantum computerWe run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = TwoLocal(qubitOp.num_qubits, 'ry', 'cz', reps=5, entanglement='linear')
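# For inspection (illustrative): the variational form is an ordinary parameterized circuit
print('Number of variational parameters:', ry.num_parameters)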
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
x = sample_most_likely(result.eigenstate)
print('energy:', result.eigenvalue.real)
print('time:', result.optimizer_time)
print('max-cut objective:', result.eigenvalue.real + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
# run quantum algorithm with shots
seed = 10598
spsa = SPSA(max_trials=300)
ry = TwoLocal(qubitOp.num_qubits, 'ry', 'cz', reps=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
x = sample_most_likely(result.eigenstate)
print('energy:', result.eigenvalue.real)
print('time:', result.optimizer_time)
print('max-cut objective:', result.eigenvalue.real + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
plot_histogram(result.eigenstate)
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
###Output
energy: -1.4892578125
time: 9.478685140609741
max-cut objective: -3.9892578125
solution: [0 1 0 1]
solution objective: 4.0
###Markdown
[Optional] Checking that the full Hamiltonian made by ```docplex.get_operator``` gives the right cost
###Code
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = NumPyMinimumEigensolver(qubitOp_docplex)
result = ee.run()
x = sample_most_likely(result.eigenstate)
print('energy:', result.eigenvalue.real)
print('max-cut objective:', result.eigenvalue.real + offset_docplex)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
###Output
energy: -1.5
max-cut objective: -4.0
solution: [0 1 0 1]
solution objective: 4.0
###Markdown
Traveling Salesman ProblemIn addition to being a notorious NP-complete problem that has drawn the attention of computer scientists and mathematicians for over two centuries, the Traveling Salesman Problem (TSP) has important bearings on finance and marketing, as its name suggests. Colloquially speaking, the traveling salesman is a person that goes from city to city to sell merchandise. The objective in this case is to find the shortest path that would enable the salesman to visit all the cities and return to his hometown, i.e. the city where he started traveling. By doing this, the salesman gets to maximize potential sales in the least amount of time. The problem derives its importance from its "hardness" and ubiquitous equivalence to other relevant combinatorial optimization problems that arise in practice. The mathematical formulation with some early analysis was proposed by W.R. Hamilton in the early 19th century. Mathematically the problem is, as in the case of Max-Cut, best abstracted in terms of graphs. The TSP on the nodes of a graph asks for the shortest *Hamiltonian cycle* that can be taken through each of the nodes. A Hamiltonian cycle is a closed path that uses every vertex of a graph once. The general solution is unknown and an algorithm that finds it efficiently (e.g., in polynomial time) is not expected to exist.Find the shortest Hamiltonian cycle in a graph $G=(V,E)$ with $n=|V|$ nodes and distances $w_{ij}$ (distance from vertex $i$ to vertex $j$). A Hamiltonian cycle is described by $N^2$ variables $x_{i,p}$, where $i$ represents the node and $p$ represents its order in a prospective cycle. The decision variable takes the value 1 if the solution occurs at node $i$ at time order $p$. We require that every node appears exactly once in the cycle, and that each position in the cycle is occupied by exactly one node. This amounts to the two constraints (here and in the following, whenever not specified, the summands run over 0,1,...N-1)$$\sum_{i} x_{i,p} = 1 ~~\forall p$$$$\sum_{p} x_{i,p} = 1 ~~\forall i.$$For nodes in our prospective ordering, if $x_{i,p}$ and $x_{j,p+1}$ are both 1, then there should be an energy penalty if $(i,j) \notin E$ (not connected in the graph). The form of this penalty is $$\sum_{i,j\notin E}\sum_{p} x_{i,p}x_{j,p+1}>0,$$ where the boundary condition of the Hamiltonian cycles, $(p=N)\equiv (p=0)$, is assumed. However, here a fully connected graph is assumed, so this term is not included. The distance that needs to be minimized is $$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}.$$Putting this all together in a single objective function to be minimized, we get the following:$$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}+ A\sum_p\left(1- \sum_i x_{i,p}\right)^2+A\sum_i\left(1- \sum_p x_{i,p}\right)^2,$$where $A$ is a free parameter. One needs to ensure that $A$ is large enough so that these constraints are respected. One way to do this is to choose $A$ such that $A > \mathrm{max}(w_{ij})$.Once again, it is easy to map the problem in this form to a quantum computer, and the solution will be found by minimizing an Ising Hamiltonian.
###Code
# Generating a graph of 3 nodes
n = 3
num_qubits = n ** 2
ins = tsp.random_tsp(n)
G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
colors = ['r' for node in G.nodes()]
pos = {k: v for k, v in enumerate(ins.coord)}
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
print('distance\n', ins.w)
###Output
distance
[[ 0. 48. 18.]
[48. 0. 57.]
[18. 57. 0.]]
###Markdown
Brute force approach
###Code
from itertools import permutations
def brute_force_tsp(w, N):
a=list(permutations(range(1,N)))
last_best_distance = 1e10
for i in a:
distance = 0
pre_j = 0
for j in i:
distance = distance + w[j,pre_j]
pre_j = j
distance = distance + w[pre_j,0]
order = (0,) + i
if distance < last_best_distance:
best_order = order
last_best_distance = distance
print('order = ' + str(order) + ' Distance = ' + str(distance))
return last_best_distance, best_order
best_distance, best_order = brute_force_tsp(ins.w, ins.dim)
print('Best order from brute force = ' + str(best_order) + ' with total distance = ' + str(best_distance))
def draw_tsp_solution(G, order, colors, pos):
G2 = G.copy()
n = len(order)
for i in range(n):
j = (i + 1) % n
G2.add_edge(order[i], order[j])
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G2, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
draw_tsp_solution(G, best_order, colors, pos)
###Output
order = (0, 1, 2) Distance = 123.0
Best order from brute force = (0, 1, 2) with total distance = 123.0
###Markdown
Mapping to the Ising problem
###Code
qubitOp, offset = tsp.get_operator(ins)
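# For inspection (illustrative): the TSP encoding uses n^2 = 9 qubits (one per node/position pair)
print('Number of qubits:', qubitOp.num_qubits, ' offset:', offset)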
###Output
_____no_output_____
###Markdown
[Optional] Using DOcplex for mapping to the Ising problemUsing ```docplex.get_operator``` is a different way to create an Ising Hamiltonian of TSP. ```docplex.get_operator``` can create a corresponding Ising Hamiltonian from an optimization model of TSP. An example of using ```docplex.get_operator``` is shown below.
###Code
# Create an instance of a model and variables
mdl = Model(name='tsp')
x = {(i,p): mdl.binary_var(name='x_{0}_{1}'.format(i,p)) for i in range(n) for p in range(n)}
# Object function
tsp_func = mdl.sum(ins.w[i,j] * x[(i,p)] * x[(j,(p+1)%n)] for i in range(n) for j in range(n) for p in range(n))
mdl.minimize(tsp_func)
# Constrains
for i in range(n):
mdl.add_constraint(mdl.sum(x[(i,p)] for p in range(n)) == 1)
for p in range(n):
mdl.add_constraint(mdl.sum(x[(i,p)] for i in range(n)) == 1)
qubitOp_docplex, offset_docplex = docplex.get_operator(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = NumPyMinimumEigensolver(qubitOp)
result = ee.run()
print('energy:', result.eigenvalue.real)
print('tsp objective:', result.eigenvalue.real + offset)
x = sample_most_likely(result.eigenstate)
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
###Output
energy: -600061.5
tsp objective: 123.0
feasible: True
solution: [0, 1, 2]
solution objective: 123.0
###Markdown
Running it on quantum computerWe run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = TwoLocal(qubitOp.num_qubits, 'ry', 'cz', reps=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
print('energy:', result.eigenvalue.real)
print('time:', result.optimizer_time)
#print('tsp objective:', result.eigenvalue.real + offset)
x = sample_most_likely(result.eigenstate)
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
# run quantum algorithm with shots
seed = 10598
spsa = SPSA(max_trials=300)
ry = TwoLocal(qubitOp.num_qubits, 'ry', 'cz', reps=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
print('energy:', result.eigenvalue.real)
print('time:', result.optimizer_time)
#print('tsp objective:', result.eigenvalue.real + offset)
x = sample_most_likely(result.eigenstate)
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
plot_histogram(result.eigenstate)
draw_tsp_solution(G, z, colors, pos)
###Output
energy: -530718.0126953125
time: 134.3893370628357
feasible: True
solution: [1, 2, 0]
solution objective: 123.0
###Markdown
[Optional] Checking that the full Hamiltonian made by ```docplex.get_operator``` gives the right cost
###Code
ee = NumPyMinimumEigensolver(qubitOp_docplex)
result = ee.run()
print('energy:', result.eigenvalue.real)
print('tsp objective:', result.eigenvalue.real + offset_docplex)
x = sample_most_likely(result.eigenstate)
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
 _*Qiskit Aqua: Experimenting with Max-Cut problem and Traveling Salesman problem with variational quantum eigensolver*_ The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.*** ContributorsAntonio Mezzacapo[1], Jay Gambetta[1], Kristan Temme[1], Ramis Movassagh[1], Albert Frisch[1], Takashi Imamichi[1], Giacomo Nannicini[1], Richard Chen[1], Marco Pistoia[1], Stephen Wood[1] Affiliation- [1]IBMQ IntroductionMany problems in quantitative fields such as finance and engineering are optimization problems. Optimization problems lie at the core of complex decision-making and definition of strategies. Optimization (or combinatorial optimization) means searching for an optimal solution in a finite or countably infinite set of potential solutions. Optimality is defined with respect to some criterion function, which is to be minimized or maximized. This is typically called cost function or objective function. **Typical optimization problems**Minimization: cost, distance, length of a traversal, weight, processing time, material, energy consumption, number of objectsMaximization: profit, value, output, return, yield, utility, efficiency, capacity, number of objects We consider here max-cut problems of practical interest in many fields, and show how they can be mapped onto quantum computers. Weighted Max-CutMax-Cut is an NP-complete problem, with applications in clustering, network science, and statistical physics. To grasp how practical applications are mapped into given Max-Cut instances, consider a system of many people that can interact and influence each other. Individuals can be represented by vertices of a graph, and their interactions seen as pairwise connections between vertices of the graph, or edges. With this representation in mind, it is easy to model typical marketing problems. For example, suppose that it is assumed that individuals will influence each other's buying decisions, and knowledge is given about how strong they will influence each other. The influence can be modeled by weights assigned on each edge of the graph. It is possible then to predict the outcome of a marketing strategy in which products are offered for free to some individuals, and then ask which is the optimal subset of individuals that should get the free products, in order to maximize revenues.The formal definition of this problem is the following:Consider an $n$-node undirected graph *G = (V, E)* where *|V| = n* with edge weights $w_{ij}>0$, $w_{ij}=w_{ji}$, for $(i, j)\in E$. A cut is defined as a partition of the original set V into two subsets. The cost function to be optimized is in this case the sum of weights of edges connecting points in the two different subsets, *crossing* the cut. By assigning $x_i=0$ or $x_i=1$ to each node $i$, one tries to maximize the global profit function (here and in the following summations run over indices 0,1,...n-1)$$\tilde{C}(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j).$$In our simple marketing model, $w_{ij}$ represents the probability that the person $j$ will buy a product after $i$ gets a free one. Note that the weights $w_{ij}$ can in principle be greater than $1$, corresponding to the case where the individual $j$ will buy more than one product. Maximizing the total buying probability corresponds to maximizing the total future revenues. In the case where the profit probability will be greater than the cost of the initial free samples, the strategy is a convenient one. 
An extension to this model has the nodes themselves carry weights, which can be regarded, in our marketing model, as the likelihood that a person granted with a free sample of the product will buy it again in the future. With this additional information in our model, the objective function to maximize becomes $$C(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j)+\sum_i w_i x_i. $$ In order to find a solution to this problem on a quantum computer, one needs first to map it to an Ising Hamiltonian. This can be done with the assignment $x_i\rightarrow (1-Z_i)/2$, where $Z_i$ is the Pauli Z operator that has eigenvalues $\pm 1$. Doing this we find that $$C(\textbf{Z}) = \sum_{i,j} \frac{w_{ij}}{4} (1-Z_i)(1+Z_j) + \sum_i \frac{w_i}{2} (1-Z_i) = -\frac{1}{2}\left( \sum_{i<j} w_{ij} Z_i Z_j +\sum_i w_i Z_i\right)+\mathrm{const},$$where const = $\sum_{i<j}w_{ij}/2+\sum_i w_i/2 $. In other terms, the weighted Max-Cut problem is equivalent to minimizing the Ising Hamiltonian $$ H = \sum_i w_i Z_i + \sum_{i<j} w_{ij} Z_iZ_j.$$Aqua can generate the Ising Hamiltonian for the first profit function $\tilde{C}$. Approximate Universal Quantum Computing for Optimization ProblemsThere has been a considerable amount of interest in recent times about the use of quantum computers to find a solution to combinatorial problems. It is important to say that, given the classical nature of combinatorial problems, exponential speedup in using quantum computers compared to the best classical algorithms is not guaranteed. However, due to the nature and importance of the target problems, it is worth investigating heuristic approaches on a quantum computer that could indeed speed up some problem instances. Here we demonstrate an approach that is based on the Quantum Approximate Optimization Algorithm by Farhi, Goldstone, and Gutman (2014). We frame the algorithm in the context of *approximate quantum computing*, given its heuristic nature. The Algorithm works as follows:1. Choose the $w_i$ and $w_{ij}$ in the target Ising problem. In principle, even higher powers of Z are allowed.2. Choose the depth of the quantum circuit $m$. Note that the depth can be modified adaptively.3. Choose a set of controls $\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$, built using a quantum circuit made of C-Phase gates and single-qubit Y rotations, parameterized by the components of $\boldsymbol\theta$. 4. Evaluate $C(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H|~\psi(\boldsymbol\theta)\rangle = \sum_i w_i \langle\psi(\boldsymbol\theta)~|Z_i|~\psi(\boldsymbol\theta)\rangle+ \sum_{i<j} w_{ij} \langle\psi(\boldsymbol\theta)~|Z_iZ_j|~\psi(\boldsymbol\theta)\rangle$ by sampling the outcome of the circuit in the Z-basis and adding the expectation values of the individual Ising terms together. In general, different control points around $\boldsymbol\theta$ have to be estimated, depending on the classical optimizer chosen. 5. Use a classical optimizer to choose a new set of controls.6. Continue until $C(\boldsymbol\theta)$ reaches a minimum, close enough to the solution $\boldsymbol\theta^*$.7. Use the last $\boldsymbol\theta$ to generate a final set of samples from the distribution $|\langle z_i~|\psi(\boldsymbol\theta)\rangle|^2\;\forall i$ to obtain the answer. It is our belief the difficulty of finding good heuristic algorithms will come down to the choice of an appropriate trial wavefunction. 
For example, one could consider a trial function whose entanglement best aligns with the target problem, or simply make the amount of entanglement a variable. In this tutorial, we will consider a simple trial function of the form$$|\psi(\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$where $U_\mathrm{entangler}$ is a collection of C-Phase gates (fully entangling gates), and $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, where $n$ is the number of qubits and $m$ is the depth of the quantum circuit. The motivation for this choice is that for these classical problems this choice allows us to search over the space of quantum states that have only real coefficients, still exploiting the entanglement to potentially converge faster to the solution.One advantage of using this sampling method compared to adiabatic approaches is that the target Ising Hamiltonian does not have to be implemented directly on hardware, allowing this algorithm not to be limited to the connectivity of the device. Furthermore, higher-order terms in the cost function, such as $Z_iZ_jZ_k$, can also be sampled efficiently, whereas in adiabatic or annealing approaches they are generally impractical to deal with. References:- A. Lucas, Frontiers in Physics 2, 5 (2014)- E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028 (2014)- D. Wecker, M. B. Hastings, M. Troyer Phys. Rev. A 94, 022309 (2016)- E. Farhi, J. Goldstone, S. Gutmann, H. Neven e-print arXiv 1703.06199 (2017)
###Code
# useful additional packages
import matplotlib.pyplot as plt
import matplotlib.axes as axes
%matplotlib inline
import numpy as np
import networkx as nx
from qiskit import BasicAer
from qiskit.tools.visualization import plot_histogram
from qiskit.optimization.ising import max_cut, tsp
from qiskit.aqua.algorithms import VQE, ExactEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua.components.variational_forms import RY
from qiskit.aqua import QuantumInstance
from qiskit.optimization.ising.common import sample_most_likely
# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
###Output
_____no_output_____
###Markdown
[Optional] Setup token to run the experiment on a real deviceIf you would like to run the experiment on a real device, you need to setup your account first.Note: If you do not store your token yet, use `IBMQ.save_account('MY_API_TOKEN')` to store it first.
###Code
from qiskit import IBMQ
# provider = IBMQ.load_account()
###Output
_____no_output_____
###Markdown
Max-Cut problem
###Code
# Generating a graph of 4 nodes
n=4 # Number of nodes in graph
G=nx.Graph()
G.add_nodes_from(np.arange(0,n,1))
elist=[(0,1,1.0),(0,2,1.0),(0,3,1.0),(1,2,1.0),(2,3,1.0)]
# tuple is (i,j,weight) where (i,j) is the edge
G.add_weighted_edges_from(elist)
colors = ['r' for node in G.nodes()]
pos = nx.spring_layout(G)
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
# Computing the weight matrix from the random graph
w = np.zeros([n,n])
for i in range(n):
for j in range(n):
temp = G.get_edge_data(i,j,default=0)
if temp != 0:
w[i,j] = temp['weight']
print(w)
###Output
[[0. 1. 1. 1.]
[1. 0. 1. 0.]
[1. 1. 0. 1.]
[1. 0. 1. 0.]]
###Markdown
Brute force approachTry all possible $2^n$ combinations. For $n = 4$, as in this example, one deals with only 16 combinations, but for n = 1000, one has 1.071509e+301 combinations, which is impractical to deal with using a brute force approach.
###Code
best_cost_brute = 0
for b in range(2**n):
x = [int(t) for t in reversed(list(bin(b)[2:].zfill(n)))]
cost = 0
for i in range(n):
for j in range(n):
cost = cost + w[i,j]*x[i]*(1-x[j])
if best_cost_brute < cost:
best_cost_brute = cost
xbest_brute = x
print('case = ' + str(x)+ ' cost = ' + str(cost))
colors = ['r' if xbest_brute[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, pos=pos)
print('\nBest solution = ' + str(xbest_brute) + ' cost = ' + str(best_cost_brute))
###Output
case = [0, 0, 0, 0] cost = 0.0
case = [1, 0, 0, 0] cost = 3.0
case = [0, 1, 0, 0] cost = 2.0
case = [1, 1, 0, 0] cost = 3.0
case = [0, 0, 1, 0] cost = 3.0
case = [1, 0, 1, 0] cost = 4.0
case = [0, 1, 1, 0] cost = 3.0
case = [1, 1, 1, 0] cost = 2.0
case = [0, 0, 0, 1] cost = 2.0
case = [1, 0, 0, 1] cost = 3.0
case = [0, 1, 0, 1] cost = 4.0
case = [1, 1, 0, 1] cost = 3.0
case = [0, 0, 1, 1] cost = 3.0
case = [1, 0, 1, 1] cost = 2.0
case = [0, 1, 1, 1] cost = 3.0
case = [1, 1, 1, 1] cost = 0.0
Best solution = [1, 0, 1, 0] cost = 4.0
###Markdown
Mapping to the Ising problem
###Code
qubitOp, offset = max_cut.get_operator(w)
###Output
_____no_output_____
###Markdown
[Optional] Using DOcplex for mapping to the Ising problemUsing ```docplex.get_operator``` is a different way to create an Ising Hamiltonian for Max-Cut: it builds the corresponding Ising Hamiltonian from an optimization model of Max-Cut. An example of using ```docplex.get_operator``` is shown below.
###Code
from docplex.mp.model import Model
from qiskit.optimization.ising import docplex
# Create an instance of a model and variables.
mdl = Model(name='max_cut')
x = {i: mdl.binary_var(name='x_{0}'.format(i)) for i in range(n)}
# Object function
max_cut_func = mdl.sum(w[i,j]* x[i] * ( 1 - x[j] ) for i in range(n) for j in range(n))
mdl.maximize(max_cut_func)
# No constraints for Max-Cut problems.
qubitOp_docplex, offset_docplex = docplex.get_operator(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()
x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('max-cut objective:', result['energy'] + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
###Output
energy: -1.5
max-cut objective: -4.0
solution: [0. 1. 0. 1.]
solution objective: 4.0
###Markdown
Running it on a quantum computerWe run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('max-cut objective:', result['energy'] + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
# run quantum algorithm with shots
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('max-cut objective:', result['energy'] + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
plot_histogram(result['eigvecs'][0])
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
###Output
energy: -1.5
time: 11.74726128578186
max-cut objective: -4.0
solution: [0 1 0 1]
solution objective: 4.0
###Markdown
[Optional] Checking that the full Hamiltonian made by ```docplex.get_operator``` gives the right cost
###Code
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = ExactEigensolver(qubitOp_docplex, k=1)
result = ee.run()
x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('max-cut objective:', result['energy'] + offset_docplex)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8, pos=pos)
###Output
energy: -1.5
max-cut objective: -4.0
solution: [0. 1. 0. 1.]
solution objective: 4.0
###Markdown
Traveling Salesman ProblemIn addition to being a notorious NP-complete problem that has drawn the attention of computer scientists and mathematicians for over two centuries, the Traveling Salesman Problem (TSP) has important bearings on finance and marketing, as its name suggests. Colloquially speaking, the traveling salesman is a person who goes from city to city to sell merchandise. The objective in this case is to find the shortest path that would enable the salesman to visit all the cities and return to his hometown, i.e. the city where he started traveling. By doing this, the salesman gets to maximize potential sales in the least amount of time. The problem derives its importance from its "hardness" and ubiquitous equivalence to other relevant combinatorial optimization problems that arise in practice. The mathematical formulation with some early analysis was proposed by W.R. Hamilton in the early 19th century. Mathematically the problem is, as in the case of Max-Cut, best abstracted in terms of graphs. The TSP on the nodes of a graph asks for the shortest *Hamiltonian cycle* that can be taken through each of the nodes. A Hamiltonian cycle is a closed path that uses every vertex of a graph once. The general solution is unknown and an algorithm that finds it efficiently (e.g., in polynomial time) is not expected to exist.Find the shortest Hamiltonian cycle in a graph $G=(V,E)$ with $n=|V|$ nodes and distances $w_{ij}$ (distance from vertex $i$ to vertex $j$). A Hamiltonian cycle is described by $n^2$ variables $x_{i,p}$, where $i$ represents the node and $p$ represents its order in a prospective cycle. The decision variable takes the value 1 if the solution occurs at node $i$ at time order $p$. We require that every node appears exactly once in the cycle, and that at each time step exactly one node occurs. This amounts to the two constraints (here and in the following, whenever not specified, the summands run over $0,1,\ldots,n-1$)$$\sum_{i} x_{i,p} = 1 ~~\forall p$$$$\sum_{p} x_{i,p} = 1 ~~\forall i.$$For nodes in our prospective ordering, if $x_{i,p}$ and $x_{j,p+1}$ are both 1, then there should be an energy penalty if $(i,j) \notin E$ (not connected in the graph). The form of this penalty is $$\sum_{i,j\notin E}\sum_{p} x_{i,p}x_{j,p+1}>0,$$ where the boundary condition of the Hamiltonian cycle, $(p=n)\equiv (p=0)$, is assumed. However, here we assume a fully connected graph and do not include this term. The distance that needs to be minimized is $$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}.$$Putting this all together in a single objective function to be minimized, we get the following:$$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}+ A\sum_p\left(1- \sum_i x_{i,p}\right)^2+A\sum_i\left(1- \sum_p x_{i,p}\right)^2,$$where $A$ is a free parameter. One needs to ensure that $A$ is large enough so that these constraints are respected. One way to do this is to choose $A$ such that $A > \mathrm{max}(w_{ij})$.Once again, it is easy to map the problem in this form to a quantum computer, and the solution will be found by minimizing an Ising Hamiltonian.
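Before mapping to qubits, it can help to evaluate this classical cost function directly for a candidate assignment. The sketch below does this with NumPy; the penalty weight `A=30` is an arbitrary choice satisfying $A > \mathrm{max}(w_{ij})$ for the 3-city distance matrix printed further down.

```python
import numpy as np

def tsp_cost(x, w, A):
    """Evaluate C(x) for a binary assignment matrix x[i, p] and distance matrix w."""
    n = w.shape[0]
    dist = sum(w[i, j] * x[i, p] * x[j, (p + 1) % n]
               for i in range(n) for j in range(n) for p in range(n))
    penalty = A * sum((1 - x[:, p].sum()) ** 2 for p in range(n))   # one node per time step
    penalty += A * sum((1 - x[i, :].sum()) ** 2 for i in range(n))  # each node visited once
    return dist + penalty

w_example = np.array([[0, 25, 19], [25, 0, 27], [19, 27, 0]], dtype=float)
x = np.zeros((3, 3))
x[[0, 1, 2], [0, 1, 2]] = 1          # one-hot encoding of the tour 0 -> 1 -> 2 -> 0
print(tsp_cost(x, w_example, A=30))  # feasible tour, so this is just the tour length (71.0)
```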
###Code
# Generating a graph of 3 nodes
n = 3
num_qubits = n ** 2
ins = tsp.random_tsp(n)
G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
colors = ['r' for node in G.nodes()]
pos = {k: v for k, v in enumerate(ins.coord)}
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
print('distance\n', ins.w)
###Output
distance
[[ 0. 25. 19.]
[25. 0. 27.]
[19. 27. 0.]]
###Markdown
Brute force approach
###Code
from itertools import permutations
def brute_force_tsp(w, N):
a=list(permutations(range(1,N)))
last_best_distance = 1e10
for i in a:
distance = 0
pre_j = 0
for j in i:
distance = distance + w[j,pre_j]
pre_j = j
distance = distance + w[pre_j,0]
order = (0,) + i
if distance < last_best_distance:
best_order = order
last_best_distance = distance
print('order = ' + str(order) + ' Distance = ' + str(distance))
return last_best_distance, best_order
best_distance, best_order = brute_force_tsp(ins.w, ins.dim)
print('Best order from brute force = ' + str(best_order) + ' with total distance = ' + str(best_distance))
def draw_tsp_solution(G, order, colors, pos):
G2 = G.copy()
n = len(order)
for i in range(n):
j = (i + 1) % n
G2.add_edge(order[i], order[j])
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G2, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
draw_tsp_solution(G, best_order, colors, pos)
###Output
order = (0, 1, 2) Distance = 71.0
Best order from brute force = (0, 1, 2) with total distance = 71.0
###Markdown
Mapping to the Ising problem
###Code
qubitOp, offset = tsp.get_operator(ins)
###Output
_____no_output_____
###Markdown
[Optional] Using DOcplex for mapping to the Ising problemUsing ```docplex.get_operator``` is a different way to create an Ising Hamiltonian for TSP: it builds the corresponding Ising Hamiltonian from an optimization model of TSP. An example of using ```docplex.get_operator``` is shown below.
###Code
# Create an instance of a model and variables
mdl = Model(name='tsp')
x = {(i,p): mdl.binary_var(name='x_{0}_{1}'.format(i,p)) for i in range(n) for p in range(n)}
# Object function
tsp_func = mdl.sum(ins.w[i,j] * x[(i,p)] * x[(j,(p+1)%n)] for i in range(n) for j in range(n) for p in range(n))
mdl.minimize(tsp_func)
# Constrains
for i in range(n):
mdl.add_constraint(mdl.sum(x[(i,p)] for p in range(n)) == 1)
for p in range(n):
mdl.add_constraint(mdl.sum(x[(i,p)] for i in range(n)) == 1)
qubitOp_docplex, offset_docplex = docplex.get_operator(mdl)
###Output
_____no_output_____
###Markdown
Checking that the full Hamiltonian gives the right cost
###Code
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()
print('energy:', result['energy'])
print('tsp objective:', result['energy'] + offset)
x = sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
###Output
energy: -600035.5
tsp objective: 71.0
feasible: True
solution: [0, 1, 2]
solution objective: 71.0
###Markdown
Running it on a quantum computerWe run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
###Code
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
print('energy:', result['energy'])
print('time:', result['eval_time'])
#print('tsp objective:', result['energy'] + offset)
x = sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
# run quantum algorithm with shots
seed = 10598
spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)
backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)
result = vqe.run(quantum_instance)
print('energy:', result['energy'])
print('time:', result['eval_time'])
#print('tsp objective:', result['energy'] + offset)
x = sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
plot_histogram(result['eigvecs'][0])
draw_tsp_solution(G, z, colors, pos)
###Output
_____no_output_____
###Markdown
[Optional] Checking that the full Hamiltonian made by ```docplex.get_operator``` gives the right cost
###Code
ee = ExactEigensolver(qubitOp_docplex, k=1)
result = ee.run()
print('energy:', result['energy'])
print('tsp objective:', result['energy'] + offset_docplex)
x = sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____ |
notebooks/Function approximation.ipynb | ###Markdown
Function approximation
In the previous chapters we only looked at tabular methods. Now we will look at approximate solutions. The problem with large state spaces is not just the memory needed for large tables, but the time and data needed to fill them accurately. In many of our target tasks, almost every state encountered will never have been seen before. To make sensible decisions in such states it is necessary to generalize from previous encounters with different states that are in some sense similar to the current one. In other words, the key issue is that of generalization. Function approximation is an instance of supervised learning, the primary topic studied in machine learning, artificial neural networks, pattern recognition, and statistical curve fitting. In theory, any of the methods studied in these fields can be used in the role of function approximator within reinforcement learning algorithms, although in practice some fit more easily into this role than others. In reinforcement learning, however, it is important that learning be able to occur online, while the agent interacts with its environment or with a model of its environment. To do this requires methods that are able to learn efficiently from incrementally acquired data. In addition, reinforcement learning generally requires function approximation methods able to handle nonstationary target functions. The novelty in this chapter is that the approximate function is represented not as a table but as a parameterized functional form with weight vector $w \in R^{d}$. We will write $v(s,w) \approx v_{\pi}(s)$ for the approximate value of state s given weight vector w. Typically, the number of weights (the dimensionality of w) is much less than the number of states and changing one weight changes the estimated value of many states. Consequently, when a single state is updated, the change generalizes from that state to affect the values of many other states. Such generalization makes the learning potentially more powerful but also potentially more difficult to manage and understand. Moreover, making one state's estimate more accurate invariably means making others' less accurate. We are obligated then to say which states we care most about. We must specify a state distribution $\mu(s)\ge 0$, $\sum \mu(s) = 1$, representing how much we care about the error in each state s.
Prediction
In the case of prediction, by the error in a state s we mean the square of the difference between the approximate value $\hat{v}(s,w)$ and the true value $v_{\pi}(s)$. Weighting this over the state space by $\mu$, we obtain a natural objective function, the Mean Squared Value Error$$VE(w) = \sum \mu(s)[v_{\pi}(s) - \hat{v}(s,w) ]^{2} $$Often $\mu(s)$ is chosen to be the fraction of time spent in s. Under on-policy training this is called the on-policy distribution. In continuing tasks, the on-policy distribution is the stationary distribution under $\pi$. For episodic tasks $$ \nu(s) = h(s) + \gamma \sum_{\overline{s}} \nu(\overline{s}) \sum_{a} \pi( a | \overline{s} ) \ p(s | \overline{s}, a) $$with $\nu(s)$ the average number of time steps spent in s for a single episode and $h(s)$ the probability that an episode starts in s. The on-policy distribution is then$$\mu(s) = \frac{\nu(s)}{\sum_{s'} \nu(s')}$$It is not completely clear that the VE is the right performance objective for reinforcement learning. Remember that our ultimate purpose, the reason we are learning a value function, is to find a better policy.
The best value function for this purpose is not necessarily the best for minimizing VE, but so far no better metric has been found.
Gradient descent
We sample the states and try to optimize w to get the examples correct. This means the strategy is to minimize the error on the observed examples. Stochastic gradient-descent (SGD) methods do this by adjusting the weight vector after each example by a small amount in the direction that would most reduce the error on that example:$$w_{t+1} = w_{t} - \frac{1}{2} \alpha \nabla [v_{\pi}(s_{t}) - \hat{v}(s_{t}, w_{t}) ]^{2}$$$$w_{t+1} = w_{t} + \alpha [v_{\pi}(s_{t}) - \hat{v}(s_{t}, w_{t}) ] \nabla \hat{v}(s_{t}, w_{t})$$where $\alpha$ is a positive step-size parameter and $\nabla f(w)$ is the column vector of partial derivatives w.r.t. $w$. In most cases we will not have an example at time t of the true target value $v_{\pi}(s_{t})$, but an approximation $U_{t}$. If $U_{t}$ is unbiased for each t, then $w_{t}$ is guaranteed to converge to a local optimum for decreasing $\alpha$. However, this is not guaranteed for a biased estimate, which is the case for bootstrapping estimates. Bootstrapping methods take into account the effect of changing the weight vector $w_{t}$ on the estimate, but ignore its effect on the target. They are therefore called semi-gradient methods. Often these methods are preferred over gradient methods for the following reasons. One, they typically enable significantly faster learning. Two, they enable learning to be continual and online without waiting until the end of an episode. TODO: add a (semi-)gradient method to estimate v with MC; add a semi-gradient TD(0) method (a sketch is given at the end of this section). State aggregation is a simple form of generalizing function approximation in which states are grouped together, with one estimated value (one component of the weight vector $w$) for each group. The value of a state is estimated as its group's component, and when the state is updated, that component alone is updated. State aggregation is a special case of stochastic gradient descent in which the gradient, $\nabla \hat{v}(s_{t},w_{t})$, is 1 for $s_{t}$'s group's component and 0 for the other components. For state aggregation the resulting learned function will have a typical staircasing effect.
Linear methods
One of the most important special cases of function approximation is that in which the approximate function, $\hat{v}(\cdot,w)$, is a linear function of the weight vector, $w$. Linear methods approximate the state-value function by the inner product between $w$ and $x(s)$:$$\hat{v}(s,w) = w^{T}x = \sum_{i=1}^{d} w_{i} x_{i}$$The vector $x(s)$ is called a feature vector representing state s. For linear methods, features are basis functions because they form a linear basis for the set of approximate functions. Constructing d-dimensional feature vectors to represent states is the same as selecting a set of d basis functions. It is natural to use SGD updates with linear function approximation. The gradient of the approximate value function with respect to w in this case is$$\nabla \hat{v}(s, w) = x(s)$$In particular, in the linear case there is only one optimum (or, in degenerate cases, one set of equally good optima), and thus any method that is guaranteed to converge to or near a local optimum is automatically guaranteed to converge to or near the global optimum. Choosing features appropriate to the task is an important way of adding prior domain knowledge to reinforcement learning systems.
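As a concrete illustration of the semi-gradient idea with linear features, here is a minimal sketch of semi-gradient TD(0) for prediction. The `env` interface with `reset()`/`step(action)` returning `(next_state, reward, done)`, the `policy`, and the `features(state)` map are assumptions; with state aggregation, `features` would simply return a one-hot indicator of the state's group, and a gradient Monte Carlo variant would use the full return $G_t$ as the target instead of the bootstrapped TD target.

```python
import numpy as np

def semi_gradient_td0(env, policy, features, n_features,
                      alpha=0.01, gamma=0.99, n_episodes=1000):
    """Estimate v_pi(s) ~ w.T x(s) with semi-gradient TD(0) and linear features."""
    w = np.zeros(n_features)
    for _ in range(n_episodes):
        state, done = env.reset(), False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            x = features(state)
            # The TD target bootstraps on the current estimate of the next state
            target = reward + (0.0 if done else gamma * w @ features(next_state))
            # For a linear v_hat the gradient w.r.t. w is just x
            w += alpha * (target - w @ x) * x
            state = next_state
    return w
```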
A limitation of the linear form is that it cannot take into account any interactions between features, such as the presence of feature i being good only in the absence of feature j. To get some intuitive feel for how to set the step-size parameter $\alpha$ manually, it is best to go back momentarily to the tabular case. There we can understand that a step size of $\alpha = 1$ will result in a complete elimination of the sample error after one target. We usually want to learn slower than this. In the tabular case, a step size of $\alpha = \frac{1}{10}$ would take about 10 experiences to converge approximately to the mean target, and if we wanted to learn in 100 experiences we would use $\alpha = \frac{1}{100}$. In general, if $\alpha = \frac{1}{\tau}$, then the tabular estimate for a state will approach the mean of its targets, with the most recent targets having the greatest effect, after about $\tau$ experiences with the state. With general function approximation there is not such a clear notion of number of experiences with a state, as each state may be similar to and dissimilar from all the others to various degrees. However, there is a similar rule that gives similar behavior in the case of linear function approximation. Suppose you wanted to learn in about $\tau$ experiences with substantially the same feature vector. A good rule of thumb for setting the step-size parameter of linear SGD methods is then $$\alpha = (\tau \ E[x^{T}x])^{-1}$$where $x$ is a random feature vector chosen from the same distribution as input vectors will be in the SGD. This method works best if the feature vectors do not vary greatly in length.
Non-linear methods
Artificial neural networks (ANNs) are widely used for nonlinear function approximation. Training the hidden layers of an ANN is a way to automatically create features appropriate for a given problem, so that hierarchical representations can be produced without relying exclusively on hand-crafted features. This has been an enduring challenge for artificial intelligence and explains why learning algorithms for ANNs with hidden layers have received so much attention over the years. ANNs typically learn by a stochastic gradient method.
Least squares TD
LSTD directly calculates the TD fixed point:$$w_{t} = \hat{A}_{t}^{-1}b$$with $\hat{A}_{t} = \sum_{k=0}^{t-1} x_{k}(x_{k}- \gamma x_{k+1})^{T} +\epsilon I$ ($\epsilon I$ is needed to make sure $\hat{A}_{t}$ is invertible) and $b = \sum_{k=0}^{t-1} R_{k+1} x_{k}$. Whether the greater data efficiency of LSTD is worth this computational expense depends on how large d is, how important it is to learn quickly, and the expense of other parts of the system. (O($d^{2}$) is still significantly more expensive than the O(d) of semi-gradient TD.)
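A minimal sketch of LSTD(0), assuming the observed transitions have already been converted to a list of `(x, reward, x_next)` tuples of feature vectors:

```python
import numpy as np

def lstd(transitions, n_features, gamma=0.99, epsilon=1e-3):
    """Least-squares TD: solve directly for the TD fixed point w = A^-1 b."""
    A = epsilon * np.eye(n_features)        # epsilon * I keeps A invertible
    b = np.zeros(n_features)
    for x, reward, x_next in transitions:
        A += np.outer(x, x - gamma * x_next)
        b += reward * x
    # Solving once at the end is O(d^3); incremental inverse updates give O(d^2) per step
    return np.linalg.solve(A, b)
```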
Memory based
In memory-based function approximation, training examples are simply saved in memory as they arrive (or at least a subset of the examples is saved) without updating any parameters. Then, whenever a query state's value estimate is needed, a set of examples is retrieved from memory and used to compute a value estimate for the query state. This approach is sometimes called lazy learning because processing training examples is postponed until the system is queried to provide an output. Unlike parametric methods, the approximating function's form is not limited to a fixed parameterized class of functions, such as linear functions or polynomials, but is instead determined by the training examples themselves, together with some means for combining them to output estimated values for query states. There are many different memory-based methods depending on how the stored training examples are selected and how they are used to respond to a query. Here, we focus on local-learning methods that approximate a value function only locally in the neighborhood of the current query state. These methods retrieve a set of training examples from memory whose states are judged to be the most relevant to the query state, where relevance usually depends on the distance between states: the closer a training example's state is to the query state, the more relevant it is considered to be, where distance can be defined in many different ways. After the query state is given a value, the local approximation is discarded. Examples of local approximation are nearest neighbor, weighted averaging and locally weighted regression. Because trajectory sampling is of such importance in reinforcement learning, memory-based local methods can focus function approximation on local neighborhoods of states (or state–action pairs) visited in real or simulated trajectories. There may be no need for global approximation because many areas of the state space will never (or almost never) be reached. In addition, memory-based methods allow an agent's experience to have a relatively immediate effect on value estimates in the neighborhood of the current state, in contrast with a parametric method's need to incrementally adjust parameters of a global approximation. Memory-based methods such as the weighted average and locally weighted regression methods described above depend on assigning weights to examples in the database depending on the distance between the example state and the query state. The function that assigns these weights is called a kernel function, or simply a kernel. Kernel functions numerically express how relevant knowledge about any state is to any other state. For many sets of feature vectors, kernel regression has a compact functional form that can be evaluated without any computation taking place in the d-dimensional feature space. In these cases, kernel regression is much less complex than directly using a linear parametric method with states represented by these feature vectors. This is the so-called "kernel trick" that allows effectively working in the high dimension of an expansive feature space while actually working only with the set of stored training examples. The kernel trick is the basis of many machine learning methods, and researchers have shown how it can sometimes benefit reinforcement learning.
Experience replay
The system stores the data discovered for [state, action, reward, next_state]. The learning phase is then logically separate from gaining experience, and based on taking random samples from this data. You still want to interleave the two processes, acting and learning, because improving the policy will lead to different behaviour than the older policy that generated the stored experience. We also don't want the data we feed to the learner to be correlated with each other in any way. Random sampling of experiences breaks the temporal correlation of behavior and distributes/averages it over many of its previous states. By doing so, we avoid significant oscillations or divergence in our model, problems that can arise from correlated data.
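A minimal sketch of a uniform experience-replay buffer (the capacity and batch size here are arbitrary choices):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (state, action, reward, next_state, done) tuples and samples them uniformly."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)   # oldest experience is dropped automatically

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform random sampling breaks the temporal correlation of consecutive steps
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```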
Advantages of experience replay:
- More efficient use of previous experience, by learning with it multiple times. This is key when gaining real-world experience is costly: you can get full use of it. It is especially useful when there is low variance in immediate outcomes (reward, next state) given the same state, action pair.
- Better convergence behaviour when training a function approximator. Partly this is because the data is more like the i.i.d. data assumed in most supervised learning convergence proofs.
Disadvantage of experience replay:
- It is harder to use multi-step learning algorithms, such as $Q(\lambda)$, which can be tuned to give better learning curves by balancing between bias (due to bootstrapping) and variance (due to delays and randomness in long-term outcomes). Multi-step DQN with experience replay is one of the extensions explored in the paper Rainbow: Combining Improvements in Deep Reinforcement Learning.
Use case: experience replay on deep Q-networks (DQN)
Here a convolutional neural net is learned with pixels as input and Q values as output. Additionally, a buffer is kept for the experience replay, from which random data is sampled. We want to minimize the difference between our current Q and target Q: $$ L_{i}(w_{i}) = \mathbb{E} [(R_{ss'}^{a} + \gamma \max_{a'} Q(s', a', \overline{w}_{i}) - Q(s,a, w_{i}))^{2}]$$SGD is then used to reach this minimum. To keep the non-linear approximator stable, two networks were used: one that is learning, and one with frozen weights $\overline{w}_{i}$ that generates the targets for the sampled batches. After a fixed number of steps, the frozen weights are updated with the latest $w_{i}$.
Control episodic
In this case the approximation of the value function can be replaced by an approximation of the q function. We choose the action for which the q function is maximal in the current state. Policy improvement is then done by changing the estimation policy to a soft approximation of the greedy policy, such as the $\epsilon$-greedy policy. Actions are selected according to this same policy. This only works well when the action set is discrete and not too large.
Continual
Like the discounted setting, the average reward setting applies to continuing problems, problems for which the interaction between agent and environment goes on and on forever without termination or start states. Unlike that setting, however, there is no discounting: the agent cares just as much about delayed rewards as it does about immediate reward. The discounted setting is problematic with function approximation, and thus the average-reward setting is needed to replace it. To see why, consider an infinite sequence of returns with no beginning or end, and no clearly identified states. The states might be represented only by feature vectors, which may do little to distinguish the states from each other. As a special case, all of the feature vectors may be the same. Thus one really has only the reward sequence (and the actions), and performance has to be assessed purely from these. How could it be done? One way is by averaging the rewards over a long interval; this is the idea of the average-reward setting. How could discounting be used? Well, for each time step we could measure the discounted return. Some returns would be small and some big, so again we would have to average them over a sufficiently large time interval.
In the continuing setting there are no starts and ends, and no special time steps, so there is nothing else that could be done. However, if you do this, it turns out that the average of the discounted returns is proportional to the average reward. In fact, for policy $\pi$, the average of the discounted returns is always $r(\pi)/(1 - \gamma)$, that is, it is essentially the average reward, $r(\pi)$. In particular, the ordering of all policies in the average discounted return setting would be exactly the same as in the average-reward setting. Moreover, it is no longer true that if we change the policy to improve the discounted value of one state then we are guaranteed to have improved the overall policy in any useful sense. That guarantee was key to the theory of our reinforcement learning control methods. With function approximation we have lost it! In fact, the lack of a policy improvement theorem is also a theoretical lacuna for the total-episodic and average-reward settings. Once we introduce function approximation we can no longer guarantee improvement for any setting. In the average-reward setting, the quality of a policy $\pi$ is defined as the average rate of reward, or simply average reward, while following that policy, which we denote as $r(\pi)$:$$ r(\pi) = \lim_{h \rightarrow \infty} \frac{1}{h}\sum_{t=1}^{h}\mathbb{E}[R_{t} \ | \ S_{0}, A_{0:t-1} \sim \pi] $$$$ r(\pi) = \sum_{s} \mu_{\pi}(s) \sum_{a} \pi(a \ | \ s) \sum_{s', r} p(s', r \ | \ s,a) \ r $$Returns are now defined as differences w.r.t. the average reward:$$ G_{t} = R_{t+1} - r(\pi) + R_{t+2} - r(\pi) + R_{t+3} - r(\pi) + \ldots$$This results in the following algorithm for differential semi-gradient Sarsa (a code sketch is given below):
- Input: $\hat{q}$, a differentiable action-value function parameterisation
- Initialize: step sizes $\alpha,\ \beta > 0 $, value-function weights $ w \in \mathbb{R}^{d}$, average reward estimate $\overline{R} \in \mathbb{R}$ arbitrarily (e.g. 0), state S and action A
- Loop for each step:
  - Take action A, observe R, S'
  - Choose A' as a function of $\hat{q}(S', \cdot , w)$ (e.g. $\epsilon$-greedy)
  - $ \delta = R - \overline{R} + \hat{q}(S', A', w) - \hat{q}(S, A, w)$
  - $\overline{R} = \overline{R} + \beta \delta$
  - $w = w + \alpha \delta \nabla \hat{q}(S, A, w)$
  - $S = S'$, $A = A'$
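A minimal sketch of this update loop with linear features (the `env.step(action)` interface returning `(next_state, reward)` for a continuing task, the `features(state)` map, and the epsilon-greedy action selection are assumptions):

```python
import numpy as np

def differential_sarsa(env, features, n_features, n_actions,
                       alpha=0.01, beta=0.01, epsilon=0.1, n_steps=100_000):
    """Differential semi-gradient Sarsa for the average-reward setting."""
    w = np.zeros((n_actions, n_features))    # one linear q_hat per action
    avg_reward = 0.0
    state = env.reset()
    x = features(state)
    action = np.random.randint(n_actions)
    for _ in range(n_steps):
        next_state, reward = env.step(action)
        x_next = features(next_state)
        # epsilon-greedy with respect to the current q_hat
        if np.random.rand() < epsilon:
            next_action = np.random.randint(n_actions)
        else:
            next_action = int(np.argmax(w @ x_next))
        delta = reward - avg_reward + w[next_action] @ x_next - w[action] @ x
        avg_reward += beta * delta
        w[action] += alpha * delta * x       # gradient of a linear q_hat is x
        x, action = x_next, next_action
    return w, avg_reward
```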
Off-policy
The tabular off-policy methods readily extend to semi-gradient algorithms, but these algorithms do not converge as robustly as they do under on-policy training. Recall that in off-policy learning we seek to learn a value function for a target policy $\pi$, given data due to a different behavior policy b. In the prediction case, both policies are static and given, and we seek to learn either state values or action values. In the control case, action values are learned, and both policies typically change during learning: $\pi$ being the greedy policy with respect to $\hat{q}$, and b being something more exploratory such as the $\epsilon$-greedy policy with respect to $\hat{q}$. The challenge of off-policy learning can be divided into two parts, one that arises in the tabular case and one that arises only with function approximation. The first part of the challenge has to do with the target of the update (not to be confused with the target policy), and the second part has to do with the distribution of the updates. The techniques related to importance sampling deal with the first part; these may increase variance but are needed in all successful algorithms, tabular and approximate. The second part of the challenge arises because the distribution of updates in the off-policy case does not follow the on-policy distribution, and the on-policy distribution is important to the stability of semi-gradient methods. Two general approaches have been explored to deal with this. One is to use importance sampling methods again, this time to warp the update distribution back to the on-policy distribution, so that semi-gradient methods are guaranteed to converge (in the linear case). The other is to develop true gradient methods that do not rely on any special distribution for stability. An example of this is TDC (TD(0) with gradient correction). The danger of instability and divergence arises whenever we combine all of the following three elements, making up what we call the deadly triad:
- Function approximation
- Bootstrapping
- Off-policy training
In particular, note that the danger is not due to control or to generalized policy iteration. Those cases are more complex to analyze, but the instability arises in the simpler prediction case whenever it includes all three elements of the deadly triad. The danger is also not due to learning or to uncertainties about the environment, because it occurs just as strongly in planning methods, such as dynamic programming, in which the environment is completely known. If any two elements of the deadly triad are present, but not all three, then instability can be avoided. Additionally, with off-policy training convergence to the correct value is not guaranteed when combined with function approximation. Off-policy learning in combination with function approximation that converges quickly is still an active field of study.
Function approximation on capture chess
Environment
There is a maximum of 25 moves, after that the environment resets. Our agent only plays white. The black player is part of the environment and returns random moves. The reward structure is not based on winning/losing/drawing but on capturing black pieces:
- pawn capture: +1
- knight capture: +3
- bishop capture: +3
- rook capture: +5
- queen capture: +9
Our state is represented by an 8x8x8 array:
- Plane 0 represents pawns
- Plane 1 represents rooks
- Plane 2 represents knights
- Plane 3 represents bishops
- Plane 4 represents queens
- Plane 5 represents kings
- Plane 6 represents 1/fullmove number (needed for the Markov property)
- Plane 7 represents can-claim-draw
White pieces have the value 1, black pieces are -1
###Code
from RLC.capture_chess.agent import RandomAgent, QExperienceReplayAgent, ReinforceAgent
from RLC.capture_chess.game import play_fixed_role, play_game, play_alternate_role
#from IPython.display import SVG, display
from os import path
def epsilon_zero(k):
return 0
random_agent = RandomAgent()
# linear network (8,8,8) env to (64,64) state space => 32768 weights.
# state aggregation or tile coding can be used to reduce the nr of weights used.
# The convolutional model uses 2 1x1 convulutions and takes the outer product of the resulting arrays.
# This results in only 18 trainable weights!
# Advantage: More parameter sharing -> faster convergence
# Disadvantage: Information gets lost -> lower performance
# key insights:
# more experience learning is more stable.
# more episodes between updating feeding network is also more stable, but learning decreases.
# difficult to see from td-errors if one model is better than another. (playing against each other works well.)
# linear model is harder to train because of all the parameters and takes longer.
# we will apply Q learning with a feeding and learning network with prioritized experience learning.
log_dir='logs/capture/q_learner'
saved_q_model = 'models/q_conv_it_500_feeding_20'
!rm -rf $log_dir # currently loading writer not supported
q_agent = QExperienceReplayAgent(network='conv', gamma=0.1, lr=0.07, c_feeding=20, log_dir = log_dir)
if not q_agent.load(saved_q_model):
play_fixed_role(q_agent, random_agent, 500)
# save model
q_agent.save(saved_q_model)
q_agent.set_learn(False)
q_agent.set_epsilon_function(epsilon_zero)
else:
# load model from file
print("loaded model from file")
q_agent.set_learn(False)
q_agent.set_epsilon_function(epsilon_zero)
# check against baseline (random player)
play_fixed_role(q_agent, random_agent, 100)
# we will apply the REINFORCE algorithm.
log_dir='logs/capture/reinforce_learner'
saved_reinforce_model = 'models/reinforce_conv_it_500'
!rm -rf $log_dir # currently loading writer not supported
reinforce_agent = ReinforceAgent(gamma=0.95, lr=0.1, log_dir = log_dir)
if not reinforce_agent.load(saved_reinforce_model):
play_fixed_role(reinforce_agent, random_agent, 500)
# save agent
reinforce_agent.save(saved_reinforce_model)
reinforce_agent.set_learn(False)
else:
print("loaded model")
reinforce_agent.set_learn(False)
# check against baseline (random player)
play_fixed_role(reinforce_agent, random_agent, 100)
play_fixed_role(reinforce_agent, random_agent, 100)
env = play_game(reinforce_agent, random_agent)
print(env.to_pgn())
###Output
[Event "?"]
[Site "?"]
[Date "????.??.??"]
[Round "?"]
[White "?"]
[Black "?"]
[Result "*"]
1. Nf3 g5 2. b3 Nc6 3. d4 d5 4. Kd2 Nb4 5. Na3 Qd7 6. c4 h5 7. Rb1 Nh6 8. Qc2 b5 9. e3 Rh7 10. Qd3 c5 11. Qe2 Na6 12. Rg1 Rh8 13. Kd1 Qb7 14. Ke1 e5 15. Rh1 f5 16. Rg1 Ng8 17. b4 Qb6 18. g4 Qb7 19. Qd3 Kf7 20. e4 fxe4 21. Qb3 Bh6 22. Rb2 Kg6 23. Bh3 Rh7 24. Nc2 Rh8 25. Qa4 Qc6 26. Rb3 Bf5 27. Nxe5+ Kg7 28. Kd2 Nc7 29. Nxc6 Rb8 30. Na3 Bh7 31. Ke2 a5 32. Rf3 cxd4 33. Rg2 Rd8 34. Rf4 Rd7 35. gxh5 Rd6 36. Kd1 Na6 37. Nb8 Rf6 38. bxa5 Kf8 39. Nxa6 bxa4 40. Rxg5 Ke8 41. Rxd5 Rb6 42. Rb5 e3 43. Nb4 Rb7 44. Bf5 Rg7 45. Bxh7 exf2 46. Bxg8 f1=N 47. Rxf1 Rhxg8 48. Bxh6 Rf8 49. Re1+ Kd7 50. Nc6 Ra8 51. Bxg7 Rxa5 *
|
research/notebooks/190922 Generating Structures From Predictions.ipynb | ###Markdown
The goal of this script will be to generate a function that, essentially, can take a coordinate tensor and a mapping between those coordinates and atom identifiers (names) and create/write a PDB file with that information.
###Code
import prody as pr
import numpy as np
import sys
import os
os.chdir("/home/jok120/protein-transformer/")
sys.path.append("/home/jok120/protein-transformer/scripts")
sys.path.append("/home/jok120/protein-transformer/scripts/utils")
import torch
from tqdm import tqdm
from prody import *
import numpy as np
from os.path import basename, splitext
import transformer.Models
import torch.utils.data
from dataset import ProteinDataset, paired_collate_fn, paired_collate_fn_with_len
from protein.Structure import generate_coords_with_tuples, generate_coords
from losses import inverse_trig_transform, copy_padding_from_gold, drmsd_loss_from_coords, mse_over_angles, combine_drmsd_mse
from protein.Sidechains import SC_DATA, ONE_TO_THREE_LETTER_MAP, THREE_TO_ONE_LETTER_MAP
from utils.structure_utils import onehot_to_seq
###Output
_____no_output_____
###Markdown
1. Load a model given a checkpoint (and maybe some args.)
###Code
def load_model(chkpt_path):
""" Given a checkpoint path, loads and returns the specified transformer model. Assumes """
chkpt = torch.load(chkpt_path)
model_args = chkpt['settings']
model_state = chkpt['model_state_dict']
model_args.postnorm = False
print(model_args)
the_model = transformer.Models.Transformer(model_args,
d_k=model_args.d_k,
d_v=model_args.d_v,
d_model=model_args.d_model,
d_inner=model_args.d_inner_hid,
n_layers=model_args.n_layers,
n_head=model_args.n_head,
dropout=model_args.dropout)
the_model.load_state_dict(model_state)
return the_model
model = load_model("data/checkpoints/casp12_30_ln_11_best.chkpt")
###Output
Namespace(batch_size=8, buffering_mode=1, chkpt_path='./data/checkpoints/casp12_30_ln_11', clip=1.0, cluster=False, combined_loss=True, cuda=True, d_inner_hid=32, d_k=12, d_model=64, d_v=12, d_word_vec=64, data='data/proteinnet/casp12_190809_30xsmall.pt', dropout=0, early_stopping=None, epochs=40, eval_train=False, learning_rate=1e-05, log=None, log_file='./data/logs/casp12_30_ln_11.train', lr_scheduling=False, max_token_seq_len=3303, n_head=8, n_layers=6, n_warmup_steps=1000, name='casp12_30_ln_11', no_cuda=False, optimizer='adam', postnorm=False, proteinnet=True, restart=False, rnn=False, save_mode='best', train_only=False, without_angle_means=False)
###Markdown
2. Load some data.
###Code
def get_data_loader(data_path, n=0, subset="test"):
""" Given a subset of a dataset as a python dictionary file to make predictions from,
this function selects n items at random from that dataset to predict. It then returns a DataLoader for those
items, along with a list of ids.
"""
data = torch.load(data_path)
data_subset = data[subset]
    if n == 0:
train_loader = torch.utils.data.DataLoader(
ProteinDataset(
seqs=data_subset['seq'],
crds=data_subset['crd'],
angs=data_subset['ang'],
),
num_workers=2,
batch_size=1,
collate_fn=paired_collate_fn,
shuffle=False)
return train_loader, data_subset["ids"]
# We just want to predict a few examples
to_predict = set([s.upper() for s in np.random.choice(data_subset["ids"], n)]) # ["2NLP_D", "3ASK_Q", "1SZA_C"]
will_predict = []
ids = []
seqs = []
angs = []
crds = []
for i, prot in enumerate(data_subset["ids"]):
if prot.upper() in to_predict and prot.upper() not in will_predict:
seqs.append(data_subset["seq"][i])
angs.append(data_subset["ang"][i])
crds.append(data_subset["crd"][i])
ids.append(prot)
will_predict.append(prot.upper())
assert len(seqs) == n and len(angs) == n or (len(seqs) == len(angs) and len(seqs) < n)
data_loader = torch.utils.data.DataLoader(
ProteinDataset(
seqs=seqs,
angs=angs,
crds=crds),
num_workers=2,
batch_size=1,
collate_fn=paired_collate_fn,
shuffle=False)
return data_loader, ids
data_loader, ids = get_data_loader('data/proteinnet/casp12_190809_30xsmall.pt')
data_iter = iter(data_loader)
for i in range(8):
next(data_iter)
###Output
_____no_output_____
###Markdown
3. Use the model to make a prediction
###Code
device = torch.device('cpu')
src_seq, src_pos_enc, tgt_ang, tgt_pos_enc, tgt_crds, tgt_crds_enc = next(data_iter)
print(src_seq.shape)
tgt_ang_no_nan = tgt_ang.clone().detach()
tgt_ang_no_nan[torch.isnan(tgt_ang_no_nan)] = 0
pred = model(src_seq, src_pos_enc, tgt_ang_no_nan, tgt_pos_enc)
d_loss, d_loss_normalized, r_loss = drmsd_loss_from_coords(pred, tgt_crds, src_seq[:,1:], device,
return_rmsd=True)
m_loss = mse_over_angles(pred, tgt_ang[:,1:]).to('cpu')
c_loss = combine_drmsd_mse(d_loss, m_loss)
d_loss, r_loss, m_loss, c_loss
pred = inverse_trig_transform(pred).squeeze()
src_seq = src_seq.squeeze()
coords = generate_coords(pred, pred.shape[0],src_seq, device)
coords.shape, tgt_crds.shape
one_letter_seq = onehot_to_seq(src_seq[:,1:].squeeze().detach().numpy())
one_letter_seq
cur_map = get_13atom_mapping(one_letter_seq)
title = "0924g_pred.pdb"
ttitle = title.replace("pred", "true")
pdbc = PDB_Creator(coords.squeeze(), cur_map)
pdbc.save_pdb(title)
pdbc = PDB_Creator(tgt_crds.squeeze(), cur_map)
pdbc.save_pdb(ttitle)
###Output
PDB written to 0924g_pred.pdb.
PDB written to 0924g_true.pdb.
###Markdown
3b. Turn off teacher forcing for predicting
4. Create a mapping from input seq to atom name list
###Code
atom_map_13 = {}
for one_letter in ONE_TO_THREE_LETTER_MAP.keys():
atom_map_13[one_letter] = ["N", "CA", "C"] + list(SC_DATA[ONE_TO_THREE_LETTER_MAP[one_letter]]["predicted"])
atom_map_13[one_letter].extend(["PAD"]*(13-len(atom_map_13[one_letter])))
def get_13atom_mapping(seq):
mapping = []
for residue in seq:
mapping.append((ONE_TO_THREE_LETTER_MAP[residue], atom_map_13[residue]))
return mapping
###Output
_____no_output_____
###Markdown
5. Given a coordinate tensor and an atom mapping, create a PDB file
###Code
class PDB_Creator(object):
def __init__(self, coords, mapping, atoms_per_res=13):
self.coords = coords.detach().numpy()
self.mapping = mapping
self.atoms_per_res = atoms_per_res
self.format_str = "{:6s}{:5d} {:^4s}{:1s}{:3s} {:1s}{:4d}{:1s} {:8.3f}{:8.3f}{:8.3f}{:6.2f}{:6.2f} {:>2s}{:2s}"
self.atom_nbr = 1
self.res_nbr = 1
self.defaults = {"alt_loc": "",
"chain_id": "",
"insertion_code": "",
"occupancy": 1,
"temp_factor": 0,
"element_sym": "",
"charge": ""}
assert self.coords.shape[0] % self.atoms_per_res == 0, f"Coords is not divisible by {atoms_per_res}. {self.coords.shape}"
self.peptide_bond_full = np.asarray([[0.519, -2.968, 1.340], # CA
[2.029, -2.951, 1.374], # C
[2.654, -2.667, 2.392], # O
[2.682, -3.244, 0.300]]) # next-N
self.peptide_bond_mobile = np.asarray([[0.519, -2.968, 1.340], # CA
[2.029, -2.951, 1.374], # C
[2.682, -3.244, 0.300]]) # next-N
def get_oxy_coords(self, ca, c, n):
target_coords = np.array([ca, c, n])
t = calcTransformation(self.peptide_bond_mobile, target_coords)
aligned_peptide_bond = t.apply(self.peptide_bond_full)
return aligned_peptide_bond[2]
def coord_generator(self):
coord_idx = 0
while coord_idx < self.coords.shape[0]:
if coord_idx + self.atoms_per_res + 1 < self.coords.shape[0]:
next_n = self.coords[coord_idx + self.atoms_per_res + 1]
else:
# TODO: Fix oxygen placement for final residue
next_n = self.coords[-1] +np.array([1.2, 0, 0])
yield self.coords[coord_idx:coord_idx + self.atoms_per_res], next_n
coord_idx += self.atoms_per_res
def get_line_for_atom(self, res_name, atom_name, atom_coords, missing=False):
if missing:
occupancy = 0
else:
occupancy = self.defaults["occupancy"]
return self.format_str.format("ATOM",
self.atom_nbr,
atom_name,
self.defaults["alt_loc"],
res_name,
self.defaults["chain_id"],
self.res_nbr,
self.defaults["insertion_code"],
atom_coords[0],
atom_coords[1],
atom_coords[2],
occupancy,
self.defaults["temp_factor"],
atom_name[0],
self.defaults["charge"])
def get_lines_for_residue(self, res_name, atom_names, coords, next_n):
residue_lines = []
for atom_name, atom_coord in zip(atom_names, coords):
            if atom_name == "PAD" or np.isnan(atom_coord).sum() > 0:
continue
# if np.isnan(atom_coord).sum() > 0:
# residue_lines.append(self.get_line_for_atom(res_name, atom_name, atom_coord, missing=True))
# self.atom_nbr += 1
# continue
residue_lines.append(self.get_line_for_atom(res_name, atom_name, atom_coord))
self.atom_nbr += 1
try:
oxy_coords = self.get_oxy_coords(coords[1], coords[2], next_n)
residue_lines.append(self.get_line_for_atom(res_name, "O", oxy_coords))
self.atom_nbr += 1
except ValueError:
pass
return residue_lines
def get_lines_for_protein(self):
self.lines = []
self.res_nbr = 1
self.atom_nbr = 1
mapping_coords = zip(self.mapping, self.coord_generator())
prev_n = torch.tensor([0,0,-1])
for (res_name, atom_names), (res_coords, next_n) in mapping_coords:
self.lines.extend(self.get_lines_for_residue(res_name, atom_names, res_coords, next_n))
prev_n = res_coords[0]
self.res_nbr += 1
return self.lines
def make_header(self, title):
return f"REMARK {title}"
def make_footer(self):
return "TER\nEND \n"
def save_pdb(self, path, title="test"):
self.get_lines_for_protein()
self.lines = [self.make_header(title)] + self.lines + [self.make_footer()]
with open(path, "w") as outfile:
outfile.write("\n".join(self.lines))
print(f"PDB written to {path}.")
def get_seq(self):
return "".join([THREE_TO_ONE_LETTER_MAP[m[0]] for m in self.mapping])
cur_map = get_13atom_mapping(one_letter_seq)
title = "0924f_pred.pdb"
ttitle = title.replace("pred", "true")
pdbc = PDB_Creator(coords.squeeze(), cur_map)
pdbc.save_pdb(title)
pdbc = PDB_Creator(tgt_crds.squeeze(), cur_map)
pdbc.save_pdb(ttitle)
# Align
# p = parsePDB(title)
# t = parsePDB(ttitle)
# print(t.getCoords().shape, p.getCoords().shape)
# tr = calcTransformation(t.getCoords(), p.getCoords())
# t.setCoords(tr.apply(t.getCoords()))
# writePDB(ttitle, t)
import prody
def do_a_prediction(title, data_iter):
src_seq, src_pos_enc, tgt_ang, tgt_pos_enc, tgt_crds, tgt_crds_enc = next(data_iter)
tgt_ang_no_nan = tgt_ang.clone().detach()
tgt_ang_no_nan[torch.isnan(tgt_ang_no_nan)] = 0
pred = model(src_seq, src_pos_enc, tgt_ang_no_nan, tgt_pos_enc)
# Calculate loss
d_loss, d_loss_normalized, r_loss = drmsd_loss_from_coords(pred, tgt_crds, src_seq, device,
return_rmsd=True)
m_loss = mse_over_angles(pred, tgt_ang).to('cpu')
# Generate coords
pred = inverse_trig_transform(pred).squeeze()
src_seq = src_seq.squeeze()
coords = generate_coords(pred, pred.shape[0],src_seq, device)
# Generate coord, atom_name mapping
one_letter_seq = onehot_to_seq(src_seq.squeeze().detach().numpy())
cur_map = get_13atom_mapping(one_letter_seq)
# Make PDB Creator objects
pdb_pred = PDB_Creator(coords.squeeze(), cur_map)
pdb_true = PDB_Creator(tgt_crds.squeeze(), cur_map)
# Save PDB files
pdb_pred.save_pdb(f"{title}_pred.pdb")
pdb_true.save_pdb(f"{title}_true.pdb")
# Align PDB files
p = parsePDB(f"{title}_pred.pdb")
t = parsePDB(f"{title}_true.pdb")
tr = calcTransformation(p.getCoords()[:-1], t.getCoords())
p.setCoords(tr.apply(p.getCoords()))
writePDB(f"{title}_pred.pdb", p)
print("Constructed PDB files for", title, ".")
do_a_prediction(8, data_iter)
prody.apps.prody_apps.prody_align
###Output
_____no_output_____ |
Andrew Ng - Coursera/Week 2/Multiple Features Regression.ipynb | ###Markdown
Multiple Features
###Code
import pandas as pd
import numpy as np
size =[2104,1416,1534,852]
nbr_bedrooms = [5,3,3,2]
nbr_floors = [1,2,2,1]
age = [45,40,30,36]
price = [460,232,315,178]
d = {'size':size,'nbr_bedrooms':nbr_bedrooms,'nbr_floors':nbr_floors,'age':age,'price':price}
df = pd.DataFrame(d)
###Output
_____no_output_____
###Markdown
1. Definitions
###Code
df
###Output
_____no_output_____
###Markdown
$y$ : the element to predict here is the **price** $x_n$ : the features used to predict y **(size, nbr of bedrooms, nbr of floors, age)** $n$ : number of features. Here $n = 4$ $x^{(i)} : $ input of $i^{th}$ training example * **ex:** $x^{(2)}$, with $x^{(2)}$ being a vector
###Code
#(pandas is indexed at 0)
df.loc[1,['size','nbr_bedrooms','nbr_floors','age']]
###Output
_____no_output_____
###Markdown
$x^{(i)}_j$ : value of feature $j$ in $i^{th}$ training example * **ex:** $x^{(2)}_3$
###Code
#(pandas is indexed at 0)
df.iloc[2][2]
###Output
_____no_output_____
###Markdown
2. Hypothesis **General rule** $h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + ... + \theta_nx_n$ **In our case** $h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_3 + \theta_4x_4$ We write the hypothesis formula in terms of matrices, with $x_0 = 1$, which gives $\theta_0x_0 = \theta_0$. Features vector as $x$ Parameters vector as $\theta$ $ x = \begin{bmatrix}x_0 \\x_1 \\x_2 \\x_3 \\x_4 \end{bmatrix}$ $\space \space \space$ $\theta = \begin{bmatrix} \theta_0 \\ \theta_1 \\\theta_2 \\ \theta_3 \\ \theta_4 \end{bmatrix}$ To end up with our hypothesis formula, we can think of it as the product of 2 vectors: $x$ and the transpose of $\theta$ $ x = \begin{bmatrix}x_0 \\x_1 \\x_2 \\x_3 \\x_4 \end{bmatrix}$ $\space \space \space$ $\theta^T = \begin{bmatrix} \theta_0 & \theta_1 &\theta_2 & \theta_3 & \theta_4 \end{bmatrix}$ So we can reduce the formula to: $h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + ... + \theta_nx_n = \theta^Tx$ Hypothesis computing in Python
###Code
from sympy import *
X = Matrix(['x0','x1','x2','x3','x4'])
X
T = Matrix([['T0','T1','T2','T3','T4']])
T
H = T.multiply(X)
H
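# As an extra illustration (not in the original lecture code): evaluating the hypothesis
# numerically for the first training example, using arbitrary, made-up theta values.
import numpy as np
theta_num = np.array([80.0, 0.1, 10.0, 3.0, -2.0])   # assumed values, for illustration only
x1 = np.array([1, 2104, 5, 1, 45])                   # x0 = 1 prepended to the first example
h_num = theta_num @ x1                               # theta^T x
h_num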
###Output
_____no_output_____ |
prepare.ipynb | ###Markdown
Alps - 3D GPS velocityThis is a compilation of 3D GPS velocities for the Alps.The horizontal velocities are reference to the Eurasian frame. All velocity components and even the position have error estimates,which is very useful and rare to find in a lot of datasets.**Source:** [Sánchez et al. (2018)](https://doi.org/10.1594/PANGAEA.886889).**License:** CC-BY-3.0 NotesHere, we download the data from 3 separate files (coordinates, vertical velocity, horizontal velocities) and make sure they are aligned and represent the same stations. There are some mistakes in the station names of horizontal velocity file that we fix manually (verified by the coordinates).
###Code
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import verde as vd
import pooch
import pyproj
import pygmt
###Output
_____no_output_____
###Markdown
Download the dataUse [Pooch](https://github.com/fatiando/pooch) to download the original data file to our computer.
###Code
fname_position = pooch.retrieve(
url="https://store.pangaea.de/Publications/Sanchez-etal_2018/ALPS2017_NEH.CRD",
known_hash="sha256:24b88a0e5ab6ea93c67424ef52542d8b8a8254a150284e1a54afddbfd93e4399",
)
fname_velocity = pooch.retrieve(
url="https://store.pangaea.de/Publications/Sanchez-etal_2018/ALPS2017_NEH.VEL",
known_hash="sha256:0f2eff87a39260e2b3218897763dbfecdf0f464bf877bef460eff34a70e00aa7",
)
fname_velocity_eurasia = pooch.retrieve(
url="https://store.pangaea.de/Publications/Sanchez-etal_2018/ALPS2017_REP.VEL",
known_hash="sha256:578677246230e893c828205391d262da4af39bb24a8ca66ff5a95a88c71fe509",
)
for fname in [fname_position, fname_velocity, fname_velocity_eurasia]:
print(f"size: {os.path.getsize(fname) / 1e6} Mb")
###Output
size: 0.03597 Mb
size: 0.036503 Mb
size: 0.013604 Mb
###Markdown
Read the dataThese data are in a strange format and getting pandas to read it would be more work than parsing it by hand. So that's what we're going to do. First, the horizontal velocities, since there are fewer points and we will only want the vertical velocities and positions of these stations.
###Code
station = []
velocity_north_mm_yr = []
velocity_north_error_mm_yr = []
velocity_east_mm_yr = []
velocity_east_error_mm_yr = []
velocity_up_mm_yr = []
velocity_up_error_mm_yr = []
with open(fname_velocity_eurasia, encoding="latin-1") as input_file:
for i, line in enumerate(input_file):
if i < 19 or not line.strip():
continue
columns = line.split()
station_id = columns[0]
# Fix these names manually.
# They were confirmed by comparing the coordinates.
if station_id == "CH1Z":
station_id = "CHIZ"
if station_id == "IE1G":
station_id = "IENG"
values = columns[3:]
station.append(station_id)
velocity_east_mm_yr.append(1e3 * float(values[0]))
velocity_north_mm_yr.append(1e3 * float(values[1]))
velocity_east_error_mm_yr.append(1e3 * float(values[2]))
velocity_north_error_mm_yr.append(1e3 * float(values[3]))
# Merge everything into a DataFrame.
# Use the station ID as the index to help us merge the data later.
data_horizontal = pd.DataFrame(
data={
"station_id": station,
"velocity_east_mmyr": velocity_east_mm_yr,
"velocity_north_mmyr": velocity_north_mm_yr,
"velocity_east_error_mmyr": velocity_east_error_mm_yr,
"velocity_north_error_mmyr": velocity_north_error_mm_yr,
},
index=station,
)
data_horizontal
###Output
_____no_output_____
###Markdown
Now load the position and vertical velocity, keeping only the points that have the horizontal components as well.
###Code
station = []
latitude = []
latitude_error_m = []
longitude = []
longitude_error_m = []
height_m = []
height_error_m = []
with open(fname_position, encoding="latin-1") as input_file:
for i, line in enumerate(input_file):
if i < 15 or i > 304 or not line.strip():
continue
columns = line.split()
if len(columns) == 12:
station_id = columns[0]
values = columns[2:8]
else:
station_id = columns[0]
values = columns[1:7]
# Only interested in the stations that have horizontal
if station_id not in data_horizontal.station_id:
continue
# Skip repeated stations because it's easier this way
if station_id in station:
continue
values = [float(x) for x in values]
# Make longitude be in [-180, 180] for easier plotting
if values[2] > 300:
values[2] -= 360
station.append(station_id)
latitude.append(values[0])
latitude_error_m.append(values[1])
longitude.append(values[2])
longitude_error_m.append(values[3])
height_m.append(values[4])
height_error_m.append(values[5])
# Merge everything into a DataFrame.
data_position = pd.DataFrame(
data={
"station_id": station,
"latitude": latitude,
"longitude": longitude,
"height_m": height_m,
"latitude_error_m": latitude_error_m,
"longitude_error_m": longitude_error_m,
"height_error_m": height_error_m,
},
index=station,
)
data_position
station = []
velocity_up_mm_yr = []
velocity_up_error_mm_yr = []
with open(fname_velocity, encoding="latin-1") as input_file:
for i, line in enumerate(input_file):
if i < 15 or i > 303 or not line.strip():
continue
columns = line.split()
if len(columns) == 12:
station_id = columns[0]
values = columns[6:8]
else:
station_id = columns[0]
values = columns[5:7]
# Only interested in the stations that have horizontal
if station_id not in data_horizontal.station_id:
continue
# Skip repeated stations because it's easier this way
if station_id in station:
continue
station.append(station_id)
velocity_up_mm_yr.append(1e3 * float(values[0]))
velocity_up_error_mm_yr.append(1e3 * float(values[1]))
# Merge everything into a DataFrame.
data_vertical = pd.DataFrame(
data={
"station_id": station,
"velocity_up_mmyr": velocity_up_mm_yr,
"velocity_up_error_mmyr": velocity_up_error_mm_yr,
},
index=station,
)
data_vertical
###Output
_____no_output_____
###Markdown
Merge all of the DataFrames into a single one.
###Code
data = pd.merge(pd.merge(data_horizontal, data_vertical), data_position)
data
###Output
_____no_output_____
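###Markdown
As a quick sanity check on the station alignment described in the notes, we can compare the number of stations in each table before and after the merge (a minimal check, not part of the original processing; the exact counts depend on the downloaded files).
###Code
# Minimal alignment check: no horizontal-velocity station should be lost in the merge.
print(len(data_horizontal), len(data_vertical), len(data_position), len(data))
missing = set(data_horizontal.station_id) - set(data.station_id)
print("stations lost in the merge:", missing if missing else "none")
###Output
_____no_output_____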
###Markdown
Plot the data Make a quick plot to make sure the data look OK. This plot will be used as a preview of the dataset.
###Code
angle = np.degrees(np.arctan2(data.velocity_north_mmyr, data.velocity_east_mmyr))
length = np.hypot(data.velocity_north_mmyr, data.velocity_east_mmyr)
region = vd.pad_region(vd.get_region((data.longitude, data.latitude)), pad=1)
fig = pygmt.Figure()
with fig.subplot(
nrows=1,
ncols=2,
figsize=("35c", "15c"),
sharey="l", # shared y-axis on the left side
frame="WSrt",
):
with fig.set_panel(0):
fig.basemap(region=region, projection="M?", frame="af")
fig.coast(area_thresh=1e4, land="#eeeeee")
scale_factor = 2 / length.max()
fig.plot(
x=data.longitude,
y=data.latitude,
direction=[angle, length * scale_factor],
style="v0.2c+e",
color="blue",
pen="1.5p,blue",
)
# Plot a quiver caption
fig.plot(
x=-4,
y=42,
direction=[[0], [1 * scale_factor]],
style="v0.2c+e",
color="blue",
pen="1.5p,blue",
)
fig.text(
x=-4,
y=42.2,
text=f"1 mm/yr",
justify="BL",
font="10p,Helvetica,blue",
)
with fig.set_panel(1):
fig.basemap(region=region, projection="M?", frame="af")
fig.coast(area_thresh=1e4, land="#eeeeee")
pygmt.makecpt(cmap="polar", series=[data.velocity_up_mmyr.min(), data.velocity_up_mmyr.max()])
fig.plot(
x=data.longitude,
y=data.latitude,
color=data.velocity_up_mmyr,
style="c0.3c",
cmap=True,
pen="0.5p,black",
)
fig.colorbar(
frame='af+l"vertical velocity [mm/yr]"',
position="jTL+w7c/0.3c+h+o1/1",
)
fig.savefig("preview.jpg", dpi=200)
fig.show(width=1000)
###Output
_____no_output_____
###Markdown
This looks very similar to the plots in [Sánchez et al. (2018)](https://doi.org/10.5194/essd-10-1503-2018). ExportMake a separate DataFrame to export to a compressed CSV. The conversion is needed to specify the number of significant digits to preserve in the output. Setting this along with the LZMA compression can help reduce the file size considerably. Not all fields in the original data need to be exported.
###Code
export = pd.DataFrame({
"station_id": data.station_id,
"longitude": data.longitude.map(lambda x: "{:.7f}".format(x)),
"latitude": data.latitude.map(lambda x: "{:.7f}".format(x)),
"height_m": data.height_m.map(lambda x: "{:.3f}".format(x)),
"velocity_east_mmyr": data.velocity_east_mmyr.map(lambda x: "{:.1f}".format(x)),
"velocity_north_mmyr": data.velocity_north_mmyr.map(lambda x: "{:.1f}".format(x)),
"velocity_up_mmyr": data.velocity_up_mmyr.map(lambda x: "{:.1f}".format(x)),
"longitude_error_m": data.longitude_error_m.map(lambda x: "{:.4f}".format(x)),
"latitude_error_m": data.latitude_error_m.map(lambda x: "{:.4f}".format(x)),
"height_error_m": data.height_error_m.map(lambda x: "{:.3f}".format(x)),
"velocity_east_error_mmyr": data.velocity_east_error_mmyr.map(lambda x: "{:.1f}".format(x)),
"velocity_north_error_mmyr": data.velocity_north_error_mmyr.map(lambda x: "{:.1f}".format(x)),
"velocity_up_error_mmyr": data.velocity_up_error_mmyr.map(lambda x: "{:.1f}".format(x)),
})
export
###Output
_____no_output_____
###Markdown
Save the data to a file and calculate the size and MD5/SHA256 hashes.
###Code
output = "alps-gps-velocity.csv.xz"
export.to_csv(output, index=False)
print(f"file: {output}")
print(f"size: {os.path.getsize(output) / 1e6} Mb")
for alg in ["md5", "sha256"]:
print(f"{alg}:{pooch.file_hash(output, alg=alg)}")
###Output
file: alps-gps-velocity.csv.xz
size: 0.004544 Mb
md5:195ee3d88783ce01b6190c2af89f2b14
sha256:77f2907c2a019366e5f85de5aafcab2d0e90cc2c378171468a7705cab9938584
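###Markdown
To get a rough sense of how much the fixed-precision conversion plus LZMA compression helps, we can optionally write a throwaway uncompressed copy and compare the file sizes. The file name below is only used for this comparison and is removed afterwards.
###Code
# Optional size comparison (a sketch, not part of the dataset itself).
uncompressed = "alps-gps-velocity-uncompressed.csv"
export.to_csv(uncompressed, index=False)
print(f"uncompressed: {os.path.getsize(uncompressed) / 1e6} Mb")
print(f"compressed: {os.path.getsize(output) / 1e6} Mb")
os.remove(uncompressed)  # clean up the temporary file
###Output
_____no_output_____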
###Markdown
Read back the data and plot itVerify that the output didn't corrupt anything.
###Code
data_reloaded = pd.read_csv(output)
data_reloaded
###Output
_____no_output_____
###Markdown
Make the figure again but don't save it to a file this time.
###Code
projection = pyproj.Proj(proj="merc", lat_ts=data_reloaded.latitude.mean())
easting, northing = projection(data_reloaded.longitude, data_reloaded.latitude)
fig, axes = plt.subplots(
1, 2, figsize=(18, 6)
)
# Plot the horizontal velocity vectors
ax = axes[0]
ax.set_title("Horizontal velocities")
ax.quiver(
easting, northing,
data_reloaded.velocity_east_mmyr.values,
data_reloaded.velocity_north_mmyr.values,
scale=30,
width=0.002,
)
ax.set_aspect("equal")
ax.set_xlabel("easting (m)")
ax.set_ylabel("northing (m)")
# Plot the vertical velocity
ax = axes[1]
ax.set_title("Vertical velocity")
maxabs = vd.maxabs(data_reloaded.velocity_up_mmyr)
tmp = ax.scatter(
easting, northing,
c=data_reloaded.velocity_up_mmyr,
s=30,
vmin=-maxabs / 3,
vmax=maxabs / 3,
cmap="seismic",
)
plt.colorbar(tmp, ax=ax, label="mm/year", pad=0, aspect=50)
ax.set_aspect("equal")
ax.set_xlabel("easting (m)")
ax.set_ylabel("northing (m)")
plt.tight_layout(pad=0)
plt.show()
###Output
_____no_output_____
###Markdown
Training a model to count English syllables. Mostly adapted from https://www.kaggle.com/reppic/predicting-english-pronunciations
###Code
import json
import random
from collections import Counter, defaultdict
import numpy as np
import tensorflow as tf
from keras import callbacks, models, layers, optimizers
from matplotlib import pyplot as plt
from char_encoder import CharacterEncoder
CMUDICT_PATH = './syllable/cmudict/cmudict.dict'
word_map = {}
with open(CMUDICT_PATH) as f:
for line in f:
word, *pieces = line.split()
if '(' in word: # Alternate pronunciation
word = word[:word.index('(')]
syllables = sum(p[-1].isdigit() for p in pieces)
word_map.setdefault(word, []).append(syllables)
random.sample(list(word_map.items()), 10)
all_chars = ''.join(sorted({c for w in word_map for c in w}))
len(all_chars), all_chars
def plot_bar(some_dict, xlabel, ylabel, *, log=False, figsize=None, labelfunc=str):
fig, ax = plt.subplots(figsize=figsize)
ax.bar(some_dict.keys(), some_dict.values(), log=log)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xticks(range(max(some_dict.keys()) + 1))
for l, c in some_dict.items():
ax.text(l, c, labelfunc(c), ha='center', va='top', rotation='vertical', color='white')
plt.show()
word_len_distrib = Counter(len(w) for w in word_map)
plot_bar(word_len_distrib, 'Word length', 'Count', log=True, figsize=(11, 4))
syllables_distrib = Counter(c for cnts in word_map.values() for c in cnts)
plot_bar(syllables_distrib, 'Number of syllables', 'Count', log=True)
duplicates = 0
for cnts in word_map.values():
duplicates += len(cnts) - 1
duplicates, len(word_map), duplicates / len(word_map) * 100
# Decreasing the set of allowed chars and the max word len will decrease the size of the model.
EXCLUDED_CHARS = '.' # Abbreviations
ALLOWED_CHARS = [c for c in all_chars if c not in EXCLUDED_CHARS]
MAX_WORD_LEN = 18
word_list, syllable_list = [], []
for w, cnts in word_map.items():
if len(w) > MAX_WORD_LEN:
continue
if any(c in w for c in EXCLUDED_CHARS):
continue
word_list.append(w)
syllable_list.append(cnts[0]) # Drop alternate pronunciations ¯\_(ツ)_/¯
char_enc = CharacterEncoder(ALLOWED_CHARS)
x = np.array([char_enc.encode(w, MAX_WORD_LEN) for w in word_list])
y = np.array(syllable_list)
x.shape, y.shape
np.random.seed(42)
shuffled_idx = np.random.permutation(len(x))
x, y = x[shuffled_idx], y[shuffled_idx]
word_list = [word_list[i] for i in shuffled_idx]
split_at = len(x) - len(x) // 4
x_train, x_val = x[:split_at], x[split_at:]
y_train, y_val = y[:split_at], y[split_at:]
x_train.shape, x_val.shape, y_train.shape, y_val.shape
MODEL_DIR = './syllable/model_data/model'
CHARS_FILE = './syllable/model_data/chars.json'
def acc(y_true, y_pred):
"""Prediction accuracy"""
eq = tf.math.equal(y_true, tf.math.round(y_pred))
eq = tf.reshape(eq, [-1])
eq = tf.cast(eq, tf.int32)
return tf.math.divide(tf.math.reduce_sum(eq), tf.shape(y_true)[0])
model = models.Sequential()
model.add(layers.Input(shape=x_train.shape[1:]))
model.add(layers.Bidirectional(layers.GRU(16, return_sequences=True)))
model.add(layers.GRU(16))
model.add(layers.Dense(1))
opt = optimizers.Adam(learning_rate=0.001)
model.compile(loss='mse', optimizer=opt, metrics=[acc])
model.summary()
early_stop = callbacks.EarlyStopping(patience=3)
model.fit(x_train, y_train,
epochs=100,
validation_data=(x_val, y_val),
callbacks=[early_stop])
y_pred = model.predict(x)
acc_by_len, acc_by_syl = defaultdict(lambda: [0, 0]), defaultdict(lambda: [0, 0])
overall_ok = overall_total = 0
for w, y_i, y_pred_i in zip(word_list, y, y_pred):
y_pred_i = round(y_pred_i[0])
ok = y_i == y_pred_i
acc_by_len[len(w)][0] += ok
acc_by_len[len(w)][1] += 1
acc_by_syl[y_i][0] += ok
acc_by_syl[y_i][1] += 1
overall_ok += ok
overall_total += 1
acc_by_len = {x: y / z for x, (y, z) in acc_by_len.items()}
acc_by_syl = {x: y / z for x, (y, z) in acc_by_syl.items()}
fmt_percent = lambda x: f'{x * 100:.2f}%'
plot_bar(acc_by_len, 'Word length', 'Model accuracy', figsize=(11, 4), labelfunc=fmt_percent)
plot_bar(acc_by_syl, 'Number of syllables', 'Model accuracy', labelfunc=fmt_percent)
print('Overall accuracy:', fmt_percent(overall_ok / overall_total))
model.save(MODEL_DIR)
with open(CHARS_FILE, 'w') as f:
json.dump({'chars': ALLOWED_CHARS, 'maxlen': MAX_WORD_LEN}, f)
###Output
INFO:tensorflow:Assets written to: ./syllable/model_data/model/assets
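###Markdown
A minimal inference sketch, assuming the saved model is reloaded in the same environment: Keras needs the custom `acc` metric passed through `custom_objects`, and the stored character list and maximum word length are read back from the JSON file.
###Code
# Reload the saved model and predict syllable counts for a few example words (hedged sketch).
loaded = models.load_model(MODEL_DIR, custom_objects={'acc': acc})
with open(CHARS_FILE) as f:
    meta = json.load(f)
enc = CharacterEncoder(meta['chars'])
words = ['syllable', 'cat', 'university']
batch = np.array([enc.encode(w, meta['maxlen']) for w in words])
for w, p in zip(words, loaded.predict(batch)):
    print(w, int(round(float(p[0]))))
###Output
_____no_output_____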
###Markdown
Setup steps required only in the MyBinder environment- For each of the cells below, place the cursor in the cell and press Shift + Enter to run that cell's code.
###Code
pip install -U scikit-learn
pip install -U pydotplus
conda install python-graphviz
pip install -U matplotlib
pip install -U tensorflow
###Output
_____no_output_____
###Markdown
Setup steps required only in the MyBinder environment- For each of the cells below, place the cursor in the cell and press Shift + Enter to run that cell's code.
###Code
pip install -U scikit-learn
pip install -U pydotplus
conda install python-graphviz
###Output
_____no_output_____
###Markdown
Import data
###Code
data_id = 40701 # adult
data = prep.openmlwrapper(data_id=data_id, random_state=1, n_samples = 3000, verbose=True, scale=True, test_size=0.5)
###Output
Start preprocessing...
...Sampled 3000 samples from dataset 40701.
...Filled missing values.
...Decoded to original feature values.
...Scaled data.
Preprocessing done.
###Markdown
Split test data further* train: - true class known - used during training* test: - true class known - not used during training* application: - true class "unknown" - not used during training
###Code
data['X_test_pre'], data['X_test_post'], data['y_test_pre'], data['y_test_post'] = train_test_split(data['X_test'],
data['y_test'].reset_index(drop=True),
random_state=1,
test_size=0.5)
###Output
_____no_output_____
###Markdown
Train classifier
###Code
clf = RandomForestClassifier(n_estimators = 100, n_jobs=-2, random_state=1)
clf.fit(data['X_train'], np.array(data['y_train']).ravel())
print('Training Accuracy: %.2f' % clf.score(data['X_train'], data['y_train']))
print('Test Accuracy: %.2f' % clf.score(data['X_test_pre'], data['y_test_pre']))
print()
print('Application Accuracy: %.3f' % clf.score(data['X_test_post'], data['y_test_post']))
y_app_score = [i[1] for i in clf.predict_proba(data['X_test_post'])]
print('Application AUC: %.3f' % roc_auc_score(y_true=data['y_test_post']['class'].ravel(), y_score=y_app_score))
print('Application Brier: %.3f' % brier_score_loss(y_true=data['y_test_post']['class'].ravel(), y_prob=y_app_score))
###Output
Training Accuracy: 1.00
Test Accuracy: 0.96
Application Accuracy: 0.967
Application AUC: 0.964
Application Brier: 0.038
###Markdown
Create casebase and alerts* The case base consists of instances from the training dataset and test dataset.* The alert data consists of instances from the application dataset for which the model predicted a positive.
###Code
pre_indices = data['X_test_pre'].index
post_indices = data['X_test_post'].index
# Case Base
data['X_base'] = pd.concat([data['X_train'], data['X_test_pre']]).reset_index(drop=True)
data['y_base'] = pd.concat([data['y_train'], data['y_test_pre']]).reset_index(drop=True)
data['X_base_decoded'] = pd.concat([data['X_train_decoded'],
data['X_test_decoded'].reset_index(drop=True).iloc[pre_indices]]
).reset_index(drop=True)
# Alerts
y_test_post_pred = pd.DataFrame({'prediction' : clf.predict(data['X_test_post'])})
y_test_post_pred['index'] = data['y_test_post'].index
y_test_post_pred = y_test_post_pred.set_index('index')
alert_indices = y_test_post_pred[y_test_post_pred['prediction']==1].index
#alert_indices = y_test_post_pred.index
data['X_alert'] = data['X_test_post'].copy().loc[alert_indices].reset_index(drop=True)
data['y_alert'] = data['y_test_post'].copy().loc[alert_indices].reset_index(drop=True)
data['X_alert_decoded'] = data['X_test_decoded'].reset_index(drop=True).loc[alert_indices].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Retrieve metadata* Retrieve prediction probabilities (case base + alerts)* Retrieve historical performance (case base)
###Code
# Compute prediction probabilities
y_base_score = [i[1] for i in clf.predict_proba(data['X_base'])]
y_alert_score = [i[1] for i in clf.predict_proba(data['X_alert'])]
# Compute performance for cases in de case base
y_base_pred = clf.predict(data['X_base'])
base_performance = []
for pred, true in zip(y_base_pred, data['y_base'].values.ravel()):
if (pred==1) and (true==1):
base_performance.append('TP')
elif (pred==1) and (true==0):
base_performance.append('FP')
elif (pred==0) and (true==0):
base_performance.append('TN')
elif (pred==0) and (true==1):
base_performance.append('FN')
# gather metadata
meta_base = pd.DataFrame({'performance' : base_performance, 'score' : y_base_score})
meta_alert = pd.DataFrame({'score' : y_alert_score})
###Output
_____no_output_____
###Markdown
Compute SHAP
###Code
explainer = shap.TreeExplainer(clf)
SHAP_base = pd.DataFrame(explainer.shap_values(X=data['X_base'])[1], columns=list(data['X_base']))
SHAP_alert = pd.DataFrame(explainer.shap_values(X=data['X_alert'])[1], columns=list(data['X_alert']))
print('Explained.')
###Output
Explained.
###Markdown
Save files
###Code
# Set jobs to 1
clf.set_params(n_jobs=1)
# Save classifier
with open(os.getcwd() + '/data/clf.pickle', 'wb') as handle:
pickle.dump(clf, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Save case base
data['X_base'].to_csv(os.getcwd() + '/data/X_base.csv', index=False)
data['X_base_decoded'].to_csv(os.getcwd() + '/data/X_base_decoded.csv', index=False)
meta_base.to_csv(os.getcwd() + '/data/meta_base.csv', index=False)
SHAP_base.to_csv(os.getcwd() + '/data/SHAP_base.csv', index=False)
data['y_base'].to_csv(os.getcwd() + '/data/y_base.csv', index=False)
# Save alerts
data['X_alert'].to_csv(os.getcwd() + '/data/X_alert.csv', index=False)
data['X_alert_decoded'].to_csv(os.getcwd() + '/data/X_alert_decoded.csv', index=False)
meta_alert.to_csv(os.getcwd() + '/data/meta_alert.csv', index=False)
SHAP_alert.to_csv(os.getcwd() + '/data/SHAP_alert.csv', index=False)
data['y_alert'].to_csv(os.getcwd() + '/data/y_alert.csv', index=False)
# Save training data separately
data['X_train'].to_csv(os.getcwd() + '/data/X_train.csv', index=False)
print('Saved!')
###Output
Saved!
###Markdown
Add Policies to the Execution Role * In this sample code, we are going to use several AWS services. Therefore, we have to add policies to the notebook execution role. * Regarding roles and policies, please refer to documents [1](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) and [2](https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html)
###Code
import boto3
from sagemaker import get_execution_role
role_name = get_execution_role().split('/')[-1]
iam = boto3.client("iam")
print(role_name)
policy_arns = [
"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
"arn:aws:iam::aws:policy/AmazonTextractFullAccess"
]
for p in policy_arns:
iam.attach_role_policy(
RoleName = role_name,
PolicyArn = p
)
###Output
_____no_output_____
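###Markdown
To confirm the attachments took effect, we can list the policies currently attached to the role (the output depends on your account's configuration).
###Code
# Verify the attached policies (this simple call is enough for a handful of policies).
attached = iam.list_attached_role_policies(RoleName=role_name)
for p in attached["AttachedPolicies"]:
    print(p["PolicyArn"])
###Output
_____no_output_____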
###Markdown
Alternate Docker Storage Location * The Docker overlay directory usually occupies a large amount of disk space, so change its location to the EBS volume
###Code
%%writefile daemon.json
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-shm-size": "4096M"
}
%%bash
sudo service docker stop
sudo cp daemon.json /etc/docker/
mkdir ~/SageMaker/docker_disk
sudo mv /var/lib/docker ~/SageMaker/docker_disk/
sudo ln -s ~/SageMaker/docker_disk/docker/ /var/lib/
sudo service docker start
%%bash
cd ~/SageMaker
git clone https://github.com/aws-samples/amazon-textract-code-samples.git
wget -O Mmdetection.zip https://tinyurl.com/yfp7z4n6
wget -O icdar_table_cells_dataset.zip https://tinyurl.com/yftec3qv
unzip Mmdetection.zip
unzip icdar_table_cells_dataset.zip
cp Mmdetection/new_chunk_cascade_mask_rcnn_hrnetv2p_w32_20e/epoch_36.pth CascadeTabNet/
###Output
_____no_output_____
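###Markdown
A quick, hedged check that the Docker relocation above worked: /var/lib/docker should now be a symlink pointing into the SageMaker EBS volume.
###Code
# Check that /var/lib/docker is a symlink into ~/SageMaker/docker_disk.
import os
print(os.path.islink("/var/lib/docker"), os.path.realpath("/var/lib/docker"))
###Output
_____no_output_____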
###Markdown
PreparationThis notebook prepares the dataset for description generation:1. Copy one or more Excel files to the "dataset/raw" folder in the drive.2. Make sure that the first row of each file contains the column labels.3. Make sure all the tags (values) are already translated to English.4. Make sure the sheet titles are consistent with the value defined in the read_csv function (default: 'BatchImport').This notebook is compatible with raw data from the Griffati and Brandsdistribution catalogs. For other datasets and formats, minor changes may be needed.
###Code
# Mount the drive
from google.colab import drive
drive.mount('/content/drive')
!rm -r /content/drive/MyDrive/dataset/test
!rm -r /content/drive/MyDrive/dataset/ref
!rm -r /content/drive/MyDrive/dataset/gen
!mkdir /content/drive/MyDrive/dataset/test
!mkdir /content/drive/MyDrive/dataset/ref
!mkdir /content/drive/MyDrive/dataset/gen
import pandas as pd
from pandas import read_excel
from pathlib import Path
import numpy as np
import lxml.html
import string
import os
import re
def read_batch(read_dir,limit):
dfs = []
c=0
for path in os.listdir(read_dir):
full_path = os.path.join(read_dir, path)
if os.path.isfile(full_path):
dfs.append(read_csv(read_dir,path))
c+=1
print("read file #"+ str(c))
for i in range(len(dfs)):
dfs[i] = dfs[i].sample(min(limit,len(dfs[i].index)))
df = pd.concat(dfs)
#print(len(df.index))
df.reset_index(drop=True, inplace=True)
return df
def read_csv(path,file_name):
my_sheet = 'BatchImport'
#file_name = '02_batch_import_Dior.xlsx'
df = read_excel(Path(path,file_name), sheet_name = my_sheet,keep_default_na=False)
return clean(df)
def clean(df):
df["material"] = ""
df["description_en"] = df["description-en"]
#name and category were removed.
to_keep=["brand","code","madein","subcategory","season",
"color","bicolors","gender","neckline","neck_shirt","sleeves","pattern","fastening","sole","pockets","description_en","dimensions","material"
,"neck","sleeve"]
to_drop=[]
for col in df.columns:
if col not in to_keep:
to_drop.append(col)
df.drop(to_drop, inplace=True, axis=1)
df.drop(df[df.description_en==""].index, inplace=True)
df["description_en"] = df["description_en"].apply(erase_tags)
df = add_features(df)
return df
def erase_tags(st):
st = lxml.html.fromstring(st).text_content()
st = re.sub(r"(\w)([A-Z])", r"\1 \2", st)
return re.sub(r"\s+", " ",st)
def separate_words(st):
return re.sub(r"(\w)([A-Z])", r"\1 \2", st)
###Output
_____no_output_____
###Markdown
If more features need to be added, see https://github.com/niyoushanajmaei/product_description_process/blob/main/extract.py for the relevant functions.
###Code
def add_features(df):
to_delete=[]
materials = []
for index, row in df.iterrows():
# A version of the description, all low case, all punctuations removed
# The possible words are separated after removing the punctuations
desc_procs = row["description_en"].lower().translate(str.maketrans('', '', string.punctuation))
desc_procs = separate_words(desc_procs)
material = add_material(desc_procs)
materials.append(str(material))
if (material == "[]") :
to_delete.append(index)
#print(material)
df["material"] = materials
#print(df.material.to_string(index = False))
print(f"deleted {len(to_delete)} rows. remaining: {len(df.index)}")
return df
def add_material(desc):
all_materials = ["canvas","cashmere","chenille","chiffon","cotton","crêpe","crepe","damask","georgette","gingham","jersey",
"lace","leather","linen","wool","modal","muslin","organza","polyester","satin","silk","spandex","suede","taffeta",
"toile","tweed","twill","velvet","viscose","synthetic matrials"]
materials = []
for m in all_materials:
if m in desc:
materials.append(m)
return materials
def clean_txt(st):
st = st.replace('"','')
st = st.replace('[','')
st = st.replace(']','')
st = st.replace("'",'')
st = st.strip()
return st
def write(df,write_dir):
path = write_dir + "test/"
ref_path = write_dir + "ref/"
c=0
#print(df.material.to_string(index = False))
data= df.to_dict('index')
#write the test set with and without the lables to have a reference
for k,value in data.items():
value = {k:v for k,v in value.items() if str(v)!= '' and str(v).strip() != '' and str(v)!='nan' and str(v)!='null' and str(v)!= '[]'}
write_dict(value,ref_path+"product"+str(c)+".txt","n")
c+=1
c=0
for k,value in data.items():
value = {k:v for k,v in value.items() if str(v)!= '' and str(v).strip() != '' and str(v)!='nan' and str(v)!='null'and str(v)!= '[]'}
write_dict(value,path+"product"+str(c)+".txt","t")
c+=1
print("writing successful")
# writes the file with format:
# when type in "n" for normal
# {"tag1" : "value1", "tag2": "value2", ....} \n description: "description_en" \n ### \n
# when type is "t" for test
# {"tag1" : "value1", "tag2": "value2", ....} \n description:
def write_dict(dict, path, type):
desc = dict.pop("description_en", None)
code = dict.pop("code",None)
with open(path, 'w') as f:
txt = ""
if type == "n":
txt += f"code: {code}\n"
txt += f"features: {str(dict)} \ndescription: "
txt = clean_txt(txt)
if type != 'n':
print(txt,file =f,end = '')
if type == "n":
txt += desc + "\n###\n"
print(txt,file =f)
#make an empty checkpoint file used in the generation notebook
def ckpt_file(dir):
with open(dir+"checkpoint.txt","w") as f:
pass
###Output
_____no_output_____
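###Markdown
A small, hedged demo of the file format produced by write_dict, using a made-up product dict and throwaway paths (the real runs below write into the drive folders).
###Code
# Toy example of both output flavours: "n" writes the reference file (with description and ###),
# "t" writes the prompt-only test file. The dict and paths here are purely illustrative.
_demo = {'brand': 'acme', 'code': 'A1', 'color': 'red', 'description_en': 'A red demo jacket.'}
write_dict(dict(_demo), '/tmp/product_demo_ref.txt', 'n')
write_dict(dict(_demo), '/tmp/product_demo_test.txt', 't')
print(open('/tmp/product_demo_ref.txt').read())
print(open('/tmp/product_demo_test.txt').read())
###Output
_____no_output_____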
###Markdown
Change the limit on the number of products sampled from each Excel file if needed
###Code
!rm -r /content/drive/MyDrive/dataset/ref/
!rm -r /content/drive/MyDrive/dataset/test/
!mkdir /content/drive/MyDrive/dataset/ref/
!mkdir /content/drive/MyDrive/dataset/test/
limit = 1000
read_dir = "/content/drive/MyDrive/dataset/raw/"
write_dir = "/content/drive/MyDrive/dataset/"
write(read_batch(read_dir,limit),write_dir)
ckpt_file(write_dir)
###Output
_____no_output_____
###Markdown
Add Policies to the Execution Role * In this sample code, we are going to use several AWS services. Therefore, we have to add policies to the notebook execution role. * Regarding roles and policies, please refer to documents [1](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) and [2](https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html)
###Code
import boto3
from sagemaker import get_execution_role
role_name = get_execution_role().split('/')[-1]
iam = boto3.client("iam")
print(role_name)
policy_arns = ["arn:aws:iam::aws:policy/AmazonSQSFullAccess",
"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
"arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator",
"arn:aws:iam::aws:policy/AmazonSNSFullAccess",
"arn:aws:iam::aws:policy/AmazonEventBridgeFullAccess",
"arn:aws:iam::aws:policy/AWSLambda_FullAccess"]
for p in policy_arns:
iam.attach_role_policy(
RoleName = role_name,
PolicyArn = p
)
###Output
_____no_output_____
###Markdown
Alternate Docker Storage Location * The Docker overlay directory usually occupies a large amount of disk space, so change its location to the EBS volume
###Code
%%bash
sudo service docker stop
mkdir ~/SageMaker/docker_disk
sudo mv /var/lib/docker ~/SageMaker/docker_disk/
sudo ln -s ~/SageMaker/docker_disk/docker/ /var/lib/
sudo service docker start
###Output
_____no_output_____
###Markdown
Add Policies to the Execution Role * In this sample code, we are going to use several AWS services. Therefore, we have to add policies to the notebook execution role. * Regarding roles and policies, please refer to documents [1](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) and [2](https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html)
###Code
import boto3
from sagemaker import get_execution_role
role_name = get_execution_role().split('/')[-1]
iam = boto3.client("iam")
print(role_name)
policy_arns = ["arn:aws:iam::aws:policy/AmazonSQSFullAccess",
"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
"arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator",
"arn:aws:iam::aws:policy/AmazonSNSFullAccess",
"arn:aws:iam::aws:policy/AmazonEventBridgeFullAccess",
"arn:aws:iam::aws:policy/AWSLambda_FullAccess"]
for p in policy_arns:
iam.attach_role_policy(
RoleName = role_name,
PolicyArn = p
)
###Output
AmazonSageMaker-ExecutionRole-20210702T211675
###Markdown
Alternate Docker Storage Location * The Docker overlay directory usually occupies a large amount of disk space, so change its location to the EBS volume
###Code
!rm -r /home/ec2-user/SageMaker/docker_disk/docker
!sudo rm -r /home/ec2-user/SageMaker/docker_disk/
!docker ps -a
%%bash
sudo service docker stop
mkdir ~/SageMaker/docker_disk
sudo mv /var/lib/docker ~/SageMaker/docker_disk/
sudo ln -s ~/SageMaker/docker_disk/docker/ /var/lib/
sudo service docker start
!sudo docker system prune -a -f
!df
!df
!df
base_img='763104351884.dkr.ecr.'$region'.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu110-ubuntu18.04'
echo 'base_img:'$base_img
docker pull $base_img
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} -f Dockerfile --build-arg BASE_IMG="${base_img}" . --no-cache
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
!curl -X POST -H 'content-type: application/octet-stream' \
-H 'x-api-key: 0B22878B03FE197EF8D6' \
--data-binary @./Final_Training_Dataset/train/train_00001.wav \
'https:///rbrdok3cva.execute-api.us-west-2.amazonaws.com/dev/classify'
from datetime import datetime, timedelta, timezone
print(datetime.now(timezone(timedelta(hours=8))))
!curl -X POST -H 'content-type: application/octet-stream' \
-H 'x-api-key: 0B22878B03FE197EF8D6' \
--data-binary @./Final_Training_Dataset/train/train_00001.wav \
'https:///rbrdok3cva.execute-api.us-west-2.amazonaws.com/dev/classify'
print(datetime.now(timezone(timedelta(hours=8))))
###Output
2021-07-16 15:39:49.462996+08:00
Warning: Couldn't read data from file
Warning: "./Final_Training_Dataset/train/train_00001.wav", this makes an empty
Warning: POST.
{"errorMessage": "An error occurred (ValidationError) when calling the InvokeEndpoint operation: 1 validation error detected: Value at 'body' failed to satisfy constraint: Member must not be null", "errorType": "ValidationError", "stackTrace": [" File \"/var/task/lambda_function.py\", line 26, in lambda_handler\n Body=payload)\n", " File \"/var/runtime/botocore/client.py\", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n", " File \"/var/runtime/botocore/client.py\", line 676, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"]}2021-07-16 15:39:50.839498+08:00
###Markdown
The end result of this exercise should be a file named prepare.py. Using your store items data:
###Code
df = get_complete_data()
df.head()
# df holds the combined store-item sales data; the steps below clean and enrich it
###Output
_____no_output_____
###Markdown
Convert date column to datetime format.
###Code
# Lets convert the 'Date' column in our df to pandas datetime object using pd.to_datetime()
df.sale_date = pd.to_datetime(df.sale_date)
df.sale_date
###Output
_____no_output_____
###Markdown
Plot the distribution of sale_amount and item_price.
###Code
# y= sale_amount, x = item_price
plt.scatter(y=df.sale_amount, x=df.item_price)
plt.title('Sale Amount vs. Item Price')
plt.xlabel('Item Price - Cost to Buy')
plt.ylabel('Sale Amount - Items Sold')
###Output
_____no_output_____
###Markdown
Set the index to be the datetime variable.
###Code
df = df.set_index("sale_date").sort_index()
###Output
_____no_output_____
###Markdown
Add a 'month' and 'day of week' column to your dataframe.
###Code
df['month'] = df.index.month
df['weekday'] = df.index.day_name()
df.head(1)
###Output
_____no_output_____
###Markdown
Add a column to your dataframe, sales_total, which is a derived from sale_amount (total items) and item_price.
###Code
df['sales_total'] = df.sale_amount * df.item_price
df.head(1)
###Output
_____no_output_____
###Markdown
Using the OPS data acquired in the Acquire exercises opsd_germany_daily.csv, complete the following: Convert date column to datetime format.
###Code
ops_df = get_germany_data()
ops_df.head()
ops_df.info()
ops_df.Date = pd.to_datetime(ops_df.Date)
ops_df.head()
###Output
_____no_output_____
###Markdown
Plot the distribution of each of your variables.
###Code
ops_df.hist()
sns.pairplot(ops_df)
###Output
_____no_output_____
###Markdown
Set the index to be the datetime variable.
###Code
ops_df = ops_df.set_index("Date").sort_index()
###Output
_____no_output_____
###Markdown
Add a month and a year column to your dataframe.
###Code
ops_df['month'] = ops_df.index.month
ops_df['year'] = ops_df.index.year
ops_df.head()
###Output
_____no_output_____
###Markdown
Fill any missing values.
###Code
ops_df = ops_df.fillna(0)
ops_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Make sure all the work that you have done above is reproducible. That is, you should put the code above into separate functions and be able to re-run the functions and get the same results.
###Code
def set_index(df, date_col):
    df[date_col] = pd.to_datetime(df[date_col])
    df = df.set_index(date_col).sort_index()
    return df
def visualize(df, x, y, title):
    plt.scatter(x=x, y=y)
    plt.title(title)
    plt.show()
    sns.pairplot(df)
def sales_total(df):
    df['sales_total'] = df.sale_amount * df.item_price
    return df
###Output
_____no_output_____
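###Markdown
A short usage sketch of the helper functions above, re-running the store-items preparation end to end (assumes `get_complete_data` is the same acquire helper used earlier in this notebook).
###Code
# Reproduce the earlier preparation steps with the helper functions (sketch).
df2 = get_complete_data()
df2 = set_index(df2, 'sale_date')
df2 = sales_total(df2)
df2['month'] = df2.index.month
df2['weekday'] = df2.index.day_name()
df2.head(1)
###Output
_____no_output_____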
###Markdown
CHANGEME: Location - Data typeCHANGEME: A few sentences about the data and links to the original data providers.**Source:** CHANGEME**License:** CHANGEME (include a link to where the license is stated if possible) NotesCHANGEME: Any relevant notes about this dataset, such as data format, original coordinate systems, or anything else that's relevant.
###Code
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import verde as vd
import pooch
###Output
_____no_output_____
###Markdown
Download the dataUse [Pooch](https://github.com/fatiando/pooch) to download the original data file to our computer.
###Code
fname = pooch.retrieve(
url="https://gml.noaa.gov/webdata/ccgg/trends/co2/co2_mm_mlo.txt",
known_hash="sha256:01b7df0113305e207c25df77e98ee9bd4477d0ba127b9b78594dd720f5f973ed",
)
print(f"size: {os.path.getsize(fname) / 1e6} Mb")
###Output
size: 0.056427 Mb
###Markdown
Read the dataUse pandas to read the data.
###Code
data = pd.read_csv(
fname,
comment="#",
delim_whitespace=True,
names="year month year_decimal monthly_average deseasonalized number_of_days std uncertainty".split(),
)
data
###Output
_____no_output_____
###Markdown
Plot the data Make a quick plot to make sure the data look OK. This plot will be used as a preview of the dataset.
###Code
plt.figure()
plt.plot(data.year_decimal, data.monthly_average)
plt.grid()
plt.xlabel("Year")
plt.ylabel("Monthly mean CO2 at Mauna Loa")
plt.savefig("preview.jpg")
plt.show()
###Output
_____no_output_____
###Markdown
ExportMake a separate DataFrame to export to a compressed CSV. The conversion is needed to specify the number of significant digits to preserve in the output. Setting this along with the LZMA compression can help reduce the file size considerably. Not all fields in the original data need to be exported.
###Code
export = pd.DataFrame({
"year_decimal": data.year_decimal.map(lambda x: "{:.4f}".format(x)),
"monthly_average": data.monthly_average.map(lambda x: "{:.2f}".format(x)),
})
export
###Output
_____no_output_____
###Markdown
Save the data to a file and calculate the size and MD5/SHA256 hashes.
###Code
output = "mauna-loa-co2.csv.xz"
export.to_csv(output, index=False)
print(f"file: {output}")
print(f"size: {os.path.getsize(output) / 1e6} Mb")
for alg in ["md5", "sha256"]:
print(f"{alg}:{pooch.file_hash(output, alg=alg)}")
###Output
file: mauna-loa-co2.csv.xz
size: 0.002132 Mb
md5:7095047376c4983cca627e52aa5b28de
sha256:10fa809c3d2e27764543298b3677a727ac833b15c5ba1830e481e5df9a341d78
###Markdown
Read back the data and plot itVerify that the output didn't corrupt anything.
###Code
data_reloaded = pd.read_csv(output)
data_reloaded
###Output
_____no_output_____
###Markdown
Make the figure again but don't save it to a file this time.
###Code
plt.figure()
plt.plot(data_reloaded.year_decimal, data_reloaded.monthly_average)
plt.grid()
plt.xlabel("Year")
plt.ylabel("Monthly mean CO2 at Mauna Loa")
plt.show()
###Output
_____no_output_____
###Markdown
Load data
###Code
wordembedding = np.load('./origin_data/vec.npy')
test_word = np.load('./data/testall_word.npy')
###Output
_____no_output_____
###Markdown
Add entity tag
###Code
def _add_entity_tag(row):
token_sen = row['sen'].split()
out_token_sen = copy.deepcopy(token_sen)
update_list_e1 = []
update_list_e2 = []
for i, j in enumerate(token_sen):
if j == row['e1']:
tmp = i+len(update_list_e1)+len(update_list_e2)
out_token_sen.insert(tmp, '<e1>')
out_token_sen.insert(tmp+2, '</e1>')
update_list_e1.append(tmp)
update_list_e1.append(tmp+2)
if j == row['e2']:
tmp = i+len(update_list_e1)+len(update_list_e2)
update_list_e2.append(tmp)
update_list_e2.append(tmp+2)
out_token_sen.insert(tmp, '<e2>')
out_token_sen.insert(tmp+2, '</e2>')
temp_row = copy.deepcopy(row)
temp_row['sen'] = ' '.join(out_token_sen)
return ' '.join(out_token_sen), temp_row
# Function verification
print(_add_entity_tag(full_data.iloc[0]))
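# Self-contained toy check (a made-up row with the same columns as the raw data),
# so the tagging logic can be verified without loading a full file.
_toy_row = pd.Series({'e1': 'john', 'e2': 'london', 'rel': 'lives_in', 'sen': 'john moved to london last year'})
print(_add_entity_tag(_toy_row)[0])  # expect <e1>/<e2> tags inserted around both entities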
def prepare_bert_data(dataPath):
full_data = pd.read_csv(dataPath, header=None, sep='\t').iloc[:, 2:]
full_data.columns = ['e1', 'e2', 'rel', 'sen']
tagged_sen = []
row_list = []
with tqdm(total=len(full_data)) as pbar:
for _, row in full_data.iterrows():
temp_sen, temp_row = _add_entity_tag(row)
tagged_sen.append(temp_sen)
if len(temp_row['sen'].split())<512:
row_list.append(temp_row)
pbar.update(1)
full_data.drop(columns='sen')
full_data['seq'] = tagged_sen
full_data = full_data.fillna(value='UNK')
cleaned_df = pd.DataFrame(row_list)
cleaned_df = cleaned_df.fillna(value='UNK')
cleaned_df = cleaned_df.iloc[:, 2:]
cleaned_df.to_csv(dataPath[:-4]+'_filtered.txt', index=False, sep='\t')
full_data.to_csv(dataPath[:-4]+'_bert.txt', index=False, sep='\t')
def _clean_text(dataPath):
output = []
with open(dataPath, 'r') as origin_file:
baselen = 0
n_line = 1
for line in origin_file.readlines():
line = line.strip()
token = line.split('\t')
if baselen == 0:
baselen = len(token)
else:
if len(token) != baselen:
print(token)
print(n_line)
n_line += 1
temp = '\t'.join(token[:6])+'\n'
output.append(temp)
os.rename(dataPath, dataPath[:-4]+'_original.txt')
with open(dataPath, 'w') as outfile:
outfile.writelines(output)
prepare_bert_data('./origin_data/test.txt')
tagged_sen = []
with tqdm(total=len(full_data)) as pbar:
for _, row in full_data.iterrows():
tagged_sen.append(_add_entity_tag(row))
pbar.update(1)
print(full_data.iloc[0]['tagged'])
def convert_filter(dataPath):
df = pd.read_csv(dataPath, sep='\t', header=None)
df.columns=['labels', 'text']
# df.to_json(dataPath[:-3]+'json', orient='records')
df.to_json(dataPath[:-3]+'json')
convert_filter('./origin_data/train_filtered.txt')
def filter_long(path):
df = pd.read_csv(path, header=None, sep='\t')
temp = []
for _, row in df.iterrows():
token = row.iloc[-1].split()
if len(token)<480:
temp.append(row)
else:
print(len(token))
print(len(temp))
print(len(df))
filter_long('./origin_data/test.txt')
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
def convert_label(path):
df = pd.read_csv(path, header=None, sep='\t')
if not hasattr(le, 'classes_'):
le.fit(df.iloc[:, 0])
df.iloc[:, 0] = le.transform(df.iloc[:, 0])
df.to_csv(path, header=False, index=False, sep='\t')
convert_label('./origin_data/train_filtered.txt')
print(hasattr(le, 'classes_'))
convert_label('./origin_data/test_filtered.txt')
from transformers import BertTokenizer
pretrain_model = "bert-base-uncased"
additional_special_tokens = ['<e1>', '</e1>', '<e2>', '</e2>']
tokenizer = BertTokenizer.from_pretrained(pretrain_model, do_lower_case=True, additional_special_tokens = additional_special_tokens)
def tokenization(tokenizer, row):
'''
Tokenize the sentences from filter data
'''
sentence = '[CLS] '+row.iloc[-1]+' [SEP]'
token = tokenizer.tokenize(sentence)
return len(token)
def bert_token_filter(path):
original_data = pd.read_csv(path, sep='\t', header=None)
temp_row = []
for _, row in tqdm(original_data.iterrows()):
token_len = tokenization(tokenizer, row)
if token_len>512:
print(token_len)
else:
row.iloc[-1] = '[CLS] '+row.iloc[-1]+' [SEP]'
temp_row.append(row)
out_df = pd.DataFrame(temp_row)
out_df.to_csv(path[:-4]+'_bf.txt', header=False, index=False, sep='\t')
data_path = [
'./origin_data/train_filtered.txt',
'./origin_data/test_filtered.txt'
]
for i in data_path:
bert_token_filter(i)
original_data
###Output
_____no_output_____ |
credit_default/notebooks/modelling/cat_boost.ipynb | ###Markdown
Modelling Pipeline- import the data- replace null values- separate categorical and numerical- convert target to binary- run PCA on numerical- one-hot encoding on categorical- train test separation Imports
###Code
%load_ext autoreload
%autoreload 2
import os,sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support
from catboost import CatBoostRegressor, CatBoostClassifier
import pickle
import numpy as np
from etl.null_value_replacer import NullValueReplacer
import math
###Output
_____no_output_____
###Markdown
Load Data
###Code
train_data = pd.read_csv("../data/loan-default-prediction/train_v2.csv")
null_value_replacer = NullValueReplacer("median")
train_data = null_value_replacer.fit_transform(train_data)
df_data_types = train_data.dtypes
cat_var = [key for key in dict(df_data_types)
if dict(df_data_types)[key] in ['object']]
train_data.drop(columns=cat_var, inplace=True)
def get_columns_with_distinct_values(df, column_subset):
groups = []
redundant_columns = []
for i in range(len(column_subset)):
col1 = column_subset[i]
if col1 in redundant_columns:
continue
same_columns = [col1]
for j in range(i, len(column_subset)):
col2 = column_subset[j]
if col1 == col2:
continue
if (df[col1]-df[col2]).sum() == 0:
same_columns += [col2]
redundant_columns += [col2]
groups+=[same_columns]
return [i[0] for i in groups]
columns_to_use = get_columns_with_distinct_values(train_data, train_data.columns.values)
len(columns_to_use)
def resample_and_split(df, ratio=0.7):
lossless_data = df[df["loss"]==0]
lossless_data_indices = np.random.permutation(lossless_data.index.values)
lossless_data_split_index = math.floor(len(lossless_data_indices)*ratio)
loss_data = df[df["loss"] >0 ]
loss_data_indices = np.random.permutation(loss_data.index.values)
loss_data_split_index = math.floor(len(loss_data_indices)*ratio)
test_data = pd.concat(
[
lossless_data.loc[lossless_data_indices[lossless_data_split_index:]],
loss_data.loc[loss_data_indices[loss_data_split_index:]]
]
).sample(frac=1).reset_index(drop=True)
loss_train_data = loss_data.loc[loss_data_indices[:loss_data_split_index]]
train_data = []
NUMBER_OF_TRAIN_PARTITIONS = 7
for i in range(0, NUMBER_OF_TRAIN_PARTITIONS):
start_index = i * math.floor(lossless_data_split_index/NUMBER_OF_TRAIN_PARTITIONS)
end_index = (i + 1) * math.floor(lossless_data_split_index/NUMBER_OF_TRAIN_PARTITIONS)
train_data += [
pd.concat(
[
lossless_data.loc[lossless_data_indices[start_index: end_index]],
loss_train_data
]
).sample(frac=1).reset_index(drop=True)
]
return train_data, test_data
train_all, test_all = resample_and_split(train_data[columns_to_use])
###Output
_____no_output_____
###Markdown
Catboost Classifier
###Code
def train_stack_of_classifiers(list_of_df):
classifiers = []
for df in list_of_df:
X = df.drop(columns=["id", "loss"])
y = df["loss"].astype("bool").astype("int")
cat_boost_classifier = CatBoostClassifier(iterations=100, cat_features=["f776", "f777", "f725", "f2", "f5", "f73", "f403"])
cat_boost_classifier.fit(
X,
y=y.values.reshape(-1),
plot=False
)
classifiers += [cat_boost_classifier]
return classifiers
trained_classifiers = train_stack_of_classifiers(train_all)
X_test = test_all.drop(columns=["id", "loss"])
y_test = test_all["loss"].astype("bool").astype("int")
cat_boost_predictions = [i.predict(X_test) for i in trained_classifiers]
joined_prob = pd.DataFrame(data=cat_boost_predictions).agg(sum)/len(cat_boost_predictions)
np.around(joined_prob)
pre_recall_catboost= precision_recall_fscore_support(y_test, np.around(joined_prob))
pre_recall_catboost
###Output
_____no_output_____ |
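###Markdown
The ensemble above averages hard 0/1 class predictions, which amounts to a majority vote. A hedged alternative sketch: average the class-1 probabilities from predict_proba and threshold once at the end, which usually gives a smoother ensemble score.
###Code
# Soft-voting variant (sketch): average P(loss > 0) across the stacked classifiers.
proba = np.mean([c.predict_proba(X_test)[:, 1] for c in trained_classifiers], axis=0)
soft_pred = (proba >= 0.5).astype(int)
precision_recall_fscore_support(y_test, soft_pred)
###Output
_____no_output_____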
notebooks/05.2-EncConv.ipynb | ###Markdown
Load Our Experiments- Lg Feedforward (2019-06-03) - (3000,2000,500,70)- Sm Feedforward (2019-05-24) - (3000,2000,500,15)- Convolutional
###Code
# lg_ff = _DateExperimentLoader('2019-06-25')
lg_ff = _DateExperimentLoader('2019-06-03')
# sm_ff = _DateExperimentLoader('2019-05-24')
lg_ff.load()
lg_xent = lg_ff.assemblies[0]
lg_both = lg_ff.assemblies[1]
lg_recon = lg_ff.assemblies[2]
lg_xent
from brainscore.assemblies import split_assembly
from sklearn.linear_model import LinearRegression,Ridge
alphas = tuple(np.logspace(-2,2,num=10))
est = RidgeCV(alphas=alphas,store_cv_values=True)
tr,te = split_assembly(med_data.sel(region='IT'))
est.fit(tr.values,y=tr['tz'])
print(est.alpha_)
est.cv_values_.mean(axis=0)
sns.kdeplot(med_data.ty*8,med_data.tz*8)
def SUCorrelation(da,neuroid_coord,correlation_vars,exclude_zeros=True):
if exclude_zeros:
nz_neuroids = da.groupby(neuroid_coord).sum('presentation').values!=0
da = da[:,nz_neuroids]
correlations = np.empty((len(da[neuroid_coord]),len(correlation_vars)))
for i,nid in tqdm(enumerate(da[neuroid_coord].values),total=len(da[neuroid_coord])):
for j,prop in enumerate(correlation_vars):
n_act = da.sel(**{neuroid_coord:nid}).squeeze()
r,p = pearsonr(n_act,prop)
correlations[i,j] = np.abs(r)
neuroid_dim = da[neuroid_coord].dims
c = {coord: (dims, values) for coord, dims, values in walk_coords(da) if dims == neuroid_dim}
c['task']=('task',[v.name for v in correlation_vars])
# print(neuroid_dim)
result = Score(correlations,
coords=c,
dims=('neuroid','task'))
return result
def result_to_df(SUC,corr_var_labels):
df = SUC.neuroid.to_dataframe().reset_index()
for label in corr_var_labels:
df[label]=SUC.sel(task=label).values
return df
class MURegressor(object):
def __init__(self,da,train_frac=0.8,n_splits=5,n_units=None,estimator=Ridge):
if n_units is not None:
self.neuroid_idxs = [np.array([random.randrange(len(da.neuroid_id)) for _ in range(n_units)]) for _ in range(n_splits)]
self.original_data = da
self.train_frac = train_frac
self.n_splits = n_splits
splits = [split_assembly(self.original_data[:,n_idxs]) for n_idxs in tqdm(self.neuroid_idxs,total=n_splits,desc='CV-splitting')]
self.train = [tr for tr,te in splits]
self.test = [te for tr,te in splits]
self.estimators = [estimator() for _ in range(n_splits)]
def fit(self,y_coord):
# Get Training data
for mod,train in tqdm(zip(self.estimators,self.train),total=len(self.train),desc='fitting'):
# print(train)
mod.fit(X=train.values,y=train[y_coord])
return self
def predict(self,X=None):
if X is not None:
return [e.predict(X) for e in self.estimators]
else:
return [e.predict(te.values) for e,te in zip(self.estimators,self.test)]
def score(self,y_coord):
return [e.score(te.values,te[y_coord].values) for e,te in zip(self.estimators,self.test)]
def stratified_regressors(data, filt='region',n_units=126,y_coords=['ty','tz'],task_names=None,estimator=Ridge):
subsets = np.unique(data[filt].values)
if task_names is None:
task_names = y_coords
dfs = []
for y,task in zip(y_coords,task_names):
print('regressing {}...'.format(y))
regressors = {k:MURegressor(data.sel(**{filt:k}),n_units=n_units,estimator=Ridge).fit(y_coord=y) for k in subsets}
df = pd.DataFrame.from_records({k:v.score(y_coord=y) for k,v in regressors.items()})
df = df.melt(var_name='region',value_name='performance')
df['task']=task
dfs.append(df)
return pd.concat(dfs)
hi_df = stratified_regressors(hi_data,y_coords=['ty','tz','rxy'],n_units=100,
# task_names=['tx','ty','rxy'],
estimator=RidgeCV)
med_df = stratified_regressors(med_data, y_coords=['ty','tz','rxy'],n_units=100,
# task_names=['tx','ty','rxy'],
estimator=RidgeCV)
sns.barplot(x='task',y='performance',hue='region',hue_order=['V4','IT'],data=med_df)
sns.barplot(x='task',y='performance',hue='region',hue_order=['V4','IT'],data=hi_df)
lg_both_top = lg_both[:,lg_both.layer.isin([2,3,4])]
both_df = stratified_regressors(lg_both,filt='layer',y_coords=['tx','ty','rxy'],n_units=50)
# lg_xent_top = lg_xent[:,lg_xent.layer.isin([2,3,4])]
xent_df = stratified_regressors(lg_xent,filt='layer',y_coords=['tx','ty','rxy'],n_units=50)
both_df.head()
sns.boxplot(x='task',y='performance',hue='region',data=both_df)
sns.boxplot(x='task',y='performance',hue='region',data=xent_df)
both_regressors
med_v4_MUR.score(y_coord='ty')
[(tr.shape,te.shape) for tr,te in med_MUR_dicarlo.splits]
[n for n in med_MUR_dicarlo.neuroid_idxs]
properties = ['tx','ty',
# 'rxy',
]
corr_vars_both = [pd.Series(lg_both[v].values,name=v) for v in ['tx','ty']]
corr_both = SUCorrelation(lg_both,neuroid_coord='neuroid_id',correlation_vars=corr_vars_both)
corr_vars_xent = [pd.Series(lg_xent[v].values,name=v) for v in ['tx','ty']]
corr_xent = SUCorrelation(lg_xent,neuroid_coord='neuroid_id',correlation_vars=corr_vars_xent)
corr_vars_recon = [pd.Series(lg_recon[v].values,name=v) for v in properties]
corr_recon = SUCorrelation(lg_recon,neuroid_coord='neuroid_id',correlation_vars=corr_vars_recon)
dicarlo_hi_corr_vars = [
pd.Series(hi_data['ty'],name='tx'),
pd.Series(hi_data['tz'],name='ty'),
pd.Series(hi_data['rxy'],name='rxy'),
]
corr_dicarlo_hi = SUCorrelation(hi_data,neuroid_coord='neuroid_id',correlation_vars=dicarlo_hi_corr_vars,exclude_zeros=True)
dicarlo_med_corr_vars = [
pd.Series(med_data['ty'],name='tx'),
pd.Series(med_data['tz'],name='ty'),
pd.Series(med_data['rxy'],name='rxy'),
]
corr_dicarlo_med = SUCorrelation(med_data,neuroid_coord='neuroid_id',correlation_vars=dicarlo_med_corr_vars,exclude_zeros=True)
# dicarlo_lo_corr_vars = [
# pd.Series(lo_data['ty'],name='tx'),
# pd.Series(lo_data['tz'],name='ty'),
# ]
# corr_dicarlo_lo = SUCorrelation(lo_data,neuroid_coord='neuroid_id',correlation_vars=dicarlo_lo_corr_vars,exclude_zeros=True)
dicarlo_med_df = result_to_df(corr_dicarlo_med,['tx','ty','rxy'])
dicarlo_med_df['variation']=3
dicarlo_hi_df = result_to_df(corr_dicarlo_hi,['tx','ty','rxy'])
dicarlo_hi_df['variation']=6
# dicarlo_lo_df = result_to_df(corr_dicarlo_lo,['tx','ty'])
# dicarlo_lo_df['variation']=0
# dicarlo_lo_df['norm_ty'] = dicarlo_lo_df['ty']
# dicarlo_df = pd.concat([dicarlo_hi_df,dicarlo_med_df])
# dicarlo_df['norm_ty'] = dicarlo_df['ty']/2
# dicarlo_df = pd.concat([dicarlo_df,dicarlo_lo_df])
both_df = result_to_df(corr_both,['tx','ty'])
both_df['norm_ty'] = both_df.ty
xent_df = result_to_df(corr_xent,['tx','ty'])
xent_df['norm_ty'] = xent_df.ty
recon_df = result_to_df(corr_recon,['tx','ty'])
recon_df['norm_ty'] = recon_df.ty
def plot_kde(x,y,df,by='region',order=None):
if order is not None:
subsets = order
else:
subsets = df[by].drop_duplicates().values
plot_scale = 5
fig,axs = plt.subplots(1,len(subsets),figsize=(plot_scale*len(subsets),plot_scale),sharex=True,sharey=True,
subplot_kw={
'xlim':(0.0,0.8),
'ylim':(0.0,0.8)
})
for ax,sub in zip(axs,subsets):
sub_df = df.query('{} == "{}"'.format(by,sub))
sns.kdeplot(sub_df[x],sub_df[y],ax=ax)
ax.set_title("{}: {}".format(by,sub))
# med_data
def plot_bars(y,df,by='region',order=None):
if order is not None:
subsets = order
else:
subsets = df[by].drop_duplicates().values
plot_scale = 5
fig,axs = plt.subplots(1,len(subsets),figsize=(plot_scale*len(subsets),plot_scale),sharex=True,sharey=True,
subplot_kw={
'xlim':(0.0,0.8),
'ylim':(0.0,0.8)
})
for ax,sub in zip(axs,subsets):
subsets = df[by].drop_duplicates().values
sub_df = df.query('{} == "{}"'.format(by,sub))
sns.barplot(x=by,y=y,ax=ax)
# plot_bars(y='tx',df=both_df,by='layer',order=np.arange(5))
sns.barplot(x='layer',y='ty',data=xent_df)
plot_kde('tx','ty',both_df,by='layer',order=np.arange(5))
plot_kde('tx','ty',xent_df,by='layer',order=np.arange(5))
plot_kde('tx','norm_ty',recon_df,by='layer',order=np.arange(5))
sns.set_context('talk')
plot_kde('tx','ty',dicarlo_df.query('variation == 6'),by='region',order=['V4','IT'])
plot_kde('tx','ty',dicarlo_df.query('variation == 3'),by='region',order=['V4','IT'])
# g = corr.groupby('region')
# corr_res = corr.reindex(task=corr.task,neuroid=corr.neuroid_id)
corr.name = 'both'
corr.reset_coords()
# g.groups
# for l,grp in g:
# res_grp = grp.dropna('neuroid')
# res_grp.name=label
# res_grp = res_grp.reindex(task=res_grp.task,neuroid=res_
# print(res_grp)
# res_grp.to_dataframe(name='label').head()
g = corr.dropna(dim='neuroid').reset_index(corr.dims).groupby('region')
for label,group in g:
agg_dfs.append(group.reset_index(group.dims).to_dataframe(name='label'))
corr_dicarlo
lg.groupby('neuroid_id').groups
from scipy.stats import pearsonr,pearson3
class XArraySUCorrelation(object):
    def __init__(self, assembly, stimulus_coord='tx', neuroid_coord='neuroid_id', func=pearsonr):
        self.assembly = assembly
        self.stimulus_coord = stimulus_coord
        self.neuroid_coord = neuroid_coord
        self.func = func
# compact_data = data.multi_groupby(['category_name', 'object_name', 'image_id'])
# compact_data = compact_data.mean(dim='presentation')
# compact_data = compact_data.squeeze('time_bin') # (3)
# compact_data = compact_data.T # (4)
# stimulus_set['y_pix'] = scaler.fit_transform(stimulus_set.ty.values.reshape(-1,1))
# stimulus_set['z_pix'] = scaler.fit_transform(stimulus_set.tz.values.reshape(-1,1))
stimulus_set.head()
tx = stimulus_set.query('variation == 6')
tx[['ty','tz','x','y','x_px','y_px']].describe()
sns.kdeplot(tx.ty,tx.tz,shade=True)
sns.scatterplot(v4_resp.x,v4_resp.y)
from matplotlib import image
def resp_dist(dat, presentation = None):
fig, axs = plt.subplots(1,2,figsize=(10,5))
if presentation is None:
presentation = random.randrange(dat.values.shape[1])
d = dat[:,presentation]
cat_name, obj_name, image_id, tz, ty = d.presentation.values.tolist()
image_path = stimulus_set.get_image(image_id)
props = stimulus_set.query('image_id == "{}"'.format(image_id))
g = sns.distplot(d.values,norm_hist=True,ax=axs[1])
img = image.imread(image_path)
axs[0].imshow(img)
axs[0].set_title('{} tz:{} yz:{}'.format(obj_name, tz*8,ty*8))
axs[0].scatter(props.x_px.values+128,props.y_px.values+128)
print(props['image_file_name'].values)
print(props[['ty','tz']])
print(props[['x','y','x_px','y_px']])
return g,props
g,props = resp_dist(v4_resp)
props
x = neural_data.sel(variation=6) # (1)
x = x.multi_groupby(['category_name', 'object_name', 'image_id','repetition','ty','tz']) # (2)
x = x.mean(dim='presentation')
x = x.squeeze('time_bin')
def xr_to_df(x):
ty = x.tz.values
tx = x.ty.values
xdf = pd.DataFrame(x.values.T,columns=x.neuroid_id.values)
xdf['class'] = x.object_name.values
xdf['dy']=ty
xdf['dx']=tx
return xdf
v4_resp.object_name.values
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MultiLabelBinarizer,LabelBinarizer
clf = LinearSVC(C=1,max_iter=10000,verbose=1)
cross_val_score(clf,v4_resp.values.T,v4_resp.category_name.values,verbose=1,cv=5,n_jobs=5)
v4_resp
clf = LinearSVC(C=1,max_iter=10000,verbose=1)
cross_val_score(clf,IT_resp.values.T,IT_resp.category_name.values,verbose=1,cv=5,n_jobs=5)
labels = v4_resp.object_name.values
labeler = LabelBinarizer().fit(labels)
# One binary column per object class (one-vs-rest encoding of the labels)
binary_labels = labeler.transform(labels)
classifier = SVC(C=10)
# cross_val_score(classifier,v4_resp.values.T,v4_resp.object_name.values,cv=5,verbose=True)
MultiLabelBinarizer()
classifier.predict()
v4 = x.sel(region='V4')
v4_df = xr_to_df(v4)
it = x.sel(region='IT')
it_df = xr_to_df(it)
ds = xarray.open_dataset('/home/elijahc/projects/vae/models/2019-06-03/xent_15_recon_25/label_corruption_0.0/dataset.nc')
da = ds['Only Recon']
da.coords
v4_x_sel = dicarlo_r(v4.values.T,prop=v4_df.dx)
v4_y_sel = dicarlo_r(v4.values.T,prop=v4_df.dy)
it_x_sel = dicarlo_r(it.values.T,prop=it_df.dx)
it_y_sel = dicarlo_r(it.values.T,prop=it_df.dy)
# v4_class_sel = dprime(v4_df,num_units=len(v4_resp.neuroid_id),col='class',mask_missing=False)
v4_results = pd.DataFrame({
'dx':v4_x_sel,
'dy':v4_y_sel
})
metric = CrossRegressedCorrelation(regression=pls_regression(),correlation=pearsonr_correlation())
v4_score = metric(v4,v4)
v4_score
v4_df.head()
# resp_dist(v4_resp,random_n=False)
v4_resp
image_path = stimulus_set.get_image(stimulus_set['image_id'][0])
print(image_path)
###Output
_____no_output_____ |
L18 Sequence Models and Language/L18_4_Language_Model_Basics.ipynb | ###Markdown
The Time Machine (loaded)
###Code
import sys
sys.path.insert(0, '..')
import collections
import re
from google.colab import files
import io
from PIL import Image
import matplotlib.image as mpimg
from pathlib import Path
# you can download the text file from http://d2l-data.s3-accelerate.amazonaws.com/timemachine.txt
my_file = Path("./timemachine.txt")
if not my_file.is_file():
data_to_load = files.upload()
with open('timemachine.txt', 'r') as f:
lines = f.readlines()
raw_dataset = [re.sub('[^A-Za-z]+', ' ', st).lower().split() for st in lines]
# Let's read the first 10 lines of the text
for st in raw_dataset[8:10]:
print('# tokens:', len(st), st)
###Output
_____no_output_____
###Markdown
Word Counts
###Code
counter = collections.Counter([tk for st in raw_dataset for tk in st])
print("frequency of 'traveller':", counter['traveller'])
# Print the 10 most frequent words with word frequency count
print(counter.most_common(10))
###Output
frequency of 'traveller': 61
[('the', 2261), ('i', 1267), ('and', 1245), ('of', 1155), ('a', 816), ('to', 695), ('was', 552), ('in', 541), ('that', 443), ('my', 440)]
###Markdown
Frequency Statistics
###Code
%matplotlib inline
from matplotlib import pyplot as plt
from IPython import display
display.set_matplotlib_formats('svg')
wordcounts = [count for _,count in counter.most_common()]
plt.loglog(wordcounts);
###Output
_____no_output_____
###Markdown
Zipf's Law$$n(x) \propto (x + c)^{-\alpha} \text{ and hence }\log n(x) = -\alpha \log (x+c) + \mathrm{const.}$$Does it work for word pairs, too?
###Code
wseq = [tk for st in raw_dataset for tk in st]
word_pairs = [pair for pair in zip(wseq[:-1], wseq[1:])]
print('Beginning of the book\n', word_pairs[:10])
counter_pairs = collections.Counter(word_pairs)
print('Most common word pairs\n', counter_pairs.most_common(10))
###Output
Beginning of the book
[('the', 'time'), ('time', 'machine'), ('machine', 'by'), ('by', 'h'), ('h', 'g'), ('g', 'wells'), ('wells', 'i'), ('i', 'the'), ('the', 'time'), ('time', 'traveller')]
Most common word pairs
[(('of', 'the'), 309), (('in', 'the'), 169), (('i', 'had'), 130), (('i', 'was'), 112), (('and', 'the'), 109), (('the', 'time'), 102), (('it', 'was'), 99), (('to', 'the'), 85), (('as', 'i'), 78), (('of', 'a'), 73)]
###Markdown
Frequency Statistics
###Code
word_triples = [triple for triple in zip(wseq[:-2], wseq[1:-1], wseq[2:])]
counter_triples = collections.Counter(word_triples)
bigramcounts = [count for _,count in counter_pairs.most_common()]
triplecounts = [count for _,count in counter_triples.most_common()]
plt.loglog(wordcounts, label='word counts');
plt.loglog(bigramcounts, label='bigram counts');
plt.loglog(triplecounts, label='triple counts');
plt.legend();
###Output
_____no_output_____ |
FullDFImplementation.ipynb | ###Markdown
You can very easily add the labels onto your DF if you already have them joined in...
###Code
from pyspark.ml.feature import HashingTF  # import required for HashingTF
hasher = HashingTF(inputCol='features', outputCol='rawFeatures')
data = hasher.transform(data)
data.show()
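# Hypothetical sketch of the "add the labels onto your DF" step mentioned above: if labels live in a
# separate DataFrame keyed by 'did', a join attaches them (these names are assumptions, not from the original).
# labels_df = spark.createDataFrame([(0, 1.0), (1, 0.0)], ['did', 'label'])
# data = data.join(labels_df, on='did', how='left')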
len(data.where(data.did==0).select('features').rdd.map(lambda x: x[0]).collect()[0])
###Output
_____no_output_____ |
.ipynb_checkpoints/name-checkpoint.ipynb | ###Markdown
Given a birth date, finds how many years, months, days, hours, minutes, and seconds old you are:
###Code
# Written by Katelyn Kunzmann
import datetime
def calculateAge(year, month, day):
# Using now() to get current time
current_date = datetime.datetime.now()
birth_date = datetime.datetime(year, month, day)
# Calculating difference from current time and birth date
diff = current_date - birth_date
# Obtaining the number of seconds
total_seconds = diff.total_seconds()
return total_seconds
# Written by Michael Mascilli
def printInfo(seconds, name):
year = int(seconds / 31536000)
seconds = seconds % 31536000
months = int(seconds / 2592000)
seconds = seconds % 2592000
day = int(seconds / 86400)
seconds = seconds % 86400
hours = int(seconds / 3600)
seconds = seconds % 3600
minutes = int(seconds / 60)
seconds = int(seconds % 60)
print(name.title() + " you are " + str(year) + " years, " + str(months) + " months, " + str(day) + " days, " + str(hours) + " hours, " + str(minutes) + " minutes, " + str(seconds) + " seconds old.")
# Written by Taha Ahmad (and files organized)
name = input("Please enter your name:")
birth_date = input("Enter your birthday in the following format 'mm/dd/yyyy':")
dmy = birth_date.split("/");
sec = calculateAge(int(dmy[2]), int(dmy[0]), int(dmy[1]))
printInfo(sec, name)
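# A hedged alternative sketch (not part of the original solution): the same breakdown using divmod,
# keeping the approximate 365-day year and 30-day month used in printInfo above.
def print_info_divmod(seconds, name):
    years, rem = divmod(int(seconds), 31536000)
    months, rem = divmod(rem, 2592000)
    days, rem = divmod(rem, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, secs = divmod(rem, 60)
    print(f"{name.title()} you are {years} years, {months} months, {days} days, "
          f"{hours} hours, {minutes} minutes, {secs} seconds old.")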
###Output
Please enter your name: taha
Enter your birthday in the following format 'mm/dd/yyyy': 10/21/1997
|
src/jenga/notebooks/basic-OpenML-example.ipynb | ###Markdown
Some Helper Functions
###Code
num_repetitions = 10
def print_result(results, metric):
print(f"""
Score ({metric}) on
clean data: {results[0].baseline_score}
corrupted data: {np.mean(results[0].corrupted_scores)}
"""
)
###Output
_____no_output_____
###Markdown
Binary Classification
###Code
binary_task = OpenMLBinaryClassificationTask(1471)
###Output
_____no_output_____
###Markdown
The baseline model is internally fitted on the task's train data.
###Code
binary_task_model = binary_task.fit_baseline_model()
print(f"Baseline ROC/AUC score: {binary_task.get_baseline_performance()}")
###Output
Baseline ROC/AUC score: 0.6780965688306888
###Markdown
Insert some corruptions and measure their impact.
###Code
binary_task_evaluator = CorruptionImpactEvaluator(binary_task)
binary_task_corruption = MissingValues(column='V3', fraction=0.5, na_value=np.nan)
binary_task_results = binary_task_evaluator.evaluate(binary_task_model, num_repetitions, binary_task_corruption)
print_result(binary_task_results, "ROC/AUC")
###Output
Score (ROC/AUC) on
clean data: 0.6780965688306888
corrupted data: 0.5438625243872014
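###Markdown
To see how sensitive the model is to this particular corruption, the same evaluation can be repeated for several missingness fractions. This is a minimal sketch that reuses the evaluator, baseline model and column from above.
###Code
# Sweep the fraction of missing values in 'V3' and report the mean corrupted ROC/AUC
for fraction in [0.1, 0.3, 0.5, 0.7]:
    corruption = MissingValues(column='V3', fraction=fraction, na_value=np.nan)
    results = binary_task_evaluator.evaluate(binary_task_model, num_repetitions, corruption)
    print(fraction, np.mean(results[0].corrupted_scores))
###Output
_____no_output_____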
###Markdown
Multi-Class Classification
###Code
multi_class_task = OpenMLMultiClassClassificationTask(26)
###Output
_____no_output_____
###Markdown
The baseline model is internally fitted on the task's train data.
###Code
multi_class_task_model = multi_class_task.fit_baseline_model()
print(f"Baseline F1 score: {multi_class_task.get_baseline_performance()}")
###Output
/usr/local/Caskroom/miniconda/base/envs/jenga/lib/python3.7/site-packages/sklearn/model_selection/_split.py:667: UserWarning: The least populated class in y has only 1 members, which is less than n_splits=5.
% (min_groups, self.n_splits)), UserWarning)
###Markdown
Insert some corruptions and measure their impact.
###Code
multi_class_task_evaluator = CorruptionImpactEvaluator(multi_class_task)
multi_class_task_corruption = MissingValues(column='parents', fraction=0.4, na_value=np.nan)
multi_class_task_results = multi_class_task_evaluator.evaluate(multi_class_task_model, num_repetitions, multi_class_task_corruption)
print_result(multi_class_task_results, "F1")
###Output
Score (F1) on
clean data: 0.7057940908591204
corrupted data: 0.6428736678274072
###Markdown
Regression
###Code
regression = OpenMLRegressionTask(42545)
###Output
_____no_output_____
###Markdown
The baseline model is internally fitted on the task's train data.
###Code
regression_model = regression.fit_baseline_model()
print(f"Baseline MSE score: {regression.get_baseline_performance()}")
###Output
Baseline MSE score: 792.8979896308545
###Markdown
Insert some corruptions and measure their impact.
###Code
regression_evaluator = CorruptionImpactEvaluator(regression)
regression_corruption = MissingValues(column='Material', fraction=0.3, na_value=np.nan)
regression_results = regression_evaluator.evaluate(regression_model, num_repetitions, regression_corruption)
print_result(regression_results, "MSE")
###Output
Score (MSE) on
clean data: 792.8979896308545
corrupted data: 4425.886506130142
|
aws_step_fxns_sqs/tf_playbook.ipynb | ###Markdown
Terraform PlaybookThe purpose of this notebook is to run the Terraform configuration found within this directory. It assumes that Terraform, the required clients, profiles, and other related items are already installed and available.
###Code
import os
import webbrowser
from IPython.core.display import HTML
from IPython.display import Image
!terraform init
!terraform validate
!terraform plan -var-file=vars.tfvars
!terraform graph -type=plan | dot -Tsvg > plan.svg
# view plan graph
Image(url="plan.svg")
!terraform apply -var-file=vars.tfvars -auto-approve
!terraform graph | dot -Tsvg > apply.svg
# view apply graph
Image(url="apply.svg")
# open a web browser for console access
webbrowser.open_new_tab("https://us-west-1.console.aws.amazon.com/states/")
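# Optional sketch (an assumption, not part of the original playbook): if the configuration defines
# outputs, they can be read back into Python as JSON for further use.
import json
import subprocess
raw = subprocess.run(["terraform", "output", "-json"], capture_output=True, text=True).stdout
outputs = json.loads(raw) if raw.strip() else {}
print(outputs)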
!terraform destroy -auto-approve
###Output
_____no_output_____ |
notebooks/explore/pointpats/distance_statistics.ipynb | ###Markdown
Distance Based Statistical Method for Planar Point Patterns**Authors: Serge Rey and Wei Kang ** IntroductionDistance based methods for point patterns are of three types:* [Mean Nearest Neighbor Distance Statistics](Mean-Nearest-Neighbor-Distance-Statistics)* [Nearest Neighbor Distance Functions](Nearest-Neighbor-Distance-Functions)* [Interevent Distance Functions](Interevent-Distance-Functions)In addition, we are going to introduce a computational technique [Simulation Envelopes](Simulation-Envelopes) to aid in making inferences about the data generating process. An [example](CSR-Example) is used to demonstrate how to use and interpret simulation envelopes.
###Code
import scipy.spatial
import pysal.lib as ps
import numpy as np
from pysal.explore.pointpats import PointPattern, PoissonPointProcess, as_window, G, F, J, K, L, Genv, Fenv, Jenv, Kenv, Lenv
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Mean Nearest Neighbor Distance StatisticsThe nearest neighbor(s) for a point $u$ is the point(s) $N(u)$ which meet the condition$$d_{u,N(u)} \leq d_{u,j} \forall j \in S - u$$The distance between the nearest neighbor(s) $N(u)$ and the point $u$ is the nearest neighbor distance for $u$. After searching for the nearest neighbor(s) of all the points and calculating the corresponding distances, we are able to calculate the mean nearest neighbor distance by averaging these distances.It was demonstrated by Clark and Evans (1954) that the mean nearest neighbor distance statistic follows a normal distribution under the null hypothesis (the underlying spatial process is CSR). We can utilize the test statistic to determine whether the point pattern is the outcome of CSR. If not, is it the outcome of a cluster or regular spatial process?Mean nearest neighbor distance statistic$$\bar{d}_{min}=\frac{1}{n} \sum_{i=1}^n d_{min}(s_i)$$
###Code
points = [[66.22, 32.54], [22.52, 22.39], [31.01, 81.21],
[9.47, 31.02], [30.78, 60.10], [75.21, 58.93],
[79.26, 7.68], [8.23, 39.93], [98.73, 77.17],
[89.78, 42.53], [65.19, 92.08], [54.46, 8.48]]
pp = PointPattern(points)
pp.summary()
###Output
Point Pattern
12 points
Bounding rectangle [(8.23,7.68), (98.73,92.08)]
Area of window: 7638.200000000002
Intensity estimate for window: 0.0015710507711240865
x y
0 66.22 32.54
1 22.52 22.39
2 31.01 81.21
3 9.47 31.02
4 30.78 60.10
###Markdown
We may call the method **knn** in PointPattern class to find $k$ nearest neighbors for each point in the point pattern *pp*.
###Code
# one nearest neighbor (default)
pp.knn()
###Output
_____no_output_____
###Markdown
The first array is the ids of the most nearest neighbor for each point, the second array is the distance between each point and its most nearest neighbor.
###Code
# two nearest neighbors
pp.knn(2)
pp.max_nnd # Maximum nearest neighbor distance
pp.min_nnd # Minimum nearest neighbor distance
pp.mean_nnd # mean nearest neighbor distance
pp.nnd # Nearest neighbor distances
pp.nnd.sum()/pp.n # same as pp.mean_nnd
pp.plot()
###Output
_____no_output_____
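###Markdown
As a sanity check, the mean nearest neighbor distance reported by `pp.mean_nnd` can be recomputed directly from the coordinates with scipy. This is a minimal sketch reusing the `points` list defined above.
###Code
from scipy.spatial import cKDTree
import numpy as np
pts = np.asarray(points)
tree = cKDTree(pts)
dists, _ = tree.query(pts, k=2)   # k=2 because the closest hit is the point itself
print(dists[:, 1].mean())         # should match pp.mean_nnd
###Output
_____no_output_____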
###Markdown
Nearest Neighbor Distance FunctionsNearest neighbour distance distribution functions (including the nearest “event-to-event” and “point-event” distance distribution functions) of a point process are cumulative distribution functions of several kinds -- $G, F, J$. By comparing the distance function of the observed point pattern with that of the point pattern from a CSR process, we are able to infer whether the underlying spatial process of the observed point pattern is CSR or not for a given confidence level. $G$ function - event-to-eventThe $G$ function is defined as follows: for a given distance $d$, $G(d)$ is the proportion of nearest neighbor distances that are less than $d$.$$G(d) = \sum_{i=1}^n \frac{ \phi_i^d}{n}$$$$ \phi_i^d = \begin{cases} 1 & \quad \text{if } d_{min}(s_i)<d \\ 0 & \quad \text{otherwise } \\ \end{cases}$$If the underlying point process is a CSR process, $G$ function has an expectation of:$$G(d) = 1-e^{-\lambda \pi d^2}$$However, if the $G$ function plot is above the expectation this reflects clustering, while departures below expectation reflect dispersion.
###Code
gp1 = G(pp, intervals=20)
gp1.plot()
###Output
_____no_output_____
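###Markdown
The definition above can also be evaluated by hand: for each distance $d$, count the share of nearest neighbor distances smaller than $d$. This is a minimal sketch reusing `pp.nnd`.
###Code
import numpy as np
d_grid = np.linspace(0, float(np.max(pp.nnd)), 20)
G_emp = [(np.asarray(pp.nnd) < d).mean() for d in d_grid]   # proportion of nnd below each d
print(list(zip(np.round(d_grid, 2), np.round(G_emp, 2))))
###Output
_____no_output_____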
###Markdown
A slightly different visualization of the empirical function is the quantile-quantile plot:
###Code
gp1.plot(qq=True)
###Output
_____no_output_____
###Markdown
In the q-q plot the CSR function is now a diagonal line, which serves to make assessment of departures from CSR visually easier. It is obvious that the above $G$ increases very slowly at small distances and the line is below the expected value for a CSR process (green line). We might think that the underlying spatial process is a regular point process. However, this visual inspection is not enough for a final conclusion. In [Simulation Envelopes](Simulation-Envelopes), we are going to demonstrate how to simulate data under CSR many times and construct the $95\%$ simulation envelope for $G$.
###Code
gp1.d # distance domain sequence (corresponding to the x-axis)
gp1.G #cumulative nearest neighbor distance distribution over d (corresponding to the y-axis))
###Output
_____no_output_____
###Markdown
$F$ function - "point-event" When the number of events in a point pattern is small, $G$ function is rough (see the $G$ function plot for the 12 size point pattern above). One way to get around this is to turn to $F$ function where a given number of randomly distributed points are generated in the domain and the nearest event neighbor distance is calculated for each point. The cumulative distribution of all nearest event neighbor distances is called $F$ function.
###Code
fp1 = F(pp, intervals=20) # The default is to randomly generate 100 points.
fp1.plot()
fp1.plot(qq=True)
###Output
_____no_output_____
###Markdown
We can increase the number of intervals to make $F$ more smooth.
###Code
fp1 = F(pp, intervals=50)
fp1.plot()
fp1.plot(qq=True)
###Output
_____no_output_____
###Markdown
$F$ function is more smooth than $G$ function. $J$ function - a combination of "event-event" and "point-event"$J$ function is defined as follows:$$J(d) = \frac{1-G(d)}{1-F(d)}$$If $J(d)<1$, the underlying point process is a cluster point process; if $J(d)=1$, the underlying point process is a random point process; otherwise, it is a regular point process.
###Code
jp1 = J(pp, intervals=20)
jp1.plot()
###Output
_____no_output_____
###Markdown
From the above figure, we can observe that the $J$ function is obviously above the $J(d)=1$ horizontal line. It is approaching infinity as the nearest neighbor distance increases. We might tend to conclude that the underlying point process is a regular one. Interevent Distance FunctionsNearest neighbor distance functions consider only the nearest neighbor distances, "event-event", "point-event" or the combination. Thus, distances to higher order neighbors, which might reveal important information regarding the point process, are ignored. Interevent distance functions, including $K$ and $L$ functions, are proposed to consider distances between all pairs of event points. Similar to the $G$, $F$ and $J$ functions, the $K$ and $L$ functions are also cumulative distribution functions. $K$ function - "interevent"Given distance $d$, $K(d)$ is defined as:$$K(d) = \frac{\sum_{i=1}^n \sum_{j=1}^n \psi_{ij}(d)}{n \hat{\lambda}}$$where$$ \psi_{ij}(d) = \begin{cases} 1 & \quad \text{if } d_{ij}<d \\ 0 & \quad \text{otherwise } \\ \end{cases}$$$\sum_{j=1}^n \psi_{ij}(d)$ is the number of events within a circle of radius $d$ centered on event $s_i$. Still, we use CSR as the benchmark (null hypothesis) and see how the $K$ function estimated from the observed point pattern deviates from that under CSR, which is $K(d)=\pi d^2$. $K(d) < \pi d^2$ indicates that the underlying point process is a regular point process, while $K(d) > \pi d^2$ indicates a cluster point process.
###Code
kp1 = K(pp)
kp1.plot()
###Output
_____no_output_____
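###Markdown
The sum in the definition of $K(d)$ can also be evaluated directly from the pairwise distances. This is a minimal sketch without edge correction that uses the bounding-rectangle area as the intensity estimate, reusing the `points` list from above.
###Code
from scipy.spatial.distance import pdist, squareform
import numpy as np
pts = np.asarray(points)
area = np.ptp(pts[:, 0]) * np.ptp(pts[:, 1])            # bounding rectangle area
lam = len(pts) / area                                   # intensity estimate
dmat = squareform(pdist(pts))
d = 20.0
K_d = ((dmat < d).sum() - len(pts)) / (len(pts) * lam)  # subtract the zero self-distances
print(K_d, np.pi * d ** 2)                              # compare with the CSR expectation
###Output
_____no_output_____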
###Markdown
$L$ function - "interevent"$L$ function is a scaled version of $K$ function, defined as:$$L(d) = \sqrt{\frac{K(d)}{\pi}}-d$$
###Code
lp1 = L(pp)
lp1.plot()
###Output
_____no_output_____
###Markdown
Simulation EnvelopesA [Simulation envelope](http://www.esajournals.org/doi/pdf/10.1890/13-2042.1) is a computer intensive technique for inferring whether an observed pattern significantly deviates from what would be expected under a specific process. Here, we always use CSR as the benchmark. In order to construct a simulation envelope for a given function, we need to simulate CSR a lot of times, say $1000$ times. Then, we can calculate the function for each simulated point pattern. For every distance $d$, we sort the function values of the $1000$ simulated point patterns. Given a confidence level, say $95\%$, we can acquire the $25$th and $975$th value for every distance $d$. Thus, a simulation envelope is constructed. Simulation Envelope for G function**Genv** class in pysal.
###Code
realizations = PoissonPointProcess(pp.window, pp.n, 100, asPP=True) # simulate CSR 100 times
genv = Genv(pp, intervals=20, realizations=realizations) # call Genv to generate simulation envelope
genv
genv.observed
genv.plot()
###Output
_____no_output_____
###Markdown
In the above figure, **LB** and **UB** comprise the simulation envelope. **CSR** is the mean function calculated from the simulated data. **G** is the function estimated from the observed point pattern. It is well below the simulation envelope. We can infer that the underlying point process is a regular one. Simulation Envelope for F function**Fenv** class in pysal.
###Code
fenv = Fenv(pp, intervals=20, realizations=realizations)
fenv.plot()
###Output
_____no_output_____
###Markdown
Simulation Envelope for J function**Jenv** class in pysal.
###Code
jenv = Jenv(pp, intervals=20, realizations=realizations)
jenv.plot()
###Output
_____no_output_____
###Markdown
Simulation Envelope for K function**Kenv** class in pysal.
###Code
kenv = Kenv(pp, intervals=20, realizations=realizations)
kenv.plot()
###Output
_____no_output_____
###Markdown
Simulation Envelope for L function**Lenv** class in pysal.
###Code
lenv = Lenv(pp, intervals=20, realizations=realizations)
lenv.plot()
###Output
_____no_output_____
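###Markdown
All of these envelopes are built the same way: at each distance, sort the simulated function values and take pointwise percentiles. A generic numpy sketch (using random stand-in curves rather than the realizations above) illustrates the idea.
###Code
import numpy as np
sim_curves = np.random.rand(1000, 20)                      # stand-in: 1000 simulated curves over 20 distances
low, high = np.percentile(sim_curves, [2.5, 97.5], axis=0)  # pointwise 95% envelope bounds
print(low.shape, high.shape)
###Output
_____no_output_____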
###Markdown
CSR ExampleIn this example, we are going to generate a point pattern as the "observed" point pattern. The data generating process is CSR. Then, we will simulate CSR in the same domain for 100 times and construct a simulation envelope for each function.
###Code
from pysal.lib.cg import shapely_ext
from pysal.explore.pointpats import Window
import pysal.lib as ps
va = ps.io.open(ps.examples.get_path("vautm17n.shp"))
polys = [shp for shp in va]
state = shapely_ext.cascaded_union(polys)
###Output
_____no_output_____
###Markdown
Generate the point pattern **pp** (size 100) from CSR as the "observed" point pattern.
###Code
a = [[1],[1,2]]
np.asarray(a)
n = 100
samples = 1
pp = PoissonPointProcess(Window(state.parts), n, samples, asPP=True)
pp.realizations[0]
pp.n
###Output
_____no_output_____
###Markdown
Simulate CSR in the same domain for 100 times which would be used for constructing simulation envelope under the null hypothesis of CSR.
###Code
csrs = PoissonPointProcess(pp.window, 100, 100, asPP=True)
csrs
###Output
_____no_output_____
###Markdown
Construct the simulation envelope for $G$ function.
###Code
genv = Genv(pp.realizations[0], realizations=csrs)
genv.plot()
###Output
_____no_output_____
###Markdown
Since the "observed" $G$ is well contained by the simulation envelope, we infer that the underlying point process is a random process.
###Code
genv.low # lower bound of the simulation envelope for G
genv.high # higher bound of the simulation envelope for G
###Output
_____no_output_____
###Markdown
Construct the simulation envelope for $F$ function.
###Code
fenv = Fenv(pp.realizations[0], realizations=csrs)
fenv.plot()
###Output
_____no_output_____
###Markdown
Construct the simulation envelope for $J$ function.
###Code
jenv = Jenv(pp.realizations[0], realizations=csrs)
jenv.plot()
###Output
_____no_output_____
###Markdown
Construct the simulation envelope for $K$ function.
###Code
kenv = Kenv(pp.realizations[0], realizations=csrs)
kenv.plot()
###Output
_____no_output_____
###Markdown
Construct the simulation envelope for $L$ function.
###Code
lenv = Lenv(pp.realizations[0], realizations=csrs)
lenv.plot()
###Output
_____no_output_____ |
CCI/Ch1_Arrays_and_Strings/Q3_URLify.ipynb | ###Markdown
Q1.3 URLifyReplace all spaces in string s with '%20'Input: string sReturn: modified string s
###Code
# 1. python replace
def URLify1(s):
"""
space O(n) create a new string
time O(n)
"""
return s.replace(' ', '%20')
# 2. inplace
def URLify2(s):
"""
assume s is a char array, pad spaces, and traverse from last to first, use two pointers to fill
- one pointer for real string
- one pointer for padded string
space O(1) here we use char array to simulate a mutable string
time O(n)
"""
char_arr = [c for c in s]
n = len(s)
# count spaces
spaces = 0
for c in char_arr:
if c == ' ':
spaces += 1
# pad required spaces to array
space_to_pad = spaces*2
for i in range(space_to_pad):
char_arr.append(' ')
# back traverse the array to fill '%20' inplace
i = n-1 # true last idx
j = len(char_arr)-1 # padded last idx
while i < j and i >= 0:
if char_arr[i] != ' ':
# fill idx j with char_arr[i], both move 1 space left
char_arr[j] = char_arr[i]
j -= 1
i -= 1
else:
char_arr[(j-2):(j+1)] = ['%','2','0']
j -= 3
i -= 1
return ''.join(char_arr)
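# A possible third variant (not in the original solutions): split on single spaces and re-join with
# '%20'; for this problem it behaves the same as str.replace.
def URLify3(s):
    return '%20'.join(s.split(' '))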
# Testcases
def URLify(s1):
return URLify2(s1)
s1 = ""
s2 = "a d e"
s22 = "a%20d%20e"
s3 = "abd"
s33 = "abd"
print(URLify(s1) == s1)
print(URLify(s2) == s22)
print(URLify(s3) == s33)
###Output
True
True
True
|
detector/yolov5/tutorial.ipynb | ###Markdown
This notebook was written by Ultralytics LLC, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). For more information please visit https://github.com/ultralytics/yolov5 and https://www.ultralytics.com. SetupClone repo, install dependencies and check PyTorch and GPU.
###Code
!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd yolov5
%pip install -qr requirements.txt # install dependencies
import torch
from IPython.display import Image, clear_output # to display images
clear_output()
print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
###Output
Setup complete. Using torch 1.7.0+cu101 _CudaDeviceProperties(name='Tesla V100-SXM2-16GB', major=7, minor=0, total_memory=16160MB, multi_processor_count=80)
###Markdown
1. Inference`detect.py` runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).
###Code
!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
Image(filename='runs/detect/exp/zidane.jpg', width=600)
###Output
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=False, source='data/images/', update=False, view_img=False, weights=['yolov5s.pt'])
YOLOv5 v4.0-21-gb26a2f6 torch 1.7.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16130.5MB)
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPS
image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 buss, 1 skateboards, Done. (0.011s)
image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.011s)
Results saved to runs/detect/exp
Done. (0.110s)
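###Markdown
As an alternative to `detect.py` above, inference can also be run via PyTorch Hub. This is a minimal sketch; the entrypoint comes from this repo's `hubconf.py` and pretrained weights are downloaded automatically.
###Code
import torch
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)  # load pretrained YOLOv5s
results = model('data/images/zidane.jpg')                                 # run inference on one image
results.print()                                                           # summarize detections
###Output
_____no_output_____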
###Markdown
Results are saved to `runs/detect`. A full list of available inference sources: 2. TestTest a model on [COCO](https://cocodataset.org/home) val or test-dev dataset to evaluate trained accuracy. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be 1-2% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation. COCO val2017Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yamlL14) dataset (1GB - 5000 images), and test model accuracy.
###Code
# Download COCO val2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
# Run YOLOv5x on COCO val2017
!python test.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65
###Output
Namespace(augment=False, batch_size=32, conf_thres=0.001, data='./data/coco.yaml', device='', exist_ok=False, img_size=640, iou_thres=0.65, name='exp', project='runs/test', save_conf=False, save_hybrid=False, save_json=True, save_txt=False, single_cls=False, task='val', verbose=False, weights=['yolov5x.pt'])
YOLOv5 v4.0-75-gbdd88e1 torch 1.7.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)
Downloading https://github.com/ultralytics/yolov5/releases/download/v4.0/yolov5x.pt to yolov5x.pt...
100% 168M/168M [00:04<00:00, 39.7MB/s]
Fusing layers...
Model Summary: 476 layers, 87730285 parameters, 0 gradients, 218.8 GFLOPS
[34m[1mval: [0mScanning '../coco/val2017' for images and labels... 4952 found, 48 missing, 0 empty, 0 corrupted: 100% 5000/5000 [00:01<00:00, 2824.78it/s]
[34m[1mval: [0mNew cache created: ../coco/val2017.cache
Class Images Targets P R [email protected] [email protected]:.95: 100% 157/157 [01:33<00:00, 1.68it/s]
all 5e+03 3.63e+04 0.749 0.619 0.68 0.486
Speed: 5.2/2.0/7.3 ms inference/NMS/total per 640x640 image at batch-size 32
Evaluating pycocotools mAP... saving runs/test/exp/yolov5x_predictions.json...
loading annotations into memory...
Done (t=0.44s)
creating index...
index created!
Loading and preparing results...
DONE (t=4.47s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=94.87s).
Accumulating evaluation results...
DONE (t=15.96s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.501
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.687
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.544
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.338
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.548
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.637
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.378
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.628
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.680
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.520
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.729
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.826
Results saved to runs/test/exp
###Markdown
COCO test-dev2017Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yamlL15) dataset (7GB - 40,000 images), to test model accuracy on test-dev set (20,000 images). Results are saved to a `*.json` file which can be submitted to the evaluation server at https://competitions.codalab.org/competitions/20794.
###Code
# Download COCO test-dev2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels
!f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k images
%mv ./test2017 ./coco/images && mv ./coco ../ # move images to /coco and move /coco next to /yolov5
# Run YOLOv5s on COCO test-dev2017 using --task test
!python test.py --weights yolov5s.pt --data coco.yaml --task test
###Output
_____no_output_____
###Markdown
3. TrainDownload [COCO128](https://www.kaggle.com/ultralytics/coco128), a small 128-image tutorial dataset, start tensorboard and train YOLOv5s from a pretrained checkpoint for 3 epochs (note actual training is typically much longer, around **300-1000 epochs**, depending on your dataset).
###Code
# Download COCO128
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
###Output
_____no_output_____
###Markdown
Train a YOLOv5s model on [COCO128](https://www.kaggle.com/ultralytics/coco128) with `--data coco128.yaml`, starting from pretrained `--weights yolov5s.pt`, or from randomly initialized `--weights '' --cfg yolov5s.yaml`. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and **COCO, COCO128, and VOC datasets are downloaded automatically** on first use.All training results are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc.
###Code
# Tensorboard (optional)
%load_ext tensorboard
%tensorboard --logdir runs/train
# Weights & Biases (optional)
%pip install -q wandb
!wandb login # use 'wandb disabled' or 'wandb enabled' to disable or enable
# Train YOLOv5s on COCO128 for 3 epochs
!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --nosave --cache
###Output
[34m[1mgithub: [0mup to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 v4.0-75-gbdd88e1 torch 1.7.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)
Namespace(adam=False, batch_size=16, bucket='', cache_images=True, cfg='', data='./data/coco128.yaml', device='', epochs=3, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], linear_lr=False, local_rank=-1, log_artifacts=False, log_imgs=16, multi_scale=False, name='exp', noautoanchor=False, nosave=True, notest=False, project='runs/train', quad=False, rect=False, resume=False, save_dir='runs/train/exp', single_cls=False, sync_bn=False, total_batch_size=16, weights='yolov5s.pt', workers=8, world_size=1)
[34m[1mwandb: [0mInstall Weights & Biases for YOLOv5 logging with 'pip install wandb' (recommended)
Start Tensorboard with "tensorboard --logdir runs/train", view at http://localhost:6006/
2021-02-12 06:38:28.027271: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
[34m[1mhyperparameters: [0mlr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0
Downloading https://github.com/ultralytics/yolov5/releases/download/v4.0/yolov5s.pt to yolov5s.pt...
100% 14.1M/14.1M [00:01<00:00, 13.2MB/s]
from n params module arguments
0 -1 1 3520 models.common.Focus [3, 32, 3]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 1 156928 models.common.C3 [128, 128, 3]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 1 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]]
9 -1 1 1182720 models.common.C3 [512, 512, 1, False]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 229245 models.yolo.Detect [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 283 layers, 7276605 parameters, 7276605 gradients, 17.1 GFLOPS
Transferred 362/362 items from yolov5s.pt
Scaled weight_decay = 0.0005
Optimizer groups: 62 .bias, 62 conv.weight, 59 other
[34m[1mtrain: [0mScanning '../coco128/labels/train2017' for images and labels... 128 found, 0 missing, 2 empty, 0 corrupted: 100% 128/128 [00:00<00:00, 2566.00it/s]
[34m[1mtrain: [0mNew cache created: ../coco128/labels/train2017.cache
[34m[1mtrain: [0mCaching images (0.1GB): 100% 128/128 [00:00<00:00, 175.07it/s]
[34m[1mval: [0mScanning '../coco128/labels/train2017.cache' for images and labels... 128 found, 0 missing, 2 empty, 0 corrupted: 100% 128/128 [00:00<00:00, 764773.38it/s]
[34m[1mval: [0mCaching images (0.1GB): 100% 128/128 [00:00<00:00, 128.17it/s]
Plotting labels...
[34m[1mautoanchor: [0mAnalyzing anchors... anchors/target = 4.26, Best Possible Recall (BPR) = 0.9946
Image sizes 640 train, 640 test
Using 2 dataloader workers
Logging results to runs/train/exp
Starting training for 3 epochs...
Epoch gpu_mem box obj cls total targets img_size
0/2 3.27G 0.04357 0.06781 0.01869 0.1301 207 640: 100% 8/8 [00:03<00:00, 2.03it/s]
Class Images Targets P R [email protected] [email protected]:.95: 100% 4/4 [00:04<00:00, 1.14s/it]
all 128 929 0.646 0.627 0.659 0.431
Epoch gpu_mem box obj cls total targets img_size
1/2 7.75G 0.04308 0.06654 0.02083 0.1304 227 640: 100% 8/8 [00:01<00:00, 4.11it/s]
Class Images Targets P R [email protected] [email protected]:.95: 100% 4/4 [00:01<00:00, 2.94it/s]
all 128 929 0.681 0.607 0.663 0.434
Epoch gpu_mem box obj cls total targets img_size
2/2 7.75G 0.04461 0.06896 0.01866 0.1322 191 640: 100% 8/8 [00:02<00:00, 3.94it/s]
Class Images Targets P R [email protected] [email protected]:.95: 100% 4/4 [00:03<00:00, 1.22it/s]
all 128 929 0.642 0.632 0.662 0.432
Optimizer stripped from runs/train/exp/weights/last.pt, 14.8MB
3 epochs completed in 0.007 hours.
###Markdown
4. Visualize Weights & Biases Logging 🌟 NEW[Weights & Biases](https://www.wandb.com/) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well improved visibility and collaboration for teams. To enable W&B `pip install wandb`, and then train normally (you will be guided through setup on first use). During training you will see live updates at [https://wandb.ai/home](https://wandb.ai/home), and you can create and share detailed [Reports](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY) of your results. For more information see the [YOLOv5 Weights & Biases Tutorial](https://github.com/ultralytics/yolov5/issues/1289). Local LoggingAll results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and test jpgs to see mosaics, labels, predictions and augmentation effects. Note a **Mosaic Dataloader** is used for training (shown below), a new concept developed by Ultralytics and first featured in [YOLOv4](https://arxiv.org/abs/2004.10934).
###Code
Image(filename='runs/train/exp/train_batch0.jpg', width=800) # train batch 0 mosaics and labels
Image(filename='runs/train/exp/test_batch0_labels.jpg', width=800) # test batch 0 labels
Image(filename='runs/train/exp/test_batch0_pred.jpg', width=800) # test batch 0 predictions
###Output
_____no_output_____
###Markdown
> `train_batch0.jpg` shows train batch 0 mosaics and labels> `test_batch0_labels.jpg` shows test batch 0 labels> `test_batch0_pred.jpg` shows test batch 0 _predictions_ Training losses and performance metrics are also logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and a custom `results.txt` logfile which is plotted as `results.png` (below) after training completes. Here we show YOLOv5s trained on COCO128 to 300 epochs, starting from scratch (blue), and from pretrained `--weights yolov5s.pt` (orange).
###Code
from utils.plots import plot_results
plot_results(save_dir='runs/train/exp') # plot all results*.txt as results.png
Image(filename='runs/train/exp/results.png', width=800)
###Output
_____no_output_____
###Markdown
EnvironmentsYOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):- **Google Colab and Kaggle** notebooks with free GPU: - **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) StatusIf this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([test.py](https://github.com/ultralytics/yolov5/blob/master/test.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/models/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit. AppendixOptional extras below. Unit tests validate repo functionality and should be run on any PRs submitted.
###Code
# Re-clone repo
%cd ..
%rm -rf yolov5 && git clone https://github.com/ultralytics/yolov5
%cd yolov5
# Reproduce
%%shell
for x in yolov5s yolov5m yolov5l yolov5x; do
python test.py --weights $x.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45 # speed
python test.py --weights $x.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65 # mAP
done
# Unit tests
%%shell
export PYTHONPATH="$PWD" # to run *.py. files in subdirectories
rm -rf runs # remove runs/
for m in yolov5s; do # models
python train.py --weights $m.pt --epochs 3 --img 320 --device 0 # train pretrained
python train.py --weights '' --cfg $m.yaml --epochs 3 --img 320 --device 0 # train scratch
for d in 0 cpu; do # devices
python detect.py --weights $m.pt --device $d # detect official
python detect.py --weights runs/train/exp/weights/best.pt --device $d # detect custom
python test.py --weights $m.pt --device $d # test official
python test.py --weights runs/train/exp/weights/best.pt --device $d # test custom
done
python hubconf.py # hub
python models/yolo.py --cfg $m.yaml # inspect
python models/export.py --weights $m.pt --img 640 --batch 1 # export
done
# Profile
from utils.torch_utils import profile
m1 = lambda x: x * torch.sigmoid(x)
m2 = torch.nn.SiLU()
profile(x=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)
# VOC
for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']): # zip(batch_size, model)
!python train.py --batch {b} --weights {m}.pt --data voc.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
###Output
_____no_output_____ |
07_Visualization/Chipotle/Exercises_basicPlot_Counter_tean.ipynb | ###Markdown
Visualizing Chipotle's Data This time we are going to pull data directly from the internet.Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
# set this so the graphs open internally
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo.
###Code
chipo = pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv',sep = '\t')
###Output
_____no_output_____
###Markdown
Step 4. See the first 10 entries
###Code
chipo.head(10)
chipo['price'] = chipo.item_price.apply(lambda x:float(x[1:]))
chipo.head(2)
###Output
_____no_output_____
###Markdown
Step 5. Create a histogram of the top 5 items bought
###Code
top5 = (chipo.groupby('item_name').sum()['quantity'].sort_values(ascending=False))[:5]
df = pd.DataFrame.from_dict(Counter(chipo.item_name),orient='index',)
df = df.sort_values(0,ascending=False)[:5]
df.plot(kind='bar')
plt.xlabel('Item')
plt.ylabel('Number of Times Ordered')
plt.title('Most ordered Chipo Items')
plt.legend().remove()
plt.show()
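# Alternative sketch: plot the top 5 items by total quantity (from the groupby result computed above)
# rather than by the number of order lines counted with Counter.
top5.plot(kind='bar')
plt.xlabel('Item')
plt.ylabel('Total quantity ordered')
plt.title('Top 5 Chipotle items by quantity')
plt.show()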
(chipo.groupby('item_name').sum()['quantity']).plot.hist()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the number of items ordered per order price Hint: Price should be on the X-axis and items ordered on the Y-axis
###Code
#x-axis = chipo['item_price']
#y-axis = chipo['item_name']
chipo_order = chipo.groupby('order_id').sum()
chipo_order.plot.scatter(x = 'price', y = 'quantity',c='yellow')
plt.title('Number of items ordered per order price')
###Output
_____no_output_____ |
examples/tutorials/translations/bengali/Part 02 - Intro to Federated Learning.ipynb | ###Markdown
Part 2: Intro to Federated Learning. In the last section, we learned about Pointer Tensors, which create the underlying infrastructure we need for privacy-preserving Deep Learning. In this section, we are going to see how to use these basic tools to build our first privacy-preserving deep learning algorithm - Federated Learning. Authors: - Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask) Translators: - Sayantan Das - Github: [@ucalyptus](https://github.com/ucalyptus) - Mir Mohammad Jaber - Twitter: [@jabertuhin](https://twitter.com/jabertuhin) What is Federated Learning? It is a simple, powerful way to train Deep Learning models. If you think about training data, it is always the result of some sort of collection process. People (via devices) generate data by recording events in the real world. Normally, this data is aggregated to a single, central location so that you can train a machine learning model. Federated Learning turns this on its head! Instead of bringing the training data to the model (a central server), you bring the model to the training data (wherever it may live). The idea is that whoever is creating the data owns the only permanent copy, and thus maintains control over who has access to it. Pretty cool, right? Section 2.1 - A Toy Federated Learning Example. Let's start by training a toy model the centralized way. This is about as simple as models get. We first need: - a toy dataset - a model - some basic training logic for training the model to fit the data. Note: If this API is unfamiliar to you - head on over to [fast.ai](http://fast.ai) and take their course before continuing with this tutorial.
###Code
import torch
from torch import nn
from torch import optim
# A Toy Dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True)
target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True)
# A Toy Model
model = nn.Linear(2,1)
def train():
# Training Logic
opt = optim.SGD(params=model.parameters(),lr=0.1)
for iter in range(20):
# 1) erase previous gradients (if they exist)
opt.zero_grad()
# 2) make a prediction
pred = model(data)
# 3) calculate how much we missed
loss = ((pred - target)**2).sum()
# 4) figure out which weights caused us to miss
loss.backward()
# 5) change those weights
opt.step()
# 6) print our progress
print(loss.data)
train()
###Output
_____no_output_____
###Markdown
And there you have it! We have trained a basic model in the conventional way. All of our data is aggregated on our local machine and we can use it to make updates to our model. Federated Learning, however, does not work this way. So let's modify this example to do it the Federated Learning way! What we need: - create a couple of workers - get a pointer to the training data on each worker - update the training logic for federated learning. New training steps: - send the model to the correct worker - train on the data located there - get the model back and repeat with the next worker
###Code
import syft as sy
hook = sy.TorchHook(torch)
# create a couple workers
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
# A Toy Dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True)
target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True)
# get pointers to training data on each worker by
# sending some training data to bob and alice
data_bob = data[0:2]
target_bob = target[0:2]
data_alice = data[2:]
target_alice = target[2:]
# Iniitalize A Toy Model
model = nn.Linear(2,1)
data_bob = data_bob.send(bob)
data_alice = data_alice.send(alice)
target_bob = target_bob.send(bob)
target_alice = target_alice.send(alice)
# organize pointers into a list
datasets = [(data_bob,target_bob),(data_alice,target_alice)]
opt = optim.SGD(params=model.parameters(),lr=0.1)
def train():
# Training Logic
opt = optim.SGD(params=model.parameters(),lr=0.1)
for iter in range(10):
# NEW) iterate through each worker's dataset
for data,target in datasets:
# NEW) send model to correct worker
model.send(data.location)
# 1) erase previous gradients (if they exist)
opt.zero_grad()
# 2) make a prediction
pred = model(data)
# 3) calculate how much we missed
loss = ((pred - target)**2).sum()
# 4) figure out which weights caused us to miss
loss.backward()
# 5) change those weights
opt.step()
# NEW) get model (with gradients)
model.get()
# 6) print our progress
print(loss.get()) # NEW) slight edit... need to call .get() on loss\
# federated averaging
train()
###Output
_____no_output_____
###Markdown
Well done! And voilà! We are now training a very simple deep learning model using Federated Learning! We send the model to each worker, generate a new gradient, and then bring the gradient back to our local server where we update our global model. Never in this process do we see or request access to the underlying training data! We preserve the privacy of Bob and Alice!!! Shortcomings of this Example. So, while this example is a nice introduction to Federated Learning, it still has some major shortcomings. Most notably, when we call `model.get()` and receive the updated model from Bob or Alice, we can actually learn a lot about Bob's and Alice's training data by looking at their gradients. In some cases, we can recover their training data perfectly! So, what is there to do? Well, the first strategy people employ is to **average the gradients across multiple individuals before uploading them to the central server**. This strategy, however, requires some more sophisticated use of PointerTensor objects. So, in the next section, we will take some time to learn about more advanced pointer functionality and then we will upgrade this Federated Learning example. Congratulations!!! - Time to Join the Community! Congratulations on completing this notebook tutorial! If you enjoyed it and would like to join the movement toward privacy preservation and decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways! Star PySyft on GitHub. The easiest way to help our community is simply by starring the repositories! This helps raise awareness of the cool tools we are building. - [Star PySyft](https://github.com/OpenMined/PySyft) Join our Slack! The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org) Join a Code Project! The best way to contribute to our community is to become a code contributor! At any time you can go to the PySyft GitHub Issues page and filter for "Projects". This will give you an overview of the top-level tickets for the projects you can join! If you don't want to join a project but would like to do a bit of coding, you can also look for "one off" mini-projects: GitHub issues marked "good first issue". - [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject) - [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) Donate. If you don't have time to contribute to our codebase but would still like to lend support, you can also become a backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups! [OpenMined's Open Collective Page](https://opencollective.com/openmined)
###Code
###Output
_____no_output_____
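###Markdown
Before moving on, here is a minimal, purely local sketch of the averaging idea mentioned above. This sketch is an addition for illustration, not part of the original tutorial: it averages the parameters of two copies of the toy model with plain PyTorch. It does not hide the individual updates the way secure aggregation would; it only shows the arithmetic behind federated averaging.
###Code
# Illustrative sketch only (assumes `model` from the toy example above).
# `model_bob` and `model_alice` stand in for models returned by two workers via .get().
import copy

model_bob = copy.deepcopy(model)
model_alice = copy.deepcopy(model)

averaged_state = {}
for name, param in model_bob.state_dict().items():
    # element-wise mean of the two workers' parameters
    averaged_state[name] = (param + model_alice.state_dict()[name]) / 2

model.load_state_dict(averaged_state)
###Output
_____no_output_____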
###Markdown
Part 2: Introduction to Federated Learning. In the last section, we learned about PointerTensors, which create the underlying infrastructure we need for privacy-preserving deep learning. In this section, we are going to see how to use these basic tools to implement our first privacy-preserving deep learning algorithm, Federated Learning. Authors: - Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask) Translator: - Sayantan Das - Github: [@ucalyptus](https://github.com/ucalyptus) What is Federated Learning? It is a simple, powerful way to train deep learning models. If you think about training data, it is always the result of some kind of collection process. People (via their devices) generate data by recording events in the real world. Normally, that data is aggregated into a single, central location so that you can train a machine learning model. Federated Learning turns this on its head! Instead of bringing the training data to the model (on a central server), you bring the model to the training data (wherever it may live). The idea is that whoever is creating the data owns the only permanent copy of it, and thereby keeps control over who has access to it. Pretty cool, right? Section 2.1 - A Toy Federated Learning Example. Let's start by training a toy model the centralized way. This is about as simple as models get. We first need: - a toy dataset - a model - some basic training logic for training the model to fit the data. Note: If this API is unfamiliar to you, head over to [fast.ai](http://fast.ai) and take their course before continuing with this tutorial.
###Code
import torch
from torch import nn
from torch import optim
# A Toy Dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True)
target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True)
# A Toy Model
model = nn.Linear(2,1)
def train():
# Training Logic
opt = optim.SGD(params=model.parameters(),lr=0.1)
for iter in range(20):
# 1) erase previous gradients (if they exist)
opt.zero_grad()
# 2) make a prediction
pred = model(data)
# 3) calculate how much we missed
loss = ((pred - target)**2).sum()
# 4) figure out which weights caused us to miss
loss.backward()
# 5) change those weights
opt.step()
# 6) print our progress
print(loss.data)
train()
###Output
_____no_output_____
###Markdown
And there you have it! We have trained a basic model in the conventional way. All of our data is aggregated on our local machine, and we can use it to update our model. Federated Learning, however, does not work this way. So, let's modify this example to do it the Federated Learning way! What we need: - create a couple of workers - get pointers to the training data on each worker - update the training logic for federated learning. New training steps: - send the model to the correct worker - train on the data located there - get the model back and repeat with the next worker
###Code
import syft as sy
hook = sy.TorchHook(torch)
# create a couple workers
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
# A Toy Dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True)
target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True)
# get pointers to training data on each worker by
# sending some training data to bob and alice
data_bob = data[0:2]
target_bob = target[0:2]
data_alice = data[2:]
target_alice = target[2:]
# Initialize A Toy Model
model = nn.Linear(2,1)
data_bob = data_bob.send(bob)
data_alice = data_alice.send(alice)
target_bob = target_bob.send(bob)
target_alice = target_alice.send(alice)
# organize pointers into a list
datasets = [(data_bob,target_bob),(data_alice,target_alice)]
opt = optim.SGD(params=model.parameters(),lr=0.1)
def train():
# Training Logic
opt = optim.SGD(params=model.parameters(),lr=0.1)
for iter in range(10):
# NEW) iterate through each worker's dataset
for data,target in datasets:
# NEW) send model to correct worker
model.send(data.location)
# 1) erase previous gradients (if they exist)
opt.zero_grad()
# 2) make a prediction
pred = model(data)
# 3) calculate how much we missed
loss = ((pred - target)**2).sum()
# 4) figure out which weights caused us to miss
loss.backward()
# 5) change those weights
opt.step()
# NEW) get model (with gradients)
model.get()
# 6) print our progress
        print(loss.get()) # NEW) slight edit... need to call .get() on loss
# federated averaging
train()
###Output
_____no_output_____ |
Reducer/stats_by_group.ipynb | ###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create Map Next it's time to create a map. Here we create Earth Engine objects: a `FeatureCollection` of US census blocks and a grouped reducer over it.
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
###Output
{'groups': [{'state-code': '01', 'sum': [4779736, 2171853]}, {'state-code': '02', 'sum': [710231, 306967]}, {'state-code': '04', 'sum': [6392017, 2844526]}, {'state-code': '05', 'sum': [2915918, 1316299]}, {'state-code': '06', 'sum': [37253956, 13680081]}, {'state-code': '08', 'sum': [5029196, 2212898]}, {'state-code': '09', 'sum': [3574097, 1487891]}, {'state-code': '10', 'sum': [897934, 405885]}, {'state-code': '11', 'sum': [601723, 296719]}, {'state-code': '12', 'sum': [18801310, 8989580]}, {'state-code': '13', 'sum': [9687653, 4088801]}, {'state-code': '15', 'sum': [1360301, 519508]}, {'state-code': '16', 'sum': [1567582, 667796]}, {'state-code': '17', 'sum': [12830632, 5296715]}, {'state-code': '18', 'sum': [6483802, 2795541]}, {'state-code': '19', 'sum': [3046355, 1336417]}, {'state-code': '20', 'sum': [2853118, 1233215]}, {'state-code': '21', 'sum': [4339367, 1927164]}, {'state-code': '22', 'sum': [4533372, 1964981]}, {'state-code': '23', 'sum': [1328361, 721830]}, {'state-code': '24', 'sum': [5773552, 2378814]}, {'state-code': '25', 'sum': [6547629, 2808254]}, {'state-code': '26', 'sum': [9883640, 4532233]}, {'state-code': '27', 'sum': [5303925, 2347201]}, {'state-code': '28', 'sum': [2967297, 1274719]}, {'state-code': '29', 'sum': [5988927, 2712729]}, {'state-code': '30', 'sum': [989415, 482825]}, {'state-code': '31', 'sum': [1826341, 796793]}, {'state-code': '32', 'sum': [2700551, 1173814]}, {'state-code': '33', 'sum': [1316470, 614754]}, {'state-code': '34', 'sum': [8791894, 3553562]}, {'state-code': '35', 'sum': [2059179, 901388]}, {'state-code': '36', 'sum': [19378102, 8108103]}, {'state-code': '37', 'sum': [9535483, 4327528]}, {'state-code': '38', 'sum': [672591, 317498]}, {'state-code': '39', 'sum': [11536504, 5127508]}, {'state-code': '40', 'sum': [3751351, 1664378]}, {'state-code': '41', 'sum': [3831074, 1675562]}, {'state-code': '42', 'sum': [12702379, 5567315]}, {'state-code': '44', 'sum': [1052567, 463388]}, {'state-code': '45', 'sum': [4625364, 2137683]}, {'state-code': '46', 'sum': [814180, 363438]}, {'state-code': '47', 'sum': [6346105, 2812133]}, {'state-code': '48', 'sum': [25145561, 9977436]}, {'state-code': '49', 'sum': [2763885, 979709]}, {'state-code': '50', 'sum': [625741, 322539]}, {'state-code': '51', 'sum': [8001024, 3364939]}, {'state-code': '53', 'sum': [6724540, 2885677]}, {'state-code': '54', 'sum': [1852994, 881917]}, {'state-code': '55', 'sum': [5686986, 2624358]}, {'state-code': '56', 'sum': [563626, 261868]}]}
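###Markdown
The grouped result is returned as a plain Python dictionary, so it can be post-processed locally. Below is a small illustrative sketch (an addition to this notebook, assuming pandas is available in the environment) that flattens the `groups` list shown above into a DataFrame:
###Code
# Illustrative sketch: turn the grouped sums into a pandas DataFrame.
import pandas as pd

groups = sums.getInfo()['groups']  # each item: {'state-code': ..., 'sum': [pop10, housing10]}
df_sums = pd.DataFrame([
    {'state_code': g['state-code'], 'pop10': g['sum'][0], 'housing10': g['sum'][1]}
    for g in groups
])
print(df_sums.head())
###Output
_____no_output_____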
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a collection of US census blocks.
blocks = ee.FeatureCollection('TIGER/2010/Blocks')
# Compute sums of the specified properties, grouped by state code.
sums = blocks \
.filter(ee.Filter.And(
ee.Filter.neq('pop10', {}),
ee.Filter.neq('housing10', {}))) \
.reduceColumns(**{
'selectors': ['pop10', 'housing10', 'statefp10'],
'reducer': ee.Reducer.sum().repeat(2).group(**{
'groupField': 2,
'groupName': 'state-code',
})
})
# Print the resultant Dictionary.
print(sums.getInfo())
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
profiling/plot.ipynb | ###Markdown
Imports and setup
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from plotnine import *
plt.style.use('ggplot')
np.set_printoptions(suppress=True)
pd.set_option('display.max_columns', 1000)
pd.set_option('display.max_rows', 400)
# Plotnine theme
FONT = 'Roboto'
FONT_SIZE = 12
TEXT_COLOR = '#767676'
def theme_baseline(angle=0, figsize=(15, 15)):
return (theme_bw(base_size=FONT_SIZE, base_family=FONT)
+ theme(axis_text_x = element_text(angle=angle, color=TEXT_COLOR),
figure_size = figsize,
strip_background = element_blank(),
legend_key = element_blank(),
legend_title = element_text(size=FONT_SIZE, margin={'b': 14}, weight='bold')))
###Output
_____no_output_____
###Markdown
Plot
###Code
df = pd.read_csv('stats1.csv')
df['min_y'] = df['mean_time'] - df['std_time']
df['max_y'] = df['mean_time'] + df['std_time']
(ggplot(df)
+ aes(x='size', y='mean_time', color='exp')
+ labs(x='Nós', y='Tempo (s)', color='Tipo')
+ scale_x_continuous(trans = 'log10')
+ scale_y_continuous(trans = 'log10')
# + geom_errorbar(aes(ymin='min_y', ymax='max_y'))
+ geom_line()
+ geom_point()
+ theme_bw()
+ theme(figure_size = (12, 8),
axis_title_x=element_text(margin={'t': 20}),
axis_title_y=element_text(margin={'r': 20}))
)
df = pd.read_csv('stats1.csv')
df['min_y'] = df['mean_memory'] - df['mean_memory']
df['max_y'] = df['mean_memory'] + df['mean_memory']
(ggplot(df)
+ aes(x='size', y='mean_memory', color='exp')
+ labs(x='Nós', y='Memória (MB)', color='Tipo')
+ scale_x_continuous(trans = 'log10')
+ scale_y_continuous(trans = 'log10')
# + geom_errorbar(aes(ymin='min_y', ymax='max_y'))
+ geom_line()
+ geom_point()
+ theme_bw()
+ theme(figure_size = (12, 8),
axis_title_x=element_text(margin={'t': 20}),
axis_title_y=element_text(margin={'r': 20}))
)
df = pd.read_csv('stats2.csv')
df['min_y'] = df['mean_time'] - df['std_time']
df['max_y'] = df['mean_time'] + df['std_time']
(ggplot(df)
+ aes(x='size', y='mean_time', color='exp')
+ labs(x='Quantidade', y='Tempo (s)', color='Tipo')
+ scale_x_continuous(trans = 'log10')
+ scale_y_continuous(trans = 'log10')
# + geom_errorbar(aes(ymin='min_y', ymax='max_y'))
+ geom_line()
+ geom_point()
+ theme_bw()
+ theme(figure_size = (12, 8),
axis_title_x=element_text(margin={'t': 20}),
axis_title_y=element_text(margin={'r': 20}))
)
df = pd.read_csv('stats2.csv')
df['min_y'] = df['mean_memory'] - df['mean_memory']
df['max_y'] = df['mean_memory'] + df['mean_memory']
(ggplot(df)
+ aes(x='size', y='mean_memory', color='exp')
+ labs(x='Quantidade', y='Memória (MB)', color='Tipo')
+ scale_x_continuous(trans = 'log10')
+ scale_y_continuous(trans = 'log10')
# + geom_errorbar(aes(ymin='min_y', ymax='max_y'))
+ geom_line()
+ geom_point()
+ theme_bw()
+ theme(figure_size = (12, 8),
axis_title_x=element_text(margin={'t': 20}),
axis_title_y=element_text(margin={'r': 20}))
)
###Output
_____no_output_____ |
12-reinforcement/HW12_reinforcement_learning.ipynb | ###Markdown
**Homework 12 - Reinforcement Learning** If you have any problems, e-mail us at [email protected] Preliminary work: First, we need to install all the necessary packages. One of them, gym, built by OpenAI, is a toolkit for developing Reinforcement Learning algorithms. The other packages are for visualization in Colab.
###Code
!apt update
!apt install python-opengl xvfb -y
!pip install gym[box2d]==0.18.3 pyvirtualdisplay tqdm numpy==1.19.5 torch==1.8.1
###Output
[33m
0% [Working][0m
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
[33m
0% [Connecting to archive.ubuntu.com] [1 InRelease 14.2 kB/88.7 kB 16%] [Connec[0m[33m
0% [Connecting to archive.ubuntu.com] [Connected to cloud.r-project.org (52.85.[0m[33m
0% [1 InRelease gpgv 88.7 kB] [Connecting to archive.ubuntu.com] [Connected to [0m
Hit:2 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease
[33m
0% [1 InRelease gpgv 88.7 kB] [Connecting to archive.ubuntu.com (91.189.88.142)[0m
Ign:3 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease
[33m
0% [1 InRelease gpgv 88.7 kB] [Connecting to archive.ubuntu.com (91.189.88.142)[0m
Hit:4 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease
[33m
0% [1 InRelease gpgv 88.7 kB] [Connecting to archive.ubuntu.com (91.189.88.142)[0m
Ign:5 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease
Hit:6 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release
Hit:7 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release
Hit:8 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:9 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease
Get:10 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Hit:11 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease
Get:12 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Hit:13 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease
Get:16 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,770 kB]
Fetched 3,022 kB in 3s (1,191 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
79 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-opengl is already the newest version (3.1.0+dfsg-1).
xvfb is already the newest version (2:1.19.6-1ubuntu4.9).
0 upgraded, 0 newly installed, 0 to remove and 79 not upgraded.
Requirement already satisfied: gym[box2d]==0.18.3 in /usr/local/lib/python3.7/dist-packages (0.18.3)
Requirement already satisfied: pyvirtualdisplay in /usr/local/lib/python3.7/dist-packages (2.2)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (4.62.2)
Requirement already satisfied: numpy==1.19.5 in /usr/local/lib/python3.7/dist-packages (1.19.5)
Requirement already satisfied: torch==1.8.1 in /usr/local/lib/python3.7/dist-packages (1.8.1)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.8.1) (3.7.4.3)
Requirement already satisfied: Pillow<=8.2.0 in /usr/local/lib/python3.7/dist-packages (from gym[box2d]==0.18.3) (7.1.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym[box2d]==0.18.3) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym[box2d]==0.18.3) (1.3.0)
Requirement already satisfied: pyglet<=1.5.15,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym[box2d]==0.18.3) (1.5.0)
Requirement already satisfied: box2d-py~=2.3.5 in /usr/local/lib/python3.7/dist-packages (from gym[box2d]==0.18.3) (2.3.8)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.15,>=1.4.0->gym[box2d]==0.18.3) (0.16.0)
Requirement already satisfied: EasyProcess in /usr/local/lib/python3.7/dist-packages (from pyvirtualdisplay) (0.3)
###Markdown
Next, set up the virtual display, and import all necessary packages.
###Code
%%capture
from pyvirtualdisplay import Display
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
%matplotlib inline
import matplotlib.pyplot as plt
from IPython import display
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Categorical
from tqdm.notebook import tqdm
###Output
_____no_output_____
###Markdown
Warning! Do not revise the random seed!!! Otherwise your submission on JudgeBoi will not reproduce your results!!! Make your HW results reproducible.
###Code
seed = 543 # Do not change this
def fix(env, seed):
env.seed(seed)
env.action_space.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
np.random.seed(seed)
random.seed(seed)
torch.set_deterministic(True)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Lastly, call gym and build a [Lunar Lander](https://gym.openai.com/envs/LunarLander-v2/) environment.
###Code
%%capture
import gym
import random
env = gym.make('LunarLander-v2')
fix(env, seed) # fix the environment Do not revise this !!!
###Output
_____no_output_____
###Markdown
What is Lunar Lander? "LunarLander-v2" simulates the situation in which a craft lands on the surface of the moon. The task is to make the craft land "safely" on the pad between the two yellow flags.> Landing pad is always at coordinates (0,0).> Coordinates are the first two numbers in state vector. "LunarLander-v2" actually includes an "Agent" and an "Environment". In this homework, we will utilize the function `step()` to control the actions of the "Agent". `step()` will then return the observation/state and reward given by the "Environment". Observation / State First, we can take a look at what an Observation / State looks like.
###Code
print(env.observation_space)
###Output
Box(-inf, inf, (8,), float32)
###Markdown
`Box(8,)` means that the observation is an 8-dimensional vector. Action The actions that can be taken by the agent look like this:
###Code
print(env.action_space)
###Output
Discrete(4)
###Markdown
`Discrete(4)` implies that there are four kinds of actions the agent can take.- 0 implies the agent will not take any action- 2 implies the agent will accelerate downward- 1, 3 imply the agent will accelerate left and right. Next, we will try to make the agent interact with the environment. Before taking any actions, we recommend calling the `reset()` function to reset the environment. This function also returns the initial state of the environment.
###Code
initial_state = env.reset()
print(initial_state)
###Output
[ 0.00396109 1.4083536 0.40119505 -0.11407257 -0.00458307 -0.09087662
0. 0. ]
###Markdown
Then, we try to get a random action from the agent's action space.
###Code
random_action = env.action_space.sample()
print(random_action)
###Output
0
###Markdown
Moreover, we can utilize `step()` to make the agent act according to the randomly-selected `random_action`. The `step()` function will return four values:- observation / state- reward- done (True/False)- other information
###Code
observation, reward, done, info = env.step(random_action)
print(done)
###Output
False
###Markdown
Reward> Landing pad is always at coordinates (0,0). Coordinates are the first two numbers in state vector. Reward for moving from the top of the screen to landing pad and zero speed is about 100..140 points. If lander moves away from landing pad it loses reward back. Episode finishes if the lander crashes or comes to rest, receiving additional -100 or +100 points. Each leg ground contact is +10. Firing main engine is -0.3 points each frame. Solved is 200 points.
###Code
print(reward)
###Output
-0.8588900517154912
###Markdown
Random Agent Finally, before we start training, we can check whether a random agent can successfully land on the moon or not.
###Code
env.reset()
img = plt.imshow(env.render(mode='rgb_array'))
done = False
while not done:
action = env.action_space.sample()
observation, reward, done, _ = env.step(action)
img.set_data(env.render(mode='rgb_array'))
display.display(plt.gcf())
display.clear_output(wait=True)
###Output
_____no_output_____
###Markdown
Policy Gradient Now, we can build a simple policy network. The network will return one of the actions in the action space.
###Code
class PolicyGradientNetwork(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(8, 16)
self.fc2 = nn.Linear(16, 16)
self.fc3 = nn.Linear(16, 4)
def forward(self, state):
hid = torch.tanh(self.fc1(state))
hid = torch.tanh(self.fc2(hid))
return F.softmax(self.fc3(hid), dim=-1)
###Output
_____no_output_____
###Markdown
Then, we need to build a simple agent. The agent acts according to the output of the policy network above. There are a few things the agent can do:- `learn()`: update the policy network from log probabilities and rewards.- `sample()`: after receiving an observation from the environment, use the policy network to decide which action to take. The return values of this function include the action and its log probability.
###Code
from torch.optim.lr_scheduler import StepLR
class PolicyGradientAgent():
def __init__(self, network):
self.network = network
self.optimizer = optim.SGD(self.network.parameters(), lr=0.001)
def forward(self, state):
return self.network(state)
def learn(self, log_probs, rewards):
loss = (-log_probs * rewards).sum() # You don't need to revise this to pass simple baseline (but you can)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
def sample(self, state):
action_prob = self.network(torch.FloatTensor(state))
action_dist = Categorical(action_prob)
action = action_dist.sample()
log_prob = action_dist.log_prob(action)
return action.item(), log_prob
class ActorCriticNetwork(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(8, 16)
self.fc2 = nn.Linear(16, 16)
self.fc3 = nn.Linear(16, 1)
def forward(self, state):
hid = torch.tanh(self.fc1(state))
hid = torch.tanh(self.fc2(hid))
return self.fc3(hid)
class ActorCritic():
def __init__(self, network):
self.network = network
self.optimizer = optim.SGD(self.network.parameters(), lr=0.001)
def forward(self, state):
return self.network(torch.FloatTensor(state))
def learn(self, vs_now, vs_next, reward):
loss = vs_next + reward - vs_now
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
return loss.item()
###Output
_____no_output_____
###Markdown
Lastly, build a network and agent to start training.
###Code
network = PolicyGradientNetwork()
agent = PolicyGradientAgent(network)
critic_network = ActorCriticNetwork()
critic = ActorCritic(critic_network)
###Output
_____no_output_____
###Markdown
Training the Agent Now let's start to train our agent. By taking all the interactions between the agent and the environment as training data, the policy network can learn from all of these attempts.
###Code
agent.network.train() # Switch network into training mode
critic.network.train()
EPISODE_PER_BATCH = 5 # update the agent every 5 episode
NUM_BATCH = 400 # totally update the agent for 400 time
avg_total_rewards, avg_final_rewards = [], []
prg_bar = tqdm(range(NUM_BATCH))
for batch in prg_bar:
log_probs, rewards = [], []
total_rewards, final_rewards = [], []
# collect trajectory
for episode in range(EPISODE_PER_BATCH):
state = env.reset()
total_reward, total_step = 0, 0
seq_rewards = []
while True:
action, log_prob = agent.sample(state) # at, log(at|st)
vs_now = critic.forward(state)
next_state, reward, done, _ = env.step(action)
vs_next = critic.forward(next_state)
TD_error = critic.learn(vs_now, vs_next, reward)
log_probs.append(log_prob) # [log(a1|s1), log(a2|s2), ...., log(at|st)]
seq_rewards.append(reward)
state = next_state
total_reward += reward
total_step += 1
rewards.append(TD_error) # change here
if done:
# gamma = 0.99
# for i in range(len(seq_rewards) - 2, -1, -1):
# seq_rewards[i] = seq_rewards[i] + seq_rewards[i + 1] * gamma
# rewards.extend(seq_rewards)
final_rewards.append(reward)
total_rewards.append(total_reward)
break
print(f"rewards looks like ", np.shape(rewards))
print(f"log_probs looks like ", np.shape(log_probs))
# record training process
avg_total_reward = sum(total_rewards) / len(total_rewards)
avg_final_reward = sum(final_rewards) / len(final_rewards)
avg_total_rewards.append(avg_total_reward)
avg_final_rewards.append(avg_final_reward)
prg_bar.set_description(f"Total: {avg_total_reward: 4.1f}, Final: {avg_final_reward: 4.1f}")
# update agent
# rewards = np.concatenate(rewards, axis=0)
rewards = (rewards - np.mean(rewards)) / (np.std(rewards) + 1e-9) # normalize the reward
agent.learn(torch.stack(log_probs), torch.from_numpy(rewards))
print("logs prob looks like ", torch.stack(log_probs).size())
print("torch.from_numpy(rewards) looks like ", torch.from_numpy(rewards).size())
###Output
_____no_output_____
###Markdown
Training Result During the training process, we recorded `avg_total_reward`, which represents the average total reward of the episodes collected before each policy network update. Theoretically, if the agent becomes better, `avg_total_reward` will increase. The visualization of the training process is shown below:
###Code
plt.plot(avg_total_rewards)
plt.title("Total Rewards")
plt.show()
###Output
_____no_output_____
###Markdown
In addition, `avg_final_reward` represents the average final reward of the episodes. To be specific, the final reward is the last reward received in an episode, indicating whether the craft landed successfully or not.
###Code
plt.plot(avg_final_rewards)
plt.title("Final Rewards")
plt.show()
###Output
_____no_output_____
###Markdown
Testing The testing result will be the average reward over 5 test runs.
###Code
fix(env, seed)
agent.network.eval() # set the network into evaluation mode
NUM_OF_TEST = 5 # Do not revise this !!!
test_total_reward = []
action_list = []
for i in range(NUM_OF_TEST):
actions = []
state = env.reset()
img = plt.imshow(env.render(mode='rgb_array'))
total_reward = 0
done = False
while not done:
action, _ = agent.sample(state)
actions.append(action)
state, reward, done, _ = env.step(action)
total_reward += reward
# img.set_data(env.render(mode='rgb_array'))
# display.display(plt.gcf())
# display.clear_output(wait=True)
print(total_reward)
test_total_reward.append(total_reward)
action_list.append(actions) # save the result of testing
print(np.mean(test_total_reward))
###Output
-22.57047264360345
###Markdown
Action list
###Code
print("Action list looks like ", action_list)
print("Action list's shape looks like ", np.shape(action_list))
###Output
Action list looks like [[1, 2, 2, 1, 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 2, 1, 2, 1, 2, 2, 1, 1, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 3, 3, 2, 2, 3, 2, 3, 2, 3, 2, 2, 3, 3, 3, 3, 2, 3, 2, 2, 3, 3, 3, 3, 3, 2, 3, 2, 2, 3, 3, 2, 2, 2, 3, 3, 2, 2, 3, 3, 2, 2, 2, 2, 3, 2, 2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 2, 2, 2, 3, 2, 2, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 3, 2, 2, 3, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 2, 3, 2, 3, 2, 3, 2, 2, 2, 3, 3, 3, 2, 2, 3, 3, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 0, 1, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 0, 0, 3, 0, 3, 0, 0, 0, 3, 0, 0, 0, 0, 3, 0, 0, 3, 3, 0, 0, 3, 0, 0, 0, 0, 0, 3, 0, 0, 3, 3, 3, 0, 3, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 3, 0, 0, 0, 0, 3, 0, 0, 0, 0, 3, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0, 0, 0, 3, 0, 3, 3, 3, 0, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 0, 0, 0, 3, 0, 3, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 3, 0, 3, 0, 3, 0, 3, 0, 3, 3, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 3, 0, 0, 3, 3, 0, 0, 0, 3, 3, 0, 0, 0, 0, 3, 3, 0, 0, 3, 0, 3, 3, 0, 3, 3, 0, 0, 0, 3, 0, 0, 3, 0, 0, 0, 0, 3, 3, 0, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 3, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 3, 3, 3, 3, 3, 3, 0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0, 3, 0, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 0, 0, 0, 3, 0, 0, 0, 3, 0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 3, 0, 0, 3, 3, 3, 3, 3, 3, 0, 0, 0, 0, 3, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 3, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0, 3, 3, 3, 0, 0, 0, 3, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 3, 0, 0, 3, 3, 0, 3, 0, 3, 0, 0, 0, 0], [3, 3, 3, 3, 2, 2, 2, 3, 2, 3, 2, 3, 2, 2, 2, 3, 2, 3, 2, 2, 3, 2, 2, 3, 2, 3, 2, 2, 2, 2, 2, 2, 3, 2, 2, 3, 2, 2, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 2, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 2, 2, 2, 2, 3, 3, 3, 2, 3, 2, 3, 3, 2, 3, 3, 3, 2, 2, 3, 2, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 2, 2, 2, 3, 3, 2, 2, 3, 2, 3, 3, 3, 2, 3, 2, 3, 3, 2, 3, 3, 2, 3, 2, 2, 2, 3, 2, 2, 3, 3, 2, 2, 2, 2, 3, 2, 3, 2, 2, 2, 3, 2, 3, 2, 2, 2, 2, 3, 3, 2, 2, 2, 2, 1, 2, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 2, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 2, 2, 3, 2, 3, 3, 2, 2, 2, 2, 3, 2, 2], [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 2, 0, 1, 2, 2, 0, 2, 3, 3, 2, 2, 2, 3, 2, 0, 2, 3, 2, 0, 2, 2, 3, 2, 0, 3, 2, 2, 2, 3, 2, 2, 2, 3, 2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 2, 0, 2, 0, 2, 3, 2, 2, 2, 2, 1, 2, 1, 2, 1, 2, 2, 1, 1, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 3, 2, 3, 3, 2, 3, 3, 2, 2, 3, 3, 3, 2, 3, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 2, 3, 3, 3, 2, 2, 2, 3, 2, 2, 3, 2, 3, 2, 2, 3, 2, 2, 3, 2, 3, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 3, 3, 2, 2, 2, 2, 2, 2, 1, 2, 1, 2, 1, 0, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 2, 1, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 2, 3, 2, 3, 3, 3, 3, 2, 3, 3, 2, 3, 3, 3, 2, 2, 2, 3, 3, 3, 2, 2, 3, 3, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2], [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 2, 1, 2, 2, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 3, 3, 2, 2, 3, 2, 3, 2, 2, 3, 3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 2, 2, 3, 3, 2, 3, 2, 3, 2, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 2, 2, 2, 3, 2, 2, 3, 2, 3, 2, 2, 3, 2, 2, 2, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 2, 1, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 2, 3, 2, 3, 3, 3, 2, 2, 3, 3, 3, 3, 2, 3, 3, 2, 2, 3, 2, 3, 3, 3, 3]]
Action list's shape looks like (5,)
###Markdown
Analysis of actions taken by agent
###Code
distribution = {}
for actions in action_list:
for action in actions:
if action not in distribution.keys():
distribution[action] = 1
else:
distribution[action] += 1
print(distribution)
###Output
{1: 329, 2: 741, 3: 469, 0: 555}
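###Markdown
A quick way to inspect the distribution above is to plot it as a bar chart. This is a small illustrative addition that reuses the `matplotlib.pyplot` import already made in this notebook:
###Code
# Plot how often each action was chosen across the 5 test episodes.
actions_sorted = sorted(distribution.keys())
plt.bar([str(a) for a in actions_sorted], [distribution[a] for a in actions_sorted])
plt.xlabel("Action")
plt.ylabel("Count")
plt.title("Action distribution over test episodes")
plt.show()
###Output
_____no_output_____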
###Markdown
Saving the result of Model Testing
###Code
PATH = "Action_List.npy" # Can be modified into the name or path you want
np.save(PATH ,np.array(action_list))
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
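###Markdown
The warning above appears because the five action sequences have different lengths, so NumPy cannot build a rectangular array from them. A hedged sketch of the fix, reusing the same `PATH` and `action_list` as above: pass `dtype=object` explicitly so the ragged list is stored as an object array (it is loaded back later with `allow_pickle=True`, as already done below).
###Code
# Equivalent save without the VisibleDeprecationWarning
np.save(PATH, np.array(action_list, dtype=object))
###Output
_____no_output_____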
###Markdown
This is the file you need to submit !!! Download the testing result to your device.
###Code
from google.colab import files
files.download(PATH)
###Output
_____no_output_____
###Markdown
Server The code below simulates the environment on the judge server. It can be used for testing.
###Code
action_list = np.load(PATH,allow_pickle=True) # The action list you upload
seed = 543 # Do not revise this
fix(env, seed)
agent.network.eval() # set network to evaluation mode
test_total_reward = []
if len(action_list) != 5:
print("Wrong format of file !!!")
exit(0)
for actions in action_list:
state = env.reset()
img = plt.imshow(env.render(mode='rgb_array'))
total_reward = 0
done = False
for action in actions:
state, reward, done, _ = env.step(action)
total_reward += reward
if done:
break
print(f"Your reward is : %.2f"%total_reward)
test_total_reward.append(total_reward)
###Output
/usr/local/lib/python3.7/dist-packages/torch/__init__.py:422: UserWarning: torch.set_deterministic is deprecated and will be removed in a future release. Please use torch.use_deterministic_algorithms instead
"torch.set_deterministic is deprecated and will be removed in a future "
###Markdown
Your score
###Code
print(f"Your final reward is : %.2f"%np.mean(test_total_reward))
###Output
Your final reward is : -22.57
|
examples/ktr.ipynb | ###Markdown
KTR Example
###Code
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
pd.set_option('display.float_format', lambda x: '%.5f' % x)
import matplotlib
import matplotlib.pyplot as plt
import orbit
from orbit.models import KTRLite, KTR
from orbit.utils.features import make_fourier_series_df, make_fourier_series
from orbit.diagnostics.plot import plot_predicted_data, plot_predicted_components
from orbit.diagnostics.metrics import smape
from orbit.utils.dataset import load_iclaims, load_electricity_demand
orbit.__version__
###Output
_____no_output_____
###Markdown
Data
###Code
df = load_iclaims()
DATE_COL = 'week'
RESPONSE_COL = 'claims'
print(df.shape)
df.head()
print(f'starts with {df[DATE_COL].min()}\nends with {df[DATE_COL].max()}\nshape: {df.shape}')
test_size = 52
train_df = df[:-test_size]
test_df = df[-test_size:]
###Output
_____no_output_____
###Markdown
KTR KTR - Full default zero regression_segments
###Code
ktr = KTR(
date_col=DATE_COL,
response_col=RESPONSE_COL,
# regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
regressor_col=['trend.unemploy'],
seasonality=[52],
seasonality_fs_order=[3],
level_knot_scale=.1,
level_segments=10,
regression_segments=0,
regression_rho=0.15,
# pyro optimization parameters
seed=8888,
num_steps=1000,
num_sample=1000,
learning_rate=0.1,
estimator='pyro-svi',
n_bootstrap_draws=-1,
ktrlite_optim_args = dict()
)
ktr.fit(train_df)
coef_df = ktr.get_regression_coefs()
coef_df
knot_df = ktr.get_regression_coef_knots()
knot_df
ktr.get_regression_coefs().head()
ktr.get_regression_coef_knots()
ktr.plot_lev_knots(figsize=(16, 8));
ktr.plot_regression_coefs(with_knot=True, include_ci=False, figsize=(16, 8));
predicted_df = ktr.predict(df=test_df, decompose=True)
predicted_df.head()
f"SMAPE: {smape(predicted_df['prediction'].values, test_df[RESPONSE_COL].values):.2%}"
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col=DATE_COL,
actual_col=RESPONSE_COL,
test_actual_df=test_df)
###Output
_____no_output_____
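###Markdown
SMAPE is scale-free; for completeness, the absolute-scale errors can be computed from the same arrays. A small optional sketch that only reuses `predicted_df` and `test_df` from above:
###Code
abs_err = np.abs(predicted_df['prediction'].values - test_df[RESPONSE_COL].values)
print(f"MAE: {abs_err.mean():.4f}, RMSE: {np.sqrt((abs_err ** 2).mean()):.4f}")
###Output
_____no_output_____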
###Markdown
multiple regression_segments Change the `regression_segments=0` argument to `regression_segments=5`.
###Code
ktr = KTR(
date_col=DATE_COL,
response_col=RESPONSE_COL,
regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
seasonality=[52],
seasonality_fs_order=[3],
level_knot_scale=.1,
level_segments=10,
regression_segments=5,
regression_rho=0.15,
# pyro optimization parameters
seed=8888,
num_steps=1000,
num_sample=1000,
learning_rate=0.1,
estimator='pyro-svi',
n_bootstrap_draws=-1,
ktrlite_optim_args = dict()
)
ktr.fit(train_df)
ktr.get_regression_coefs().head()
ktr.get_regression_coef_knots()
ktr.plot_lev_knots(figsize=(16, 8), use_orbit_style=False);
ktr.plot_regression_coefs(with_knot=True, figsize=(10, 5), include_ci=False);
predicted_df = ktr.predict(df=test_df, decompose=True)
predicted_df.head()
f"SMAPE: {smape(predicted_df['prediction'].values, test_df[RESPONSE_COL].values):.2%}"
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col=DATE_COL,
actual_col=RESPONSE_COL,
test_actual_df=test_df)
knot_df = ktr.get_regression_coef_knots()
knot_df
###Output
_____no_output_____
###Markdown
KTR - Median
###Code
ktr = KTR(
date_col=DATE_COL,
response_col=RESPONSE_COL,
regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
seasonality=[52],
seasonality_fs_order=[3],
level_knot_scale=.1,
level_segments=10,
seasonality_segments=2,
regression_segments=5,
regression_rho=0.15,
# pyro optimization parameters
seed=8888,
num_steps=1000,
num_sample=1000,
learning_rate=0.1,
estimator='pyro-svi',
n_bootstrap_draws=-1
)
ktr.fit(df=train_df, point_method='median')
ktr.get_regression_coefs().head()
predicted_df = ktr.predict(df=test_df, decompose=True)
predicted_df.tail()
f"SMAPE: {smape(predicted_df['prediction'].values, test_df[RESPONSE_COL].values):.2%}"
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col=DATE_COL,
actual_col=RESPONSE_COL,
test_actual_df=test_df)
###Output
INFO:root:Guessed max_plate_nesting = 1
###Markdown
Electricity data (dual seasonality, no regressor)
###Code
# from 2000-01-01 to 2008-12-31
df = load_electricity_demand()
df['electricity'] = np.log(df['electricity'])
DATE_COL = 'date'
RESPONSE_COL = 'electricity'
print(df.shape)
df.head()
test_size = 365
train_df = df[:-test_size]
test_df = df[-test_size:]
ktr = KTR(
date_col=DATE_COL,
response_col=RESPONSE_COL,
seasonality=[7, 365.25],
seasonality_fs_order=[2, 5],
level_knot_scale=.1,
level_segments=20,
seasonality_segments=3,
regression_segments=5,
regression_rho=0.15,
# pyro optimization parameters
seed=8888,
num_steps=1000,
num_sample=1000,
learning_rate=0.1,
estimator='pyro-svi',
n_bootstrap_draws=-1
)
ktr.fit(df=train_df, point_method='median')
predicted_df = ktr.predict(df=test_df, decompose=True)
predicted_df.tail()
f"SMAPE: {smape(predicted_df['prediction'].values, test_df[RESPONSE_COL].values):.2%}"
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col=DATE_COL,
actual_col=RESPONSE_COL,
test_actual_df=test_df,
lw=0.5)
###Output
_____no_output_____
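###Markdown
Because `electricity` was log-transformed before fitting, the predictions above are on the log scale. A minimal sketch of mapping them back to the original units, assuming only the `prediction` column shown above:
###Code
predicted_original_scale = np.exp(predicted_df['prediction'])
predicted_original_scale.tail()
###Output
_____no_output_____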
###Markdown
KTR Example
###Code
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
pd.set_option('display.float_format', lambda x: '%.5f' % x)
import matplotlib
import matplotlib.pyplot as plt
import orbit
from orbit.models import KTRLite, KTR
from orbit.utils.features import make_fourier_series_df, make_fourier_series
from orbit.diagnostics.plot import plot_predicted_data, plot_predicted_components
from orbit.diagnostics.metrics import smape
from orbit.utils.dataset import load_iclaims, load_electricity_demand
orbit.__version__
###Output
_____no_output_____
###Markdown
Data
###Code
df = load_iclaims()
DATE_COL = 'week'
RESPONSE_COL = 'claims'
print(df.shape)
df.head()
print(f'starts with {df[DATE_COL].min()}\nends with {df[DATE_COL].max()}\nshape: {df.shape}')
test_size = 52
train_df = df[:-test_size]
test_df = df[-test_size:]
###Output
_____no_output_____
###Markdown
KTR KTR - Full zero regression_segments
###Code
ktr = KTR(
date_col=DATE_COL,
response_col=RESPONSE_COL,
# regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
regressor_col=['trend.unemploy'],
seasonality=[52],
seasonality_fs_order=[3],
level_knot_scale=.1,
level_segments=10,
regression_segments=0,
regression_rho=0.15,
# pyro optimization parameters
seed=8888,
num_steps=1000,
num_sample=1000,
learning_rate=0.1,
estimator='pyro-svi',
n_bootstrap_draws=-1,
ktrlite_optim_args = dict()
)
ktr.fit(train_df)
coef_df = ktr.get_regression_coefs()
coef_df
knot_df = ktr.get_regression_coef_knots()
knot_df
ktr.get_regression_coefs().head()
ktr.get_regression_coef_knots()
ktr.plot_lev_knots(figsize=(16, 8));
ktr.plot_regression_coefs(with_knot=True, include_ci=False, figsize=(16, 8));
predicted_df = ktr.predict(df=test_df, decompose=True)
predicted_df.head()
f"SMAPE: {smape(predicted_df['prediction'].values, test_df[RESPONSE_COL].values):.2%}"
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col=DATE_COL,
actual_col=RESPONSE_COL,
test_actual_df=test_df)
###Output
_____no_output_____
###Markdown
multiple regression_segments Change the `regression_segments=0` argument to `regression_segments=5`.
###Code
ktr = KTR(
date_col=DATE_COL,
response_col=RESPONSE_COL,
# regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
regressor_col=['trend.unemploy'],
seasonality=[52],
seasonality_fs_order=[3],
level_knot_scale=.1,
level_segments=10,
regression_segments=5,
regression_rho=0.15,
# pyro optimization parameters
seed=8888,
num_steps=1000,
num_sample=1000,
learning_rate=0.1,
estimator='pyro-svi',
n_bootstrap_draws=-1,
ktrlite_optim_args = dict()
)
ktr.fit(train_df)
ktr.get_regression_coefs().head()
ktr.get_regression_coef_knots()
ktr.plot_lev_knots(figsize=(16, 8), use_orbit_style=False);
ktr.plot_regression_coefs(with_knot=True, figsize=(10, 5), include_ci=False);
predicted_df = ktr.predict(df=test_df, decompose=True)
predicted_df.head()
f"SMAPE: {smape(predicted_df['prediction'].values, test_df[RESPONSE_COL].values):.2%}"
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col=DATE_COL,
actual_col=RESPONSE_COL,
test_actual_df=test_df)
knot_df = ktr.get_regression_coef_knots()
knot_df
###Output
_____no_output_____
###Markdown
KTR - Median
###Code
ktr = KTR(
date_col=DATE_COL,
response_col=RESPONSE_COL,
regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
seasonality=[52],
seasonality_fs_order=[3],
level_knot_scale=.1,
level_segments=10,
seasonality_segments=2,
regression_segments=5,
regression_rho=0.15,
# pyro optimization parameters
seed=8888,
num_steps=1000,
num_sample=1000,
learning_rate=0.1,
estimator='pyro-svi',
n_bootstrap_draws=-1
)
ktr.fit(df=train_df, point_method='median')
ktr.get_regression_coefs().head()
predicted_df = ktr.predict(df=test_df, decompose=True)
predicted_df.tail()
f"SMAPE: {smape(predicted_df['prediction'].values, test_df[RESPONSE_COL].values):.2%}"
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col=DATE_COL,
actual_col=RESPONSE_COL,
test_actual_df=test_df)
###Output
INFO:root:Guessed max_plate_nesting = 1
###Markdown
Electricity data (dual seasonality, no regressor)
###Code
# from 2000-01-01 to 2008-12-31
df = load_electricity_demand()
df['electricity'] = np.log(df['electricity'])
DATE_COL = 'date'
RESPONSE_COL = 'electricity'
print(df.shape)
df.head()
test_size = 365
train_df = df[:-test_size]
test_df = df[-test_size:]
ktr = KTR(
date_col=DATE_COL,
response_col=RESPONSE_COL,
seasonality=[7, 365.25],
seasonality_fs_order=[2, 5],
level_knot_scale=.1,
level_segments=20,
seasonality_segments=3,
regression_segments=5,
regression_rho=0.15,
# pyro optimization parameters
seed=8888,
num_steps=1000,
num_sample=1000,
learning_rate=0.1,
estimator='pyro-svi',
n_bootstrap_draws=-1
)
ktr.fit(df=train_df, point_method='median')
predicted_df = ktr.predict(df=test_df, decompose=True)
predicted_df.tail()
f"SMAPE: {smape(predicted_df['prediction'].values, test_df[RESPONSE_COL].values):.2%}"
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col=DATE_COL,
actual_col=RESPONSE_COL,
test_actual_df=test_df,
lw=0.5)
###Output
_____no_output_____ |
1.Longitudinal and Lateral Control/Longitidunal_Vehicle_Model/Longitidunal Vehicle Dynamic Modeling.ipynb | ###Markdown
###Code
#Lets import the libraries
import sys
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
#Lets create the Vehicle class and define its attributes and methods
class Vehicle():
def __init__(self):
# ==================================
# Parameters
# ==================================
#Throttle to engine torque
self.a_0 = 400
self.a_1 = 0.1
self.a_2 = -0.0002
# Gear ratio, effective radius, mass + inertia
self.GR = 0.35
self.r_e = 0.3
self.J_e = 10
self.m = 2000
self.g = 9.81
# Aerodynamic and friction coefficients
self.c_a = 1.36
self.c_r1 = 0.01
# Tire force
self.c = 10000
self.F_max = 10000
# State variables
self.x = 0
self.v = 5
self.a = 0
self.w_e = 100
self.w_e_dot = 0
self.sample_time = 0.01
def reset(self):
# reset state variables
self.x = 0
self.v = 5
self.a = 0
self.w_e = 100
self.w_e_dot = 0
#Let's create the step method and apply the longitudinal dynamics formulas to move the car.
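# Summary of the longitudinal model implemented in step() below (a sketch that mirrors the code, not an extra feature):
#   T_e = throttle * (a_0 + a_1*w_e + a_2*w_e**2)     engine torque map
#   F_aero = c_a * v**2                               aerodynamic drag
#   R_x = c_r1 * v                                    rolling resistance (linearized)
#   F_g = m * g * sin(alpha)                          gravity along the incline
#   F_load = F_aero + R_x + F_g                       total resistive load
#   J_e * w_e_dot = T_e - GR * r_e * F_load           engine/driveline dynamics
#   s = (GR * w_e * r_e - v) / v                      tire slip ratio
#   F_x = c*s if |s| < 1 else F_max                   tire force
#   m * a = F_x - F_load                              vehicle acceleration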
class Vehicle(Vehicle):
def step(self, throttle, alpha):
Te = throttle *(self.a_0 + self.a_1*self.w_e + self.a_2*self.w_e**2)
F_aero = self.c_a*self.v**2
Rx = self.c_r1*self.v
Fg = self.m*self.g*np.sin(alpha)
F_load = F_aero + Rx + Fg
self.w_e_dot = (Te - (self.GR)*(self.r_e*F_load))/self.J_e
ww = (self.GR)*self.w_e
s = (ww*self.r_e - self.v) / self.v
if abs(s) < 1:
Fx = self.c * s
else:
Fx = self.F_max
self.a = (Fx - F_load) / self.m
self.v = self.v + self.a*self.sample_time
self.x = self.x + self.v*self.sample_time - (0.5*self.a*self.sample_time**2)
self.w_e = self.w_e + self.w_e_dot*self.sample_time
pass
#Type 1: Move the car with a constant throttle, without using the alpha value
sample_time = 0.01
time_end = 100
model = Vehicle()
t_data = np.arange(0,time_end,sample_time)
v_data = np.zeros_like(t_data)
# throttle percentage between 0 and 1
throttle = 0.2
# incline angle (in radians)
alpha = 0
for i in range(len(t_data)):
v_data[i] = model.v
model.step(throttle, alpha)
plt.plot(t_data, v_data)
plt.show()
#Type 2: Move the car using the alpha value, decreasing or increasing the throttle according to alpha, on this path
###Output
_____no_output_____
###Markdown
###Code
time_end = 20
t_data = np.arange(0,time_end,sample_time)
x_data = np.zeros_like(t_data)
alpha = np.zeros_like(t_data)
throttle = np.zeros_like(t_data)
a_data = np.zeros_like(t_data)
# reset the states
model.reset()
def alphaa(i,alpha,x):
if x < 60:
alpha[i] = np.arctan(3/60)
elif x < 150:
alpha[i] = np.arctan(9/90)
else:
alpha[i] = 0
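# The alpha profile above encodes the test track (distances in the same units as model.x):
# a slope of arctan(3/60) for x < 60, a steeper slope of arctan(9/90) for x < 150, and flat ground afterwards.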
for i in range(len(t_data)):
if t_data[i] < 5 :
throttle[i] = 0.2 + ((0.5-0.2)/5)*t_data[i]
alphaa(i,alpha,model.x)
elif t_data[i] < 15:
throttle[i] = 0.5
alphaa(i,alpha,model.x)
else:
throttle[i] = ((0 - 0.5)/(20 - 15))*(t_data[i] - 20)
alphaa(i,alpha,model.x)
model.step(throttle[i],alpha[i])
x_data[i] = model.x
a_data[i] = model.a
#Lets see the results: 1. graph = path , 2. graph = throttle output , 3. graph = acceleration output
plt.figure(1)
plt.subplot(311)
plt.plot(t_data, x_data)
plt.subplot(312)
plt.plot(t_data, throttle)
plt.subplot(313)
plt.plot(t_data, a_data)
###Output
_____no_output_____ |
Pertemuan 1 - Introduction/1. Logic, Control Flow and Filtering .ipynb | ###Markdown
Recalling NumPy
###Code
import numpy as np
np_tinggi = np.array([1.73, 1.68, 1.71, 1.89, 1.79]) #m
np_berat = np.array([65.4, 59.2, 63.6, 88.4, 68.7]) #kg
bmi = np_berat / np_tinggi ** 2
bmi
bmi > 23
bmi[bmi>23]
###Output
_____no_output_____
###Markdown
Numeric Comparison
###Code
2 < 3
2 == 3
2 <= 3
3 <= 3
x = 2
y = 3
x >= y
###Output
_____no_output_____
###Markdown
Other Comparisons
###Code
'dian' > 'diantemi'
3 < 'dian' # not allowed: comparing an int with a str raises a TypeError
3 < 4.1
bmi
bmi > 23
# <
# <=
# >
# >=
# ==
# !=
###Output
_____no_output_____
###Markdown
Boolean Operator
###Code
# and, or, not
True and True
False and True
True and False
False and False
x = 12
x > 5 and x < 15
True or True
False or True
True or False
False or False
y = 5
y < 7 or y > 13
not True
not False
###Output
_____no_output_____
###Markdown
Back to NumPy
###Code
bmi
bmi > 21
bmi < 22
bmi > 21 and bmi <22 # raises an error because the truth value of an array is ambiguous
np.logical_or(bmi > 21, bmi < 22) # np.logical_and / np.logical_or; using np (numpy) is recommended
np.logical_and(bmi>21,bmi<22)
bmi[np.logical_and(bmi>21,bmi<22)] # based on the intersection of both conditions
bmi[np.logical_or(bmi > 21, bmi < 22)] # based on the union of both conditions
###Output
_____no_output_____
###Markdown
if, elif, else (conditional statement)
###Code
z = 7
if z % 2 == 0 :
print("z adalah genap")
else :
print("z adalah ganjil")
z = 4
if z % 2 == 0:
print("Mengecek " + str(z))
print("z adalah genap")
z = 5 # TIDAK TEREKSEKUSI KARENA FALSE
if z % 2 == 0:
print("Mengecek " + str(z))
print("z adalah genap")
z = 5
if z % 2 == 0 :
print("z adalah genap")
else :
print("z adalah ganjil")
z = 11
if z % 2 == 0:
print("z habis dibagi 2")
elif z % 3 == 0:
print("z habis dibagi 3")
else :
print("z tidak habis dibagi 2 maupun 3")
z = 6
if z % 2 == 0:
print("z habis dibagi 2")
elif z % 3 == 0:
print("z habis dibagi 3")
else :
print("z tidak habis dibagi 2 maupun 3")
###Output
z is divisible by 2
###Markdown
Filtering pandas
###Code
#access from drive
#access data from the gdrive link
from google.colab import drive
drive.mount('/content/gdrive')
import os
os.listdir('/content/gdrive/My Drive/DSC-UTM/Bagian-2-Python-Lanjutan/')
path_data = ('/content/gdrive/My Drive/DSC-UTM/Bagian-2-Python-Lanjutan/')
import pandas as pd
df = pd.read_csv(path_data+'negara.csv', index_col=0)
df
###Output
_____no_output_____
###Markdown
- Goal: Find the countries with an area above 8 million km^2
###Code
df['area'] # alternatif df.loc[:,"area"] atau df.iloc[:,2]
df['area'] > 8
#filter data with area > 8
area_8 = df[df['area']>8]
area_8
###Output
_____no_output_____
###Markdown
Using Boolean Operators
###Code
import numpy as np
np.logical_and(df["area"] > 8, df["area"] < 10)
df[np.logical_and(df["area"] > 8, df["area"] < 10)]
###Output
_____no_output_____
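###Markdown
The same filter can be written with pandas' element-wise boolean operators, which is the more common idiom. A small equivalent sketch; the parentheses are required because `&` binds more tightly than the comparisons:
###Code
df[(df["area"] > 8) & (df["area"] < 10)]
###Output
_____no_output_____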
###Markdown
Which countries have a population of more than 1000 and an area of more than 3.5?
###Code
df[np.logical_and(df["population"]>1000,df["area"]>3.5)]
###Output
_____no_output_____
###Markdown
Find the population density and identify the 3 countries with density > 100. $density = \frac{population}{area}$
###Code
df['density'] = df['population']/df['area']
df
df['country'][df['density']>100]
###Output
_____no_output_____ |
PythonScripts/Paper1Figures/convince_me_about_transports.ipynb | ###Markdown
Where are the tracer and water going? Convince me that I can track where and when the water that builds up the blob comes onto the shelf. What happens when there is no canyon? What happens when there is a canyon? What is that steady state? Can I say the tracer keeps coming onto the shelf, or is it just that it hasn't had enough time to reach the steady state?
###Code
#import gsw as sw # Gibbs seawater package
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import matplotlib.gridspec as gspec
%matplotlib inline
from netCDF4 import Dataset
import numpy as np
import pandas as pd
import seaborn as sns
import sys
import xarray as xr
import canyon_tools.readout_tools as rout
import canyon_tools.metrics_tools as mpt
sns.set_context('paper')
sns.set_style('white')
def plotCSPos(ax,CS1,CS2,CS3,CS4):
ax.axvline(CS1,color='k',linestyle=':')
ax.axvline(CS2,color='k',linestyle=':')
ax.axvline(CS3,color='k',linestyle=':')
ax.axvline(CS4,color='k',linestyle=':')
def unstagger_xarray(qty, index):
"""Interpolate u, v, or w component values to values at grid cell centres.
Named indexing requires that input arrays are XArray DataArrays.
:arg qty: u, v, or w component values
:type qty: :py:class:`xarray.DataArray`
:arg index: index name along which to centre
(generally one of 'gridX', 'gridY', or 'depth')
:type index: str
:returns qty: u, v, or w component values at grid cell centres
:rtype: :py:class:`xarray.DataArray`
"""
qty = (qty + qty.shift(**{index: 1})) / 2
return qty
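# Hypothetical usage of unstagger_xarray (names below are illustrative only, not variables defined in this notebook):
# v_centred = unstagger_xarray(ds.VVEL, 'Yp1')  # centre a staggered V field onto tracer cells along Y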
#Exp
CGrid = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/gridGlob.nc'
CGridOut = Dataset(CGrid)
CGridNoC = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run42/gridGlob.nc'
CGridNoCOut = Dataset(CGridNoC)
Ptracers = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/ptracersGlob.nc'
PtracersOut = Dataset(Ptracers)
PtracersNoC = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run42/ptracersGlob.nc'
PtracersOutNoC = Dataset(PtracersNoC)
State = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/stateGlob.nc'
StateNoC = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run42/stateGlob.nc'
flux_file = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/FluxTR01Glob.nc'
fluxNoC_file = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run42/FluxTR01Glob.nc'
grid = xr.open_dataset(CGrid)
grid_NoC = xr.open_dataset(CGridNoC)
flux_file = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run38/FluxTR01Glob.nc'
fluxNoC_file = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run42/FluxTR01Glob.nc'
flux = xr.open_dataset(flux_file)
fluxNoC = xr.open_dataset(fluxNoC_file)
state = xr.open_dataset(State)
adv_flux_AP = (flux.ADVyTr01[7:13,:,227,:]-fluxNoC.ADVyTr01[7:13,:,227,:]).mean(dim='T')
dif_flux_AP = (flux.DFyETr01[7:13,:,227,:]-fluxNoC.DFyETr01[7:13,:,227,:]).mean(dim='T')
Flux = adv_flux_AP + dif_flux_AP
Flux_can = (flux.ADVyTr01[7:13,:,227,:]).mean(dim='T') + (flux.DFyETr01[7:13,:,227,:]).mean(dim='T')
Flux_NoC = (fluxNoC.ADVyTr01[7:13,:,227,:]).mean(dim='T') + (fluxNoC.DFyETr01[7:13,:,227,:]).mean(dim='T')
adv_fluxV_AP = (flux.ADVrTr01[7:13,30,:,:]-fluxNoC.ADVrTr01[7:13,30,:,:]).mean(dim='T')
dif_fluxV_AP = (flux.DFrITr01[7:13,30,:,:]+flux.DFrETr01[7:13,30,:,:]-
(fluxNoC.DFrITr01[7:13,30,:,:]+fluxNoC.DFrETr01[7:13,30,:,:])).mean(dim='T')
FluxV = adv_fluxV_AP + dif_fluxV_AP
FluxV_can = ((flux.ADVrTr01[7:13,30,:,:]).mean(dim='T') +
(flux.DFrITr01[7:13,30,:,:]+flux.DFrETr01[7:13,30,:,:]).mean(dim='T'))
FluxV_NoC = ((fluxNoC.ADVrTr01[7:13,30,:,:]).mean(dim='T') +
(fluxNoC.DFrITr01[7:13,30,:,:]+fluxNoC.DFrETr01[7:13,30,:,:]).mean(dim='T'))
# General input
nx = 360
ny = 360
nz = 90
nt = 19 # t dimension size
numTr = 22 # number of tracers in total (CNT =22, 3D = 4, total = 19)
rc = CGridNoCOut.variables['RC']
dxf = CGridNoCOut.variables['dxF']
xc = rout.getField(CGridNoC, 'XC') # x coords tracer cells
yc = rout.getField(CGridNoC, 'YC') # y coords tracer cells
rA = rout.getField(CGridNoC, 'rA')
drF = CGridNoCOut.variables['drF'] # vertical distance between faces
drC = CGridNoCOut.variables['drC'] # vertical distance between centers
hFacC = rout.getField(CGridNoC, 'HFacC')
mask_NoC = rout.getMask(CGridNoC, 'HFacC')
times = np.arange(0,nt,1)
#print(drC[:])
#print(np.shape(drC))
import canyon_records
import nocanyon_records
records = canyon_records.main()
recordsNoC = nocanyon_records.main()
select_rec=[0]
###Output
_____no_output_____
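###Markdown
A quick sanity check of the time-averaged flux sections built above (an optional sketch that only inspects objects already defined):
###Code
print(Flux.dims, Flux.shape)
print(FluxV.dims, FluxV.shape)
###Output
_____no_output_____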
###Markdown
Anomaly transports
###Code
plt.rcParams['font.size'] = 8.0
f = plt.figure(figsize = (7.48,4.5)) # full page
gs = gspec.GridSpec(2, 1, height_ratios=[0.8,1.3])
gs0 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[0,0],wspace=0.15)
gs1 = gspec.GridSpecFromSubplotSpec(2, 2, subplot_spec=gs[1,0],hspace=0.1,wspace=0.15,width_ratios=[1,0.6])
ax0 = plt.subplot(gs0[0,0])
ax1 = plt.subplot(gs0[0,1])
ax2 = plt.subplot(gs1[0,0],xticks=[])
ax3 = plt.subplot(gs1[1,0])
ax4 = plt.subplot(gs1[1,1])
ii=7
yind = 227
areas = (np.expand_dims(grid.dxF.isel(X=slice(60,300),Y=yind).data,0))*(np.expand_dims(grid.drF.isel(Z=slice(0,60)).data,1))
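# Dividing the transports below by these cell-face areas converts them to fluxes per unit area
# (muM m s^-1), which is what the colourbar labels refer to.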
# Full shelf ---------------------------------------------------------------------------
cnt=ax3.contourf(grid.X.isel(X=slice(60,300))/1000,grid.Z.isel(Z=slice(0,60)),
Flux.isel(Zmd000090=slice(0,60),X=slice(60,300))/areas,16,cmap='RdYlBu_r',vmax=2.5, vmin=-2.5)
ax3.contourf(grid.X.isel(X=slice(60,300))/1000,grid.Z.isel(Z=slice(0,60)),
grid.HFacC.isel(Z=slice(0,60),Y=227,X=slice(60,300)),[0,0.1])
cbar_ax = f.add_axes([0.13, 0.17, 0.17, 0.03])
cb=f.colorbar(cnt, cax=cbar_ax,orientation='horizontal',ticks=[-2,-1,0,1,2])
ax3.axhline(y=grid.Z[30], linestyle=':',color='k')
ax3.set_xlabel('Alongshore distance (km)',labelpad=0.5)
ax3.set_ylabel('Depth (m)',labelpad=0.5)
ax3.text(0.11,0.5,'$\mu$M ms$^{-1}$',transform=ax3.transAxes)
# Zoom shelf ---------------------------------------------------------------------------
cnt=ax2.contourf(grid.X.isel(X=slice(60,300))/1000,grid.Z.isel(Z=slice(0,30)),
Flux.isel(Zmd000090=slice(0,30),X=slice(60,300))/areas[:30,:],16,cmap='RdYlBu_r',vmax=0.5, vmin=-0.5)
ax2.contourf(grid.X.isel(X=slice(60,300))/1000,grid.Z.isel(Z=slice(0,30)),
grid.HFacC.isel(Z=slice(0,30),Y=227,X=slice(60,300)),[0,0.1])
cbar_ax = f.add_axes([0.585, 0.36, 0.02, 0.2])
cb=f.colorbar(cnt, cax=cbar_ax)
ax2.set_ylabel('Depth (m)',labelpad=0.5)
ax2.text(0.85,0.86,'$\mu$M ms$^{-1}$',transform=ax2.transAxes)
# Time series ---------------------------------------------------------------------------
ax0.axhline(0,color='0.8',linewidth=2)
ax1.axhline(0,color='0.8',linewidth=2)
ind = 0
file = (('/ocean/kramosmu/MITgcm/TracerExperiments/%s/%s' %(records[ind].exp_code,records[ind].run_num))+
'advTracer_CS_transports.nc')
filedif = (('/ocean/kramosmu/MITgcm/TracerExperiments/%s/%s' %(records[ind].exp_code,records[ind].run_num))+
'difTracer_CS_transports.nc')
fileNoC = (('/ocean/kramosmu/MITgcm/TracerExperiments/%s/%s' %(recordsNoC[ind].exp_code,recordsNoC[ind].run_num))+
'advTracer_CS_transports.nc')
dfcan = xr.open_dataset(file)
dfdif = xr.open_dataset(filedif)
dfnoc = xr.open_dataset(fileNoC)
vertical = (dfdif.Vert_dif_trans_sb + dfcan.Vert_adv_trans_sb)
ax0.plot(np.arange(1,19,1)/2.0,(vertical)/1E5,':',color='k')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS1_adv_trans -dfnoc.CS1_adv_trans )/1E5,color='0.3')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS2_adv_trans - (dfnoc.CS2_adv_trans ))/1E5,color='0.5')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS3_adv_trans - (dfnoc.CS3_adv_trans ))/1E5,color='k')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS4_adv_trans - (dfnoc.CS4_adv_trans ))/1E5,':',color='0.5')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS5_adv_trans - (dfnoc.CS5_adv_trans ))/1E5,color='0.3')
total = ( (dfcan.CS1_adv_trans )- (dfnoc.CS1_adv_trans ) +
(dfcan.CS2_adv_trans )- (dfnoc.CS2_adv_trans ) +
(dfcan.CS3_adv_trans )- (dfnoc.CS3_adv_trans ) +
(dfcan.CS4_adv_trans )- (dfnoc.CS4_adv_trans ) +
(dfcan.CS5_adv_trans )- (dfnoc.CS5_adv_trans ) +
vertical)
ax0.plot(np.arange(1,19,1)/2.0,total/1E5,'--',color='k')
ax0.set_xlabel('Days',labelpad=0.5)
ax0.set_ylabel('($10^5$ $\mu$M m$^3$s$^{-1}$)',labelpad=0.5)
file2 = (('/ocean/kramosmu/MITgcm/TracerExperiments/%s/%s' %(records[ind].exp_code,records[ind].run_num))+
'water_CS_transports.nc')
fileNoC2 = (('/ocean/kramosmu/MITgcm/TracerExperiments/%s/%s' %(recordsNoC[ind].exp_code,recordsNoC[ind].run_num))+
'water_CS_transports.nc')
dfcan2 = xr.open_dataset(file2)
dfnoc2 = xr.open_dataset(fileNoC2)
ax1.plot(np.arange(19)/2.0,(dfcan2.Vert_water_trans_sb-dfnoc2.Vert_water_trans_sb)/1E4,':',color='k',label = 'LID')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS1_water_trans-dfnoc2.CS1_water_trans)/1E4,color='0.4',label = 'CS1')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS2_water_trans-dfnoc2.CS2_water_trans)/1E4,color='0.6',label = 'CS2')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS3_water_trans-dfnoc2.CS3_water_trans)/1E4,color='0.8',label = 'CS3')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS4_water_trans-dfnoc2.CS4_water_trans)/1E4,':',color='0.5',label= 'CS4')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS5_water_trans-dfnoc2.CS5_water_trans)/1E4,color='k',label = 'CS5')
total = (dfcan2.CS1_water_trans-dfnoc2.CS1_water_trans +
dfcan2.CS2_water_trans-dfnoc2.CS2_water_trans +
dfcan2.CS3_water_trans-dfnoc2.CS3_water_trans +
dfcan2.CS4_water_trans-dfnoc2.CS4_water_trans +
dfcan2.CS5_water_trans-dfnoc2.CS5_water_trans +
dfcan2.Vert_water_trans_sb-dfnoc2.Vert_water_trans_sb)
ax1.plot(np.arange(19)/2.0,total/1E4,'--',color='k',label = 'Total')
ax1.set_xlabel('Days',labelpad=0.5)
ax1.set_ylabel('(10$^{4}$ m$^3$s$^{-1}$)',labelpad=-4)
# Vertical section ---------------------------------------------------------------------------
cnt=ax4.contourf(grid.X.isel(X=slice(120,240))/1000,grid.Y.isel(Y=slice(225,270))/1000,
(FluxV.isel(X=slice(120,240),Y=slice(225,270)).data)/(grid.rA[225:270,120:240]),
16,cmap='RdYlBu_r',vmax=0.025, vmin=-0.025)
ax4.contourf(grid.X.isel(X=slice(120,240))/1000,grid.Y.isel(Y=slice(225,270))/1000,
grid.HFacC.isel(Z=30,X=slice(120,240),Y=slice(225,270)),[0,0.1])
cbar_ax = f.add_axes([0.91, 0.12, 0.02, 0.21])
cb=f.colorbar(cnt, cax=cbar_ax)
ax4.set_aspect(1)
ax4.set_xlabel('Alongshore distance (km)',labelpad=0.5)
ax4.set_ylabel('CS distance (km)',labelpad=0.5)
ax4.text(0.75,0.85,'$\mu$M ms$^{-1}$',transform=ax4.transAxes)
# General looks
ax0.text(0.6,0.1,'(a) Tracer transport',transform=ax0.transAxes)
ax1.text(0.6,0.1,'(b) Water transport',transform=ax1.transAxes)
ax2.text(0.01,0.9,'(c)',transform=ax2.transAxes)
ax3.text(0.01,0.9,'(d)',transform=ax3.transAxes)
ax4.text(0.02,0.9,'(e) LID',transform=ax4.transAxes)
ax2.text(0.24,0.9,'CS2',transform=ax2.transAxes)
ax2.text(0.47,0.9,'CS3',transform=ax2.transAxes)
ax2.text(0.7,0.9,'CS4',transform=ax2.transAxes)
plotCSPos(ax2,xc[1,60]/1000,xc[1,120]/1000,xc[1,240]/1000,xc[1,300]/1000)
plotCSPos(ax3,xc[1,60]/1000,xc[1,120]/1000,xc[1,240]/1000,xc[1,300]/1000)
#ax2.set_ylim(0,2.2)
#ax3.set_ylim(0,15)
ax1.legend(ncol=2,bbox_to_anchor=(0.97,-0.3))
ax0.tick_params(axis='x', pad=1)
ax1.tick_params(axis='x', pad=1)
ax3.tick_params(axis='x', pad=1)
ax4.tick_params(axis='x', pad=1)
ax0.tick_params(axis='y', pad=3)
ax1.tick_params(axis='y', pad=3)
ax2.tick_params(axis='y', pad=3)
ax3.tick_params(axis='y', pad=3)
ax4.tick_params(axis='y', pad=3)
###Output
_____no_output_____
###Markdown
Full canyon case
###Code
plt.rcParams['font.size'] = 8.0
f = plt.figure(figsize = (7.48,4.5)) # full page
gs = gspec.GridSpec(2, 1, height_ratios=[0.8,1.3])
gs0 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[0,0],wspace=0.15)
gs1 = gspec.GridSpecFromSubplotSpec(2, 2, subplot_spec=gs[1,0],hspace=0.1,wspace=0.15,width_ratios=[1,0.6])
ax0 = plt.subplot(gs0[0,0])
ax1 = plt.subplot(gs0[0,1])
ax2 = plt.subplot(gs1[0,0],xticks=[])
ax3 = plt.subplot(gs1[1,0])
ax4 = plt.subplot(gs1[1,1])
ii=7
yind = 227
areas = (np.expand_dims(grid.dxF.isel(X=slice(60,300),Y=yind).data,0))*(np.expand_dims(grid.drF.isel(Z=slice(0,60)).data,1))
# Full shelf ---------------------------------------------------------------------------
cnt=ax3.contourf(grid.X.isel(X=slice(60,300))/1000,grid.Z.isel(Z=slice(0,60)),
Flux_can.isel(Zmd000090=slice(0,60),X=slice(60,300))/areas,16,cmap='RdYlBu_r',vmax=2.5, vmin=-2.5)
ax3.contourf(grid.X.isel(X=slice(60,300))/1000,grid.Z.isel(Z=slice(0,60)),
grid.HFacC.isel(Z=slice(0,60),Y=227,X=slice(60,300)),[0,0.1])
cbar_ax = f.add_axes([0.13, 0.17, 0.17, 0.03])
cb=f.colorbar(cnt, cax=cbar_ax,orientation='horizontal',ticks=[-2,-1,0,1,2])
ax3.axhline(y=grid.Z[30], linestyle=':',color='k')
ax3.set_xlabel('Alongshore distance (km)',labelpad=0.5)
ax3.set_ylabel('Depth (m)',labelpad=0.5)
ax3.text(0.11,0.5,'$\mu$M ms$^{-1}$',transform=ax3.transAxes)
# Zoom shelf ---------------------------------------------------------------------------
cnt=ax2.contourf(grid.X.isel(X=slice(60,300))/1000,grid.Z.isel(Z=slice(0,30)),
Flux_can.isel(Zmd000090=slice(0,30),X=slice(60,300))/areas[:30,:],16,cmap='RdYlBu_r',vmax=0.5, vmin=-0.5)
ax2.contourf(grid.X.isel(X=slice(60,300))/1000,grid.Z.isel(Z=slice(0,30)),
grid.HFacC.isel(Z=slice(0,30),Y=227,X=slice(60,300)),[0,0.1])
cbar_ax = f.add_axes([0.585, 0.36, 0.02, 0.2])
cb=f.colorbar(cnt, cax=cbar_ax)
ax2.set_ylabel('Depth (m)',labelpad=0.5)
ax2.text(0.85,0.86,'$\mu$M ms$^{-1}$',transform=ax2.transAxes)
# Time series ---------------------------------------------------------------------------
ax0.axhline(0,color='0.8',linewidth=2)
ax1.axhline(0,color='0.8',linewidth=2)
vertical = (dfdif.Vert_dif_trans_sb + dfcan.Vert_adv_trans_sb)
ax0.plot(np.arange(1,19,1)/2.0,(vertical)/1E5,':',color='k')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS1_adv_trans)/1E5,color='0.3')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS2_adv_trans)/1E5,color='0.5')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS3_adv_trans)/1E5,color='k')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS4_adv_trans)/1E5,':',color='0.5')
ax0.plot(np.arange(1,19,1)/2.0,(dfcan.CS5_adv_trans)/1E5,color='0.3')
total = ((dfcan.CS1_adv_trans) +
(dfcan.CS2_adv_trans) +
(dfcan.CS3_adv_trans) +
(dfcan.CS4_adv_trans) +
(dfcan.CS5_adv_trans) +
vertical)
ax0.plot(np.arange(1,19,1)/2.0,total/1E5,'--',color='k')
ax0.set_xlabel('Days',labelpad=0.5)
ax0.set_ylabel('($10^5$ $\mu$M m$^3$s$^{-1}$)',labelpad=0.5)
ax1.plot(np.arange(19)/2.0,(dfcan2.Vert_water_trans_sb)/1E4,':',color='k',label = 'LID')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS1_water_trans)/1E4,color='0.4',label = 'CS1')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS2_water_trans)/1E4,color='0.6',label = 'CS2')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS3_water_trans)/1E4,color='0.8',label = 'CS3')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS4_water_trans)/1E4,':',color='0.5',label= 'CS4')
ax1.plot(np.arange(19)/2.0,(dfcan2.CS5_water_trans)/1E4,color='k',label = 'CS5')
total = (dfcan2.CS1_water_trans +
dfcan2.CS2_water_trans +
dfcan2.CS3_water_trans +
dfcan2.CS4_water_trans +
dfcan2.CS5_water_trans +
dfcan2.Vert_water_trans_sb)
ax1.plot(np.arange(19)/2.0,total/1E4,'--',color='k',label = 'Total')
ax1.set_xlabel('Days',labelpad=0.5)
ax1.set_ylabel('(10$^{4}$ m$^3$s$^{-1}$)',labelpad=-4)
# Vertical section ---------------------------------------------------------------------------
cnt=ax4.contourf(grid.X.isel(X=slice(120,240))/1000,grid.Y.isel(Y=slice(225,270))/1000,
(FluxV_can.isel(X=slice(120,240),Y=slice(225,270)).data)/(grid.rA[225:270,120:240]),
16,cmap='RdYlBu_r',vmax=0.025, vmin=-0.025)
ax4.contourf(grid.X.isel(X=slice(120,240))/1000,grid.Y.isel(Y=slice(225,270))/1000,
grid.HFacC.isel(Z=30,X=slice(120,240),Y=slice(225,270)),[0,0.1])
cbar_ax = f.add_axes([0.91, 0.12, 0.02, 0.21])
cb=f.colorbar(cnt, cax=cbar_ax)
ax4.set_aspect(1)
ax4.set_xlabel('Alongshore distance (km)',labelpad=0.5)
ax4.set_ylabel('CS distance (km)',labelpad=0.5)
ax4.text(0.75,0.85,'$\mu$M ms$^{-1}$',transform=ax4.transAxes)
# General looks
ax0.text(0.6,0.1,'(a) Tracer transport',transform=ax0.transAxes)
ax1.text(0.6,0.1,'(b) Water transport',transform=ax1.transAxes)
ax2.text(0.01,0.9,'(c)',transform=ax2.transAxes)
ax3.text(0.01,0.9,'(d)',transform=ax3.transAxes)
ax4.text(0.02,0.9,'(e) LID',transform=ax4.transAxes)
ax2.text(0.24,0.9,'CS2',transform=ax2.transAxes)
ax2.text(0.47,0.9,'CS3',transform=ax2.transAxes)
ax2.text(0.7,0.9,'CS4',transform=ax2.transAxes)
plotCSPos(ax2,xc[1,60]/1000,xc[1,120]/1000,xc[1,240]/1000,xc[1,300]/1000)
plotCSPos(ax3,xc[1,60]/1000,xc[1,120]/1000,xc[1,240]/1000,xc[1,300]/1000)
#ax2.set_ylim(0,2.2)
#ax3.set_ylim(0,15)
ax1.legend(ncol=2,bbox_to_anchor=(0.97,-0.3))
ax0.tick_params(axis='x', pad=1)
ax1.tick_params(axis='x', pad=1)
ax3.tick_params(axis='x', pad=1)
ax4.tick_params(axis='x', pad=1)
ax0.tick_params(axis='y', pad=3)
ax1.tick_params(axis='y', pad=3)
ax2.tick_params(axis='y', pad=3)
ax3.tick_params(axis='y', pad=3)
ax4.tick_params(axis='y', pad=3)
plt.rcParams['font.size'] = 8.0
f = plt.figure(figsize = (7.48,4.5)) # full page
gs = gspec.GridSpec(2, 1, height_ratios=[0.8,1.3])
gs0 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[0,0],wspace=0.15)
gs1 = gspec.GridSpecFromSubplotSpec(2, 2, subplot_spec=gs[1,0],hspace=0.1,wspace=0.15,width_ratios=[1,0.6])
ax0 = plt.subplot(gs0[0,0])
ax1 = plt.subplot(gs0[0,1])
ax2 = plt.subplot(gs1[0,0],xticks=[])
ax3 = plt.subplot(gs1[1,0])
ax4 = plt.subplot(gs1[1,1])
ii=7
yind = 227
areas = (np.expand_dims(grid.dxF.isel(X=slice(60,300),Y=yind).data,0))*(np.expand_dims(grid.drF.isel(Z=slice(0,60)).data,1))
# Full shelf ---------------------------------------------------------------------------
cnt=ax3.contourf(grid_NoC.X.isel(X=slice(60,300))/1000,grid_NoC.Z.isel(Z=slice(0,60)),
Flux_NoC.isel(Zmd000090=slice(0,60),X=slice(60,300))/areas,16,cmap='RdYlBu_r',vmax=2.5, vmin=-2.5)
ax3.contourf(grid_NoC.X.isel(X=slice(60,300))/1000,grid_NoC.Z.isel(Z=slice(0,60)),
grid_NoC.HFacC.isel(Z=slice(0,60),Y=227,X=slice(60,300)),[0,0.1])
cbar_ax = f.add_axes([0.13, 0.17, 0.17, 0.03])
cb=f.colorbar(cnt, cax=cbar_ax,orientation='horizontal',ticks=[-2,-1,0,1,2])
ax3.axhline(y=grid_NoC.Z[30], linestyle=':',color='k')
ax3.set_xlabel('Alongshore distance (km)',labelpad=0.5)
ax3.set_ylabel('Depth (m)',labelpad=0.5)
ax3.text(0.11,0.5,'$\mu$M ms$^{-1}$',transform=ax3.transAxes)
# Zoom shelf ---------------------------------------------------------------------------
cnt=ax2.contourf(grid_NoC.X.isel(X=slice(60,300))/1000,grid_NoC.Z.isel(Z=slice(0,30)),
Flux_NoC.isel(Zmd000090=slice(0,30),X=slice(60,300))/areas[:30,:],16,cmap='RdYlBu_r',vmax=0.5, vmin=-0.5)
ax2.contourf(grid_NoC.X.isel(X=slice(60,300))/1000,grid_NoC.Z.isel(Z=slice(0,30)),
grid_NoC.HFacC.isel(Z=slice(0,30),Y=227,X=slice(60,300)),[0,0.1])
cbar_ax = f.add_axes([0.585, 0.36, 0.02, 0.2])
cb=f.colorbar(cnt, cax=cbar_ax)
ax2.set_ylabel('Depth (m)',labelpad=0.5)
ax2.text(0.85,0.86,'$\mu$M ms$^{-1}$',transform=ax2.transAxes)
# Time series ---------------------------------------------------------------------------
ax0.axhline(0,color='0.8',linewidth=2)
ax1.axhline(0,color='0.8',linewidth=2)
ind = 1
ax0.plot(np.arange(1,19,1)/2.0,( dfnoc.CS1_adv_trans )/1E5,color='0.3')
ax0.plot(np.arange(1,19,1)/2.0,( dfnoc.CS2_adv_trans )/1E5,color='0.5')
ax0.plot(np.arange(1,19,1)/2.0,( dfnoc.CS3_adv_trans )/1E5,color='k')
ax0.plot(np.arange(1,19,1)/2.0,( dfnoc.CS4_adv_trans )/1E5,':',color='0.5')
ax0.plot(np.arange(1,19,1)/2.0,( dfnoc.CS5_adv_trans )/1E5,color='0.3')
total = ( dfnoc.CS1_adv_trans +
dfnoc.CS2_adv_trans +
dfnoc.CS3_adv_trans +
dfnoc.CS4_adv_trans +
dfnoc.CS5_adv_trans )
ax0.plot(np.arange(1,19,1)/2.0,total/1E5,'--',color='k')
ax0.set_xlabel('Days',labelpad=0.5)
ax0.set_ylabel('($10^5$ $\mu$M m$^3$s$^{-1}$)',labelpad=0.5)
ax1.plot(np.arange(19)/2.0,(dfnoc2.Vert_water_trans_sb)/1E4,':',color='k',label = 'LID')
ax1.plot(np.arange(19)/2.0,(dfnoc2.CS1_water_trans)/1E4,color='0.4',label = 'CS1')
ax1.plot(np.arange(19)/2.0,(dfnoc2.CS2_water_trans)/1E4,color='0.6',label = 'CS2')
ax1.plot(np.arange(19)/2.0,(dfnoc2.CS3_water_trans)/1E4,color='0.8',label = 'CS3')
ax1.plot(np.arange(19)/2.0,(dfnoc2.CS4_water_trans)/1E4,':',color='0.5',label= 'CS4')
ax1.plot(np.arange(19)/2.0,(dfnoc2.CS5_water_trans)/1E4,color='k',label = 'CS5')
total = (dfnoc2.CS1_water_trans +
dfnoc2.CS2_water_trans +
dfnoc2.CS3_water_trans +
dfnoc2.CS4_water_trans +
dfnoc2.CS5_water_trans +
dfnoc2.Vert_water_trans_sb)
ax1.plot(np.arange(19)/2.0,total/1E4,'--',color='k',label = 'Total')
ax1.set_xlabel('Days',labelpad=0.5)
ax1.set_ylabel('(10$^{4}$ m$^3$s$^{-1}$)',labelpad=-4)
# Vertical section ---------------------------------------------------------------------------
cnt=ax4.contourf(grid_NoC.X.isel(X=slice(120,240))/1000,grid_NoC.Y.isel(Y=slice(225,270))/1000,
(FluxV_NoC.isel(X=slice(120,240),Y=slice(225,270)).data)/(grid.rA[225:270,120:240]),
16,cmap='RdYlBu_r',vmax=0.025, vmin=-0.025)
ax4.contourf(grid_NoC.X.isel(X=slice(120,240))/1000,grid_NoC.Y.isel(Y=slice(225,270))/1000,
grid_NoC.HFacC.isel(Z=30,X=slice(120,240),Y=slice(225,270)),[0,0.1])
cbar_ax = f.add_axes([0.91, 0.12, 0.02, 0.21])
cb=f.colorbar(cnt, cax=cbar_ax)
ax4.set_aspect(1)
ax4.set_xlabel('Alongshore distance (km)',labelpad=0.5)
ax4.set_ylabel('CS distance (km)',labelpad=0.5)
ax4.text(0.75,0.85,'$\mu$M ms$^{-1}$',transform=ax4.transAxes)
# General looks
ax0.text(0.6,0.1,'(a) Tracer transport',transform=ax0.transAxes)
ax1.text(0.6,0.1,'(b) Water transport',transform=ax1.transAxes)
ax2.text(0.01,0.9,'(c)',transform=ax2.transAxes)
ax3.text(0.01,0.9,'(d)',transform=ax3.transAxes)
ax4.text(0.02,0.9,'(e) LID',transform=ax4.transAxes)
ax2.text(0.24,0.9,'CS2',transform=ax2.transAxes)
ax2.text(0.47,0.9,'CS3',transform=ax2.transAxes)
ax2.text(0.7,0.9,'CS4',transform=ax2.transAxes)
plotCSPos(ax2,xc[1,60]/1000,xc[1,120]/1000,xc[1,240]/1000,xc[1,300]/1000)
plotCSPos(ax3,xc[1,60]/1000,xc[1,120]/1000,xc[1,240]/1000,xc[1,300]/1000)
#ax2.set_ylim(0,2.2)
#ax3.set_ylim(0,15)
ax1.legend(ncol=2,bbox_to_anchor=(0.97,-0.3))
ax0.tick_params(axis='x', pad=1)
ax1.tick_params(axis='x', pad=1)
ax3.tick_params(axis='x', pad=1)
ax4.tick_params(axis='x', pad=1)
ax0.tick_params(axis='y', pad=3)
ax1.tick_params(axis='y', pad=3)
ax2.tick_params(axis='y', pad=3)
ax3.tick_params(axis='y', pad=3)
ax4.tick_params(axis='y', pad=3)
###Output
_____no_output_____ |
parseORsyntax_tree/WikiEntity_Neymar.ipynb | ###Markdown
We will refer to the Wikipedia page of Neymar Jr. (https://en.wikipedia.org/wiki/Neymar). As may be clear, our first entity (E1) will be :Neymar
###Code
e1 = ':Neymar'
###Output
_____no_output_____
###Markdown
The second entities are the links of connected articles (dbo:wikiPageWikiLink), and we can get them for a given Wikipedia page through a SPARQL query. I'll use SPARQLWrapper and rdflib for this purpose. Refer to this YouTube tutorial [here](https://www.youtube.com/watch?v=zdaL6unnv7Y&ab_channel=Patimir)
###Code
from rdflib import Graph
from SPARQLWrapper import SPARQLWrapper, JSON, N3
from pprint import pprint
sparql = SPARQLWrapper('https://dbpedia.org/sparql')
sparql.setQuery('''
select ?linkedEntities
WHERE {
dbr:Neymar dbo:wikiPageWikiLink
?linkedEntities
.}
''')
sparql.setReturnFormat(JSON)
queries = sparql.query().convert()
# pprint(queries)
bindings = queries['results']['bindings']
e2 = []
t = [x['linkedEntities']['value'].split('/')[-1] for x in bindings if x['linkedEntities']['value'].split('/')[-1][0:4] != 'File']
for binding in bindings:
uri = binding['linkedEntities']['value']
entity = uri.split('/')[-1]
e2.append(entity)
len(e2)
len(t)
sparql = SPARQLWrapper('https://dbpedia.org/sparql')
sparql.setQuery('''
select ?linkedEntities
WHERE {
dbr:Neymar dbo:wikiPageWikiLink
?linkedEntities
.}
''')
sparql.setReturnFormat(N3)
queries = sparql.query().convert()
g = Graph()
g.parse(data=queries, format='n3')
# print(g.serialize(format='ttl').decode('u8'))
from openie import StanfordOpenIE
import wikipedia
###Output
_____no_output_____
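###Markdown
The linked entities can also be stored as explicit RDF triples, which makes them easier to query later. A minimal sketch, assuming only the `e2` list built above; the graph and namespace objects here are created purely for illustration:
###Code
from rdflib import Graph, Namespace
DBR = Namespace('http://dbpedia.org/resource/')
DBO = Namespace('http://dbpedia.org/ontology/')
links_graph = Graph()
for entity in e2:
    # one (Neymar, wikiPageWikiLink, entity) triple per linked article
    links_graph.add((DBR['Neymar'], DBO['wikiPageWikiLink'], DBR[entity]))
len(links_graph)
###Output
_____no_output_____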
###Markdown
We will try getting some text from Wikipedia about Neymar and use OpenIE to extract the relations
###Code
text = wikipedia.page("Neymar (Player)").content[:50000]
count = 0
x = []
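# Keep only triples whose subject is 'Neymar' and whose object, rewritten in Wikipedia article
# style (spaces replaced by underscores), appears in the linked-entity list e2.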
with StanfordOpenIE() as client:
for triple in client.annotate(text):
count += 1
if triple['subject']=='Neymar':
ob = triple['object'].split(' ')
            ob2 = '_'.join(ob)  # same result as building ob2 piece by piece: Wikipedia-style underscores
if ob2 in e2:
print("#######################################################")
print(triple)
print("#######################################################")
x.append(triple)
###Output
Starting server with command: java -Xmx8G -cp /home/gray/.stanfordnlp_resources/stanford-corenlp-4.1.0/* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 60000 -threads 5 -maxCharLength 100000 -quiet True -serverProperties corenlp_server-6bd8d9879e8845d1.props -preload openie
#######################################################
{'subject': 'Neymar', 'relation': 'joined', 'object': 'Santos FC'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'met', 'object': 'Paulo Henrique Ganso'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'aged came behind', 'object': "Andrés D'Alessandro"}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'came third behind', 'object': "Andrés D'Alessandro"}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'came behind', 'object': "Andrés D'Alessandro"}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'aged came third behind', 'object': "Andrés D'Alessandro"}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'finished', 'object': '2012 Campeonato Paulista'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'leave before', 'object': '2014 FIFA World Cup'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'was presented record turnout at', 'object': 'Camp Nou'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'miss', 'object': '2015 UEFA Super Cup'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'was shortlisted for', 'object': "2015 FIFA Ballon d'Or"}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'scored in', 'object': '2017 Copa del Rey Final'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'scored in', 'object': '2019 Coupe de France Final'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'was named for', 'object': '2014 FIFA World Cup'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'was kneed by', 'object': 'Juan Camilo Zúñiga'}
#######################################################
#######################################################
{'subject': 'Neymar', 'relation': 'came like', 'object': 'Pelé'}
#######################################################
###Markdown
So we went through an extract of 50,000 characters due to the session limit and my hardware limitations. Some of the entities from the SPARQL query list were obtained in these 50,000 characters.
###Code
print(e2)
###Output
['2013_FIFA_Confederations_Cup', '2013_FIFA_Confederations_Cup_Final', '2013_Santos_FC_season', '2013_Supercopa_de_España', '2013–14_FC_Barcelona_season', '2013–14_La_Liga', '2013–14_UEFA_Champions_League', '2013–14_UEFA_Champions_League_group_stage', '2014_FIFA_World_Cup', '2014_FIFA_World_Cup_awards', '2014_FIFA_World_Cup_knockout_stage', '2014–15_Copa_del_Rey', '2014–15_FC_Barcelona_season', '2014–15_La_Liga', '2014–15_UEFA_Champions_League', '2015_Copa_América', '2015_Copa_del_Rey_Final', '2015_FIFA_Club_World_Cup', '2015_Supercopa_de_España', '2015_UEFA_Champions_League_Final', '2015_UEFA_Super_Cup', '2015–16_Copa_del_Rey', '2015–16_FC_Barcelona_season', '2015–16_La_Liga', '2015–16_UEFA_Champions_League', '2016_Copa_del_Rey_Final', '2016_Summer_Olympics', '2016–17_Copa_del_Rey', '2016–17_FC_Barcelona_season', '2016–17_UEFA_Champions_League', '2016–17_UEFA_Champions_League_knockout_phase', '2017_Copa_del_Rey_Final', '2017–18_Coupe_de_France', '2017–18_Coupe_de_la_Ligue', '2017–18_Ligue_1', '2017–18_Paris_Saint-Germain_F.C._season', '2017–18_UEFA_Champions_League_group_stage', 'Category:2011_Copa_América_players', 'Category:2013_FIFA_Confederations_Cup_players', 'Category:2014_FIFA_World_Cup_players', 'Category:2015_Copa_América_players', 'Category:2018_FIFA_World_Cup_players', 'Luis_Enrique', 'Luis_Suárez', 'Luiz_Felipe_Scolari', 'Luka_Modrić', 'MC_Guimê', 'Madrid', 'Unilever', "United_States_men's_national_soccer_team", 'União_Agrícola_Barbarense_Futebol_Clube', 'Usain_Bolt', 'Venezuela_national_football_team', 'DIS_Esporte', 'Hat-trick', "Football_at_the_2016_Summer_Olympics_–_Men's_team_squads", "Forbes'_list_of_the_world's_highest-paid_athletes", 'Prêmio_Craque_do_Brasileirão', 'UNFP_Player_of_the_Month', 'List_of_most-followed_Instagram_accounts', 'Category:African-Brazilian_sportspeople', 'Borussia_Dortmund', 'Botafogo_Futebol_Clube_(SP)', 'Brazil_national_football_team', 'Brazil_national_under-17_football_team', 'Brazil_national_under-20_football_team', 'Brazil_national_under-23_football_team', 'Brazil_v_Germany_(2014_FIFA_World_Cup)', 'Brazilian_Football_Confederation', 'Brazilian_real', 'Category:La_Liga_players', 'Category:Ligue_1_players', 'Category:Medalists_at_the_2012_Summer_Olympics', 'Category:Medalists_at_the_2016_Summer_Olympics', 'Category:Olympic_medalists_in_football', 'Deportivo_Alavés', 'Didier_Drogba', 'Diego_(footballer,_born_1985)', 'Diego_Maradona', 'Goiânia', 'Granada_CF', 'Great_Britain_Olympic_football_team', 'Guarani_FC', 'Guaratinguetá_Futebol', 'Konami', 'Krestovsky_Stadium', 'Pro_Evolution_Soccer_2012', 'Pro_Evolution_Soccer_2013', 'Puma_(brand)', 'Qatar', 'Qatar_national_football_team', 'RB_Leipzig', 'RC_Lens', 'RC_Strasbourg_Alsace', 'Rajamangala_Stadium', 'Overtime_(sports)', 'Youth_system', 'Placar', 'Category:1992_births', 'Category:Living_people', 'Alex_(footballer,_born_1982)', 'Alexandre_Pato', 'Alexis_Sánchez', 'Allianz_Arena', '2010_South_American_Footballer_of_the_Year', '2011_South_American_Footballer_of_the_Year', '2012_South_American_Footballer_of_the_Year', '2012_Summer_Olympics', '2013_Campeonato_Paulista', 'Celtic_F.C.', 'Chelsea_F.C.', "Football_at_the_2012_Summer_Olympics_–_Men's_tournament", "Football_at_the_2016_Summer_Olympics_–_Men's_tournament", 'Football_at_the_Summer_Olympics', 'Nike,_Inc.', 'Nike_Hypervenom', 'O_Globo', 'Oeste_Futebol_Clube', 'FIFA_(video_game_series)', 'Fouls_and_misconduct_(association_football)', 'South_American_Footballer_of_the_Year', 'FIFA_Puskás_Award', 'Rainbow_kick', 'Viral_video', 
'Fifth_metatarsal_bone', 'Futsal', 'Category:Campeonato_Brasileiro_Série_A_players', 'Austria_national_football_team', 'BBC', 'BBC_Sport', 'Balada_(song)', 'Banco_Santander', 'List_of_most_expensive_association_football_transfers', 'Free_kick_(association_football)', 'List_of_2014_FIFA_World_Cup_controversies', 'Cameroon_national_football_team', 'Camp_Nou', 'Campeonato_Brasileiro_Série_A', 'Carlos_Bacca', 'Category:Olympic_gold_medalists_for_Brazil', 'Category:Olympic_silver_medalists_for_Brazil', 'Chile_national_football_team', 'China_national_football_team', 'Christianity', 'Claro_(company)', 'Clodoaldo', 'Clube_de_Regatas_do_Flamengo', 'Category:People_from_Mogi_das_Cruzes', 'Category:South_American_Youth_Championship_players', 'Gareth_Bale', 'Gerard_Piqué', 'Gisele_Bündchen', 'Jesus', 'Johannesburg', 'National_Union_of_Professional_Footballers', 'La_Liga_Awards', 'Beats_Electronics', 'Belarus_national_under-23_football_team', 'Belgium_national_football_team', 'Belo_Horizonte', 'Bicycle_kick', 'Category:Footballers_at_the_2012_Summer_Olympics', 'Category:Footballers_at_the_2016_Summer_Olympics', 'Kaká', 'Kashiwa_Reysol', 'Lechia_Gdańsk', 'Levante_UD', 'Ligue_1', 'Lionel_Messi', 'List_of_Spanish_football_champions', 'Temuco', 'Zico_(footballer)', 'Zinedine_Zidane', 'Álvaro_González_(footballer,_born_1990)', 'Ángel_Di_María', 'İstanbul_Başakşehir_F.K.', 'Estádio_Serra_Dourada', 'Música_sertaneja', 'Sponsor_(commercial)', 'International_Federation_of_Football_History_&_Statistics', 'Sibling_relationship', '2006–07_UEFA_Champions_League', '2009_Santos_FC_season', '2010_Campeonato_Brasileiro_Série_A', '2010_Copa_do_Brasil', '2010_FIFA_World_Cup', '2010_Santos_FC_season', '2011_Campeonato_Brasileiro_Série_A', '2011_Campeonato_Paulista', '2011_Copa_América', '2011_Copa_América_Group_B', '2011_Copa_Libertadores', '2011_Copa_Libertadores_Finals', '2011_FIFA_Club_World_Cup', '2011_Santos_FC_season', '2011_South_American_U-20_Championship', '2012_Campeonato_Brasileiro_Série_A', '2012_Campeonato_Paulista', '2012_Campeonato_Paulista_knockout_stage', '2012_Copa_Libertadores', '2012_Recopa_Sudamericana', '2012_Santos_FC_season', '2012_Superclásico_de_las_Américas', 'Michel_Teló', 'Midfielder', 'Miroslav_Stoch', 'Mogi_Mirim_Esporte_Clube', 'Mogi_das_Cruzes', 'Cerro_Porteño', 'Changing_room', 'Ligue_de_Football_Professionnel', 'Music_video', 'France_Football', '2018_FIFA_World_Cup', '2018_Trophée_des_Champions', '2018–19_Ligue_1', '2018–19_Paris_Saint-Germain_F.C._season', '2018–19_UEFA_Champions_League', '2019_Copa_América', '2019_Coupe_de_France_Final', '2019–20_Coupe_de_France', '2019–20_Coupe_de_la_Ligue', '2019–20_Ligue_1', '2019–20_Paris_Saint-Germain_F.C._season', '2019–20_UEFA_Champions_League', '2019–20_UEFA_Champions_League_group_stage', '2019–20_UEFA_Champions_League_knockout_phase', '2020_Coupe_de_France_Final', '2020_Coupe_de_la_Ligue_Final', '2020_UEFA_Champions_League_Final', '2020–21_Paris_Saint-Germain_F.C._season', '2022_FIFA_World_Cup_qualification_(CONMEBOL)', "2015_FIFA_Ballon_d'Or", 'Away_goals_rule', 'Social_media', 'Sociedade_Esportiva_Palmeiras', 'South_Africa_national_football_team', 'South_American_Youth_Football_Championship', 'Spain_national_football_team', 'Sport_Club_Corinthians_Paulista', 'Sport_Club_Internacional', 'Category:Brazil_international_footballers', 'Category:Brazil_under-20_international_footballers', 'Category:Brazil_youth_international_footballers', 'Category:Brazilian_Christians', 'Category:Brazilian_expatriate_footballers', 
'Category:Brazilian_expatriate_sportspeople_in_France', 'Category:Brazilian_expatriate_sportspeople_in_Spain', 'Category:Brazilian_footballers', 'Category:Brazilian_people_of_European_descent', 'Dorival_Júnior', 'Double_(association_football)', 'Douglas_Costa', 'Dunga', 'EA_Sports', 'ESPN', 'East_Rutherford,_New_Jersey', 'Ecuador_national_football_team', 'Edinson_Cavani', 'Edu_Dracena', 'Forbes', 'France_national_football_team', 'Fred_(footballer,_born_1983)', 'Gusttavo_Lima', 'Honduras_national_under-23_football_team', 'Ibiza', 'Italy_national_football_team', 'Japan_national_football_team', 'Javier_Pastore', 'Sandro_(footballer,_born_1989)', 'Sandro_Rosell', 'Santiago', 'Santiago_Bernabéu_Stadium', 'Santos,_São_Paulo', 'Santos_FC', 'Scotland_national_football_team', 'Scottish_Football_Association', 'Senegal_national_football_team', 'Forward_(association_football)', 'Goal_celebration', 'Tax_evasion', 'Squad_number_(association_football)', 'Category:Association_football_forwards', 'Category:Association_football_wingers', 'Category:Associação_Atlética_Portuguesa_(Santos)_players', '2009_Campeonato_Paulista', '2010_Campeonato_Paulista', 'Ambev', 'Andoni_Iraola', 'André_Santos', "Andrés_D'Alessandro", 'Andrés_Iniesta', 'Anfield', 'Angers_SCO', 'Antônio_Wilson_Vieira_Honório', 'Argentina', 'Association_football', 'Association_football_tactics_and_skills', 'Associação_Atlética_Ponte_Preta', 'Associação_Atlética_Portuguesa_(Santos)', 'Atalanta_B.C.', 'Athletic_Bilbao', 'Atlético_Clube_Goianiense', 'Atlético_Madrid', 'Old_Trafford', 'Olympiastadion_(Berlin)', 'Olympique_Lyonnais', 'Olympique_de_Marseille', 'Oscar_(footballer,_born_1991)', 'Pablo_Armero', 'Panama_national_football_team', 'Panasonic', 'Buyout_clause', 'Rayo_Vallecano', 'Raí', 'Real_Madrid_CF', 'Real_Sociedad', 'Recife', 'Recopa_Sudamericana', 'Red_Star_Belgrade', 'The_Best_FIFA_Football_Awards_2017', 'The_Daily_Telegraph', 'Samba_Gold', 'AFC_Ajax', 'AS_Monaco_FC', 'AS_Saint-Étienne', 'Ai_Se_Eu_Te_Pego', 'Bobby_Ghosh', 'Bola_de_Ouro', 'Toulouse_FC', 'Toyota,_Aichi', 'Toyota_Stadium', 'Trophée_des_Champions', 'Trophées_UNFP_du_football', 'UEFA_Champions_League', 'UEFA_Team_of_the_Year', 'Dribbling', 'Treble_(association_football)', "L'Équipe", 'Street_football', 'Category:Expatriate_footballers_in_France', 'Category:Expatriate_footballers_in_Spain', 'Category:FC_Barcelona_players', 'Category:FIFA_Century_Club', 'Category:FIFA_Confederations_Cup-winning_players', 'CONMEBOL', 'COVID-19_pandemic', 'Cadena_SER', 'Category:Olympic_footballers_of_Brazil', 'Category:Paris_Saint-Germain_F.C._players', 'Category:Santos_FC_players', 'Category:UEFA_Champions_League_winning_players', 'Colombia_Olympic_football_team', 'Colombia_national_football_team', 'Confederation_of_African_Football', 'Copa_América_Centenario', 'Copa_Libertadores', 'Copa_del_Rey', 'Copa_do_Brasil', 'Coronavirus_disease_2019', 'Costa_Rica_national_football_team', 'Cristiano_Ronaldo', 'Croatia', 'Croatia_national_football_team', 'Cruzeiro_Esporte_Clube', 'Kylian_Mbappé', 'La_Liga', 'Le_Classique', 'Leandro_Damião', 'Leandro_Paredes', 'La_Liga_Player_of_the_Month', "List_of_men's_footballers_with_100_or_more_international_caps", "List_of_men's_footballers_with_50_or_more_international_goals", 'Dani_Alves', 'David_Beckham', 'David_Luiz', 'Assist_(association_football)', 'Paraguay_national_football_team', 'Parc_Olympique_Lyonnais', 'Paris_Saint-Germain_F.C.', 'Paulo_Henrique_Ganso', 'Pedro_(footballer,_born_1987)', 'Pelé', 'Penalty_kick_(association_football)', 'Pentecostalism', 
'Pepe_(footballer,_born_1935)', 'Peru_national_football_team', 'Campeonato_Paulista', 'Riverside_Stadium', 'Roberto_Carlos', 'Roberto_Firmino', 'Robinho', 'Rogério_Micale', 'Romário', 'Ronaldinho', 'Ronaldo_(Brazilian_footballer)', 'Rory_McIlroy', 'Sergi_Roberto', 'Sevilla_FC', 'São_Bernardo_Futebol_Clube', 'São_Paulo', 'São_Paulo_(state)', 'São_Paulo_FC', 'São_Vicente,_São_Paulo', 'Coupe_de_France', 'Coupe_de_la_Ligue', 'Vicente_Calderón_Stadium', 'Victor_Andrade', 'Villarreal_CF', 'Vogue_(magazine)', 'Volkswagen', 'List_of_UEFA_Champions_League_hat-tricks', 'List_of_association_football_teams_to_have_won_four_or_more_trophies_in_one_season', 'Estádio_Nacional_Mané_Garrincha', 'Estádio_do_Arruda', 'Eu_Quero_Tchu,_Eu_Quero_Tcha', "European_Champion_Clubs'_Cup", 'European_Sports_Media', 'FC_Barcelona', 'FC_Barcelona_6–1_Paris_Saint-Germain_F.C.', 'FC_Bayern_Munich', 'FC_Girondins_de_Bordeaux', 'FIFA', 'FIFA_18', 'FIFA_19', "FIFA_Ballon_d'Or", 'FIFA_Club_World_Cup', 'FIFA_Confederations_Cup', 'FIFA_Confederations_Cup_records_and_statistics', 'FIFA_World_Cup_awards', 'FIFPro', 'Felipe_Anderson', 'Nasser_Al-Khelaifi', 'National_Stadium,_Singapore', 'Nenê_(footballer,_born_1981)', 'Peñarol', 'Portuguese_language', 'Pound_sterling', 'Premier_League', "St_James'_Park", 'Stade_Malherbe_Caen', 'Wayne_Rooney', 'Wembley_Stadium', 'West_Ham_United_F.C.', 'Westfalenstadion', 'Diving_(association_football)', 'Dummy_(football)', 'FIFA_World_Cup_Dream_Team', 'Spinal_fracture', 'Ebola', 'Ejection_(sports)', 'El_Clásico', 'FIFA_Club_World_Cup_awards', 'Ligue_1_Player_of_the_Year', 'Egypt_national_under-23_football_team', 'Elano', 'Elche_CF', 'Emirates_Stadium', 'En_Avant_Guingamp', 'Eric_Cantona', 'Esporte_Clube_Santo_André', 'Hiroki_Sakai', 'Money_Heist', "Monica's_Gang", 'Montevideo', 'Mumps', 'Jorge_Valdano', 'Josep_Maria_Bartomeu', 'João_Lucas_&_Marcelo', 'Juan_Bernat', 'Juan_Camilo_Zúñiga', 'Juan_Sebastián_Verón', 'Jundiaí', 'Juventus_F.C.', 'Jô', 'Matías_Alustiza', 'Mauricio_de_Sousa', 'Max_Meyer_(footballer)', 'Mexico_national_football_team', 'Mexico_national_under-23_football_team', 'Keepie_uppie', 'List_of_Paris_Saint-Germain_F.C._records_and_statistics', 'Manchester_United_F.C.', 'Mano_Menezes', 'Maracanã_Stadium', 'Thiago_Silva', 'Thibaut_Courtois', 'Thierry_Henry', 'Tim_Vickery', 'Time_(magazine)', 'Time_100', 'Tite_(football_manager)', 'Tithe', 'World_Soccer_(magazine)', 'XXX:_Return_of_Xander_Cage', 'Xavi', 'Penalty_shoot-out_(association_football)', 'Platonic_love', 'Playmaker', 'Supercopa_de_España', 'SportsPro', 'File:ECUADOR_vs_BRASIL_(29392285815)_(cropped).jpg', 'File:Messi_with_Neymar_Junior_the_Future_of_Brazil.jpg', 'File:Barcelona_fans_-_Champions_league_2015_Berlin.JPG', 'File:Bra-Cos_(13).jpg', 'File:Edgar_Ié_vs_Neymar.jpg', 'File:Neymar_vs_Lille.jpg', 'File:Neymargoldenball.jpg', 'File:Brasil_conquista_primeiro_ouro_olímpico_no_futebol_1039247-20082016-_mg_3424.jpg', 'File:Brasil_conquista_primeiro_ouro_olímpico_nos_penaltis_1039248-20082016-_mg_0015cropped.jpg', 'File:Brazil_at_the_2012_Olympics.jpg', 'File:Neymar.jpg', 'File:Neymar_(cropped).jpg', 'File:Neymar_2011.jpg', 'File:Neymar_Junior_the_Future_of_Brazil_2.jpg', 'File:Neymar_visiting_Red_Bull_Arena_(cropped).jpg', 'File:Neymar_Barcelona_presentation_1.jpg', 'File:Pique&neymar.jpg', 'File:André_Santos,_Neymar_and_Ramires_celebrate_Neymars_goal.jpg']
|
METABRIC.ipynb | ###Markdown
We will use PyCox to import the METABRIC Dataset
###Code
from pycox import datasets
df = datasets.metabric.read_df()
###Output
_____no_output_____
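###Markdown
Optional sanity check (my addition, not part of the original notebook): the frame contains nine covariate columns (x0-x8) plus the duration and event columns used below.
###Code
print(df.shape)
print(df[['duration', 'event']].describe())
###Output
_____no_output_____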
###Markdown
Preprocessing, setting Random folds and computing Event Quantiles of Interest
###Code
import numpy as np
dat1 = df[['x0', 'x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8']]
times = (df['duration'].values+1)
events = df['event'].values
data = dat1.to_numpy()
folds = np.array([1]*381 + [2]*381 + [3]*381 + [4]*381 + [5]*380 )
np.random.seed(0)
np.random.shuffle(folds)
quantiles = np.quantile(times[events==1], [0.25, .5, .75, .99]).tolist()
#This is a flag that is used to artificially increase the amount of censoring in the
#dataset to determine robustness of DSM to increased censoring levels.
INCREASE_CENSORING = False
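# The flag above relies on an `increaseCensoring` helper from the accompanying
# utilities. As a rough, hypothetical sketch of the idea only (not the original
# implementation): randomly flip observed events to censored until roughly the
# requested censoring fraction is reached.
def increase_censoring_sketch(e, t, target_frac, seed=0):
    rng = np.random.RandomState(seed)
    e = e.copy()
    n_flip = int(target_frac * len(e)) - int((e == 0).sum())
    if n_flip > 0:
        event_idx = np.where(e == 1)[0]
        flip = rng.choice(event_idx, size=min(n_flip, len(event_idx)), replace=False)
        e[flip] = 0
    return e, t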
from sklearn.preprocessing import StandardScaler
import importlib
import torch
import dsm
import dsm_utilites
importlib.reload(dsm)
importlib.reload(dsm_utilites)
# Reference value (90% of the dataset) used when sizing the validation split below
float(len(dat1)*9)/10
#set parameter grid
params = [{'G':6, 'mlptyp':2,'HIDDEN':[100], 'n_iter':int(1000), 'lr':1e-3, 'ELBO':True, 'mean':False, \
'lambd':0, 'alpha':1,'thres':1e-3, 'bs':int(25), 'dist': 'Weibull'}]
#set val data size
vsize = int(0.15*1712)
torch.manual_seed(0)
for param in params:
outs = []
for f in range(1,6,1):
x_train = data[folds!=f]
x_test = data[folds==f]
x_valid = x_train[-vsize:, :]
x_train = x_train[:-vsize, :]
t_train = times[folds!=f]
t_test = times[folds==f]
t_valid = t_train[-vsize:]
t_train = t_train[:-vsize]
e_train = events[folds!=f]
e_test = events[folds==f]
e_valid = e_train[-vsize:]
e_train = e_train[:-vsize]
print ("val len:", len(x_valid))
print ("tr len:", len(x_train))
#normalize the feature set using standard scaling
scl = StandardScaler()
x_train = scl.fit_transform(x_train)
x_valid = scl.transform(x_valid)
x_test = scl.transform(x_test)
print ("Censoring in Fold:", np.mean(e_train))
if INCREASE_CENSORING:
e_train, t_train = increaseCensoring(e_train, t_train, .50)
print ("Censoring in Fold:", np.mean(e_train))
#Convert the train, test and validation data torch
x_train = torch.from_numpy(x_train).double()
e_train = torch.from_numpy(e_train).double()
t_train = torch.from_numpy(t_train).double()
x_valid = torch.from_numpy(x_valid).double()
e_valid = torch.from_numpy(e_valid).double()
t_valid = torch.from_numpy(t_valid).double()
x_test = torch.from_numpy(x_test).double()
e_test = torch.from_numpy(e_test).double()
t_test = torch.from_numpy(t_test).double()
K, mlptyp, HIDDEN, n_iter, lr, ELBO, mean, lambd, alpha, thres, bs,dist = \
param['G'], param['mlptyp'], param['HIDDEN'], param['n_iter'], param['lr'], \
param['ELBO'], param['mean'], param['lambd'], param['alpha'], param['thres'],\
param['bs'], param['dist']
D = x_train.shape[1]
print (dist)
model = dsm.DeepSurvivalMachines(D, K, mlptyp, HIDDEN, dist=dist)
model.double()
model, i = dsm_utilites.trainDSM(model,quantiles,x_train, t_train, e_train, x_valid, t_valid, e_valid,lr=lr,bs=bs,alpha=alpha )
print ("TEST PERFORMANCE")
out = (dsm_utilites.computeCIScores(model, quantiles, x_test, t_test, e_test, t_train, e_train))
print (out)
outs.append(out)
quantiles
model.dist
###Output
_____no_output_____ |
Set 1/exercise 2/askisi2.ipynb | ###Markdown
In Exercise 2 we are given a photograph and asked to apply an affine transformation to it according to the user's input. The input is the variables a1-a6 of the affine transformation, which form the transformation matrix. We start with the imports we will need
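For reference (my own note, not part of the original exercise text), the mapping implemented below works in coordinates centred at the image midpoint and, for each output pixel $(x, y)$, samples the input at

\begin{align}x' = a_1 x + a_2 y + a_3, \qquad y' = a_4 x + a_5 y + a_6\end{align}

so $a_1, a_2, a_4, a_5$ form the linear part of the affine map and $a_3, a_6$ are the translation offsets.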
###Code
from PIL import Image
import sys
import numpy as np
from IPython.display import display,Image as jupyter
###Output
_____no_output_____
###Markdown
Next, we build the main function that performs the transformation according to a1-a6
###Code
def transform(array, a1, a2, a3, a4, a5, a6):
rows = array.shape[0]
cols = array.shape[1]
#This new array will hold the transformed picture (note: a single-channel/grayscale input is assumed)
new_array = np.zeros((rows, cols))
for r in range(0,rows):
for c in range(0,cols):
x = r-(rows/2)
y= c-(cols/2)
xx = (a1*x)+(a2*y)+a3
yy = (a4*x)+(a5*y)+a6
rounded_new_rows = round(xx + (rows/2))
rounded_new_cols = round(yy + (cols/2))
if(rounded_new_rows>=0 and rounded_new_rows<rows and rounded_new_cols>=0 and rounded_new_cols<cols):
new_array[r][c]=array[rounded_new_rows][rounded_new_cols]
return new_array
###Output
_____no_output_____
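###Markdown
A quick check of the helper above (my own example, not part of the exercise): with an identity linear part and a6 = 1, each output pixel samples the input one column to the right, so the content appears shifted one pixel to the left and the last column is filled with zeros.
###Code
_demo = np.arange(16, dtype=float).reshape(4, 4)
print(transform(_demo, 1, 0, 0, 0, 1, 1))
###Output
_____no_output_____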
###Markdown
The input image is given in argv[1]
###Code
#input image argv[1]
image_array = np.array(Image.open(sys.argv[1]))
###Output
_____no_output_____
###Markdown
The parameters a1-a6:
###Code
#inputs: argv[3] to argv[8] = a1 to a6
a1 = float(sys.argv[3])
a2 = float(sys.argv[4])
a3 = float(sys.argv[5])
a4 = float(sys.argv[6])
a5 = float(sys.argv[7])
a6 = float(sys.argv[8])
###Output
_____no_output_____
###Markdown
We run the transformation:
###Code
#Execute transform
new_array = transform(image_array, a1, a2, a3, a4, a5, a6)
#Create final image from modified array
final_image = Image.fromarray(new_array)
#(The result is displayed after it has been saved, in the last cell below)
###Output
_____no_output_____
###Markdown
Saving:
###Code
#Convert to a PNG-writable (8-bit grayscale) mode
final_image = final_image.convert("L")
#argv[2] is the output file
final_image.save(sys.argv[2])
#Display the saved result
display(jupyter(filename=sys.argv[2]))
###Output
_____no_output_____ |
docs/_downloads/5f81194dd43910d586578638f83205a3/dcgan_faces_tutorial.ipynb | ###Markdown
DCGAN Tutorial
==============
**Author**: `Nathan Inkawhich `__

Introduction
------------
This tutorial will give an introduction to DCGANs through an example. We will train a generative adversarial network (GAN) to generate new celebrities after showing it pictures of many real celebrities. Most of the code here is from the dcgan implementation in `pytorch/examples `__, and this document will give a thorough explanation of the implementation and shed light on how and why this model works. Don’t worry, no prior knowledge of GANs is required, but it may require a first-timer to spend some time reasoning about what is actually happening under the hood. Also, for the sake of time it will help to have a GPU, or two. Let’s start from the beginning.

Generative Adversarial Networks
-------------------------------

What is a GAN?
~~~~~~~~~~~~~~
GANs are a framework for teaching a DL model to capture the training data’s distribution so we can generate new data from that same distribution. GANs were invented by Ian Goodfellow in 2014 and first described in the paper `Generative Adversarial Nets `__. They are made of two distinct models, a *generator* and a *discriminator*. The job of the generator is to spawn ‘fake’ images that look like the training images. The job of the discriminator is to look at an image and output whether or not it is a real training image or a fake image from the generator. During training, the generator is constantly trying to outsmart the discriminator by generating better and better fakes, while the discriminator is working to become a better detective and correctly classify the real and fake images. The equilibrium of this game is when the generator is generating perfect fakes that look as if they came directly from the training data, and the discriminator is left to always guess at 50% confidence that the generator output is real or fake.

Now, let’s define some notation to be used throughout the tutorial, starting with the discriminator. Let $x$ be data representing an image. $D(x)$ is the discriminator network which outputs the (scalar) probability that $x$ came from training data rather than the generator. Here, since we are dealing with images, the input to $D(x)$ is an image of CHW size 3x64x64. Intuitively, $D(x)$ should be HIGH when $x$ comes from training data and LOW when $x$ comes from the generator. $D(x)$ can also be thought of as a traditional binary classifier.

For the generator’s notation, let $z$ be a latent space vector sampled from a standard normal distribution. $G(z)$ represents the generator function which maps the latent vector $z$ to data-space. The goal of $G$ is to estimate the distribution that the training data comes from ($p_{data}$) so it can generate fake samples from that estimated distribution ($p_g$).

So, $D(G(z))$ is the probability (scalar) that the output of the generator $G$ is a real image. As described in `Goodfellow’s paper `__, $D$ and $G$ play a minimax game in which $D$ tries to maximize the probability it correctly classifies reals and fakes ($logD(x)$), and $G$ tries to minimize the probability that $D$ will predict its outputs are fake ($log(1-D(G(x)))$). From the paper, the GAN loss function is

\begin{align}\underset{G}{\text{min}} \underset{D}{\text{max}}V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\big[logD(x)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[log(1-D(G(z)))\big]\end{align}

In theory, the solution to this minimax game is where $p_g = p_{data}$, and the discriminator guesses randomly if the inputs are real or fake.
However, the convergence theory of GANs is still being actively researched, and in reality models do not always train to this point.

What is a DCGAN?
~~~~~~~~~~~~~~~~
A DCGAN is a direct extension of the GAN described above, except that it explicitly uses convolutional and convolutional-transpose layers in the discriminator and generator, respectively. It was first described by Radford et al. in the paper `Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks `__. The discriminator is made up of strided `convolution `__ layers, `batch norm `__ layers, and `LeakyReLU `__ activations. The input is a 3x64x64 input image and the output is a scalar probability that the input is from the real data distribution. The generator is comprised of `convolutional-transpose `__ layers, batch norm layers, and `ReLU `__ activations. The input is a latent vector, $z$, that is drawn from a standard normal distribution and the output is a 3x64x64 RGB image. The strided conv-transpose layers allow the latent vector to be transformed into a volume with the same shape as an image. In the paper, the authors also give some tips about how to set up the optimizers, how to calculate the loss functions, and how to initialize the model weights, all of which will be explained in the coming sections.
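As a quick check of the equilibrium described above (my addition, following the analysis in Goodfellow’s paper): when $p_g = p_{data}$ the optimal discriminator outputs $D(x) = \tfrac{1}{2}$ everywhere, so the value of the game is

\begin{align}V(D^*,G) = \log\tfrac{1}{2} + \log\tfrac{1}{2} = -\log 4 \approx -1.386,\end{align}

which is the global minimum of the generator’s objective.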
###Code
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
###Output
_____no_output_____
###Markdown
Inputs
------
Let’s define some inputs for the run:

- **dataroot** - the path to the root of the dataset folder. We will talk more about the dataset in the next section
- **workers** - the number of worker threads for loading the data with the DataLoader
- **batch_size** - the batch size used in training. The DCGAN paper uses a batch size of 128
- **image_size** - the spatial size of the images used for training. This implementation defaults to 64x64. If another size is desired, the structures of D and G must be changed. See `here `__ for more details
- **nc** - number of color channels in the input images. For color images this is 3
- **nz** - length of latent vector
- **ngf** - relates to the depth of feature maps carried through the generator
- **ndf** - sets the depth of feature maps propagated through the discriminator
- **num_epochs** - number of training epochs to run. Training for longer will probably lead to better results but will also take much longer
- **lr** - learning rate for training. As described in the DCGAN paper, this number should be 0.0002
- **beta1** - beta1 hyperparameter for Adam optimizers. As described in paper, this number should be 0.5
- **ngpu** - number of GPUs available. If this is 0, code will run in CPU mode. If this number is greater than 0 it will run on that number of GPUs
###Code
# Root directory for dataset
dataroot = "data/celeba"
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
###Output
_____no_output_____
###Markdown
Data
----
In this tutorial we will use the `Celeb-A Faces dataset `__ which can be downloaded at the linked site, or in `Google Drive `__. The dataset will download as a file named *img_align_celeba.zip*. Once downloaded, create a directory named *celeba* and extract the zip file into that directory. Then, set the *dataroot* input for this notebook to the *celeba* directory you just created. The resulting directory structure should be::

   /path/to/celeba
       -> img_align_celeba
           -> 188242.jpg
           -> 173822.jpg
           -> 284702.jpg
           -> 537394.jpg
              ...

This is an important step because we will be using the ImageFolder dataset class, which requires there to be subdirectories in the dataset’s root folder. Now, we can create the dataset, create the dataloader, set the device to run on, and finally visualize some of the training data.
###Code
# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
transform=transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
shuffle=True, num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
###Output
_____no_output_____
###Markdown
Implementation
--------------
With our input parameters set and the dataset prepared, we can now get into the implementation. We will start with the weight initialization strategy, then talk about the generator, discriminator, loss functions, and training loop in detail.

Weight Initialization
~~~~~~~~~~~~~~~~~~~~~
From the DCGAN paper, the authors specify that all model weights shall be randomly initialized from a Normal distribution with mean=0, stdev=0.02. The ``weights_init`` function takes an initialized model as input and reinitializes all convolutional, convolutional-transpose, and batch normalization layers to meet this criteria. This function is applied to the models immediately after initialization.
###Code
# custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
###Output
_____no_output_____
###Markdown
Generator
~~~~~~~~~
The generator, $G$, is designed to map the latent space vector ($z$) to data-space. Since our data are images, converting $z$ to data-space means ultimately creating an RGB image with the same size as the training images (i.e. 3x64x64). In practice, this is accomplished through a series of strided two dimensional convolutional transpose layers, each paired with a 2d batch norm layer and a relu activation. The output of the generator is fed through a tanh function to return it to the input data range of $[-1,1]$. It is worth noting the existence of the batch norm functions after the conv-transpose layers, as this is a critical contribution of the DCGAN paper. These layers help with the flow of gradients during training. An image of the generator from the DCGAN paper is shown below.

.. figure:: /_static/img/dcgan_generator.png
   :alt: dcgan_generator

Notice how the inputs we set in the input section (*nz*, *ngf*, and *nc*) influence the generator architecture in code. *nz* is the length of the z input vector, *ngf* relates to the size of the feature maps that are propagated through the generator, and *nc* is the number of channels in the output image (set to 3 for RGB images). Below is the code for the generator.
###Code
# Generator Code
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
###Output
_____no_output_____
###Markdown
Now, we can instantiate the generator and apply the ``weights_init`` function. Check out the printed model to see how the generator object is structured.
###Code
# Create the generator
netG = Generator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netG = nn.DataParallel(netG, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netG.apply(weights_init)
# Print the model
print(netG)
###Output
_____no_output_____
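###Code
# Quick shape check (my addition, not part of the tutorial): a single latent
# vector should map to a 3 x 64 x 64 image.
with torch.no_grad():
    print(netG(torch.randn(1, nz, 1, 1, device=device)).shape)  # torch.Size([1, 3, 64, 64])
###Output
_____no_output_____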
###Markdown
Discriminator
~~~~~~~~~~~~~
As mentioned, the discriminator, $D$, is a binary classification network that takes an image as input and outputs a scalar probability that the input image is real (as opposed to fake). Here, $D$ takes a 3x64x64 input image, processes it through a series of Conv2d, BatchNorm2d, and LeakyReLU layers, and outputs the final probability through a Sigmoid activation function. This architecture can be extended with more layers if necessary for the problem, but there is significance to the use of the strided convolution, BatchNorm, and LeakyReLUs. The DCGAN paper mentions it is a good practice to use strided convolution rather than pooling to downsample because it lets the network learn its own pooling function. Also, batch norm and leaky relu functions promote healthy gradient flow which is critical for the learning process of both $G$ and $D$.

Discriminator Code
###Code
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
###Output
_____no_output_____
###Markdown
Now, as with the generator, we can create the discriminator, apply the ``weights_init`` function, and print the model’s structure.
###Code
# Create the Discriminator
netD = Discriminator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netD = nn.DataParallel(netD, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netD.apply(weights_init)
# Print the model
print(netD)
###Output
_____no_output_____
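###Code
# Quick shape check (my addition, not part of the tutorial): a batch of two
# 3 x 64 x 64 images should yield two scalar probabilities in [0, 1].
with torch.no_grad():
    print(netD(torch.randn(2, nc, image_size, image_size, device=device)).view(-1).shape)  # torch.Size([2])
###Output
_____no_output_____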
###Markdown
Loss Functions and Optimizers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With $D$ and $G$ set up, we can specify how they learn through the loss functions and optimizers. We will use the Binary Cross Entropy loss (`BCELoss `__) function which is defined in PyTorch as:

\begin{align}\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\end{align}

Notice how this function provides the calculation of both log components in the objective function (i.e. $log(D(x))$ and $log(1-D(G(z)))$). We can specify what part of the BCE equation to use with the $y$ input. This is accomplished in the training loop which is coming up soon, but it is important to understand how we can choose which component we wish to calculate just by changing $y$ (i.e. GT labels).

Next, we define our real label as 1 and the fake label as 0. These labels will be used when calculating the losses of $D$ and $G$, and this is also the convention used in the original GAN paper. Finally, we set up two separate optimizers, one for $D$ and one for $G$. As specified in the DCGAN paper, both are Adam optimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping track of the generator’s learning progression, we will generate a fixed batch of latent vectors that are drawn from a Gaussian distribution (i.e. fixed_noise). In the training loop, we will periodically input this fixed_noise into $G$, and over the iterations we will see images form out of the noise.
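As a tiny illustration of how the target selects the active log term (my addition, not part of the tutorial), consider a prediction of 0.9: a target of 1 gives $-log(0.9)$ while a target of 0 gives $-log(1-0.9)$.
###Code
# Illustration only: BCELoss with target 1 vs. target 0 for the same prediction.
_bce = nn.BCELoss()
_p = torch.tensor([0.9])
print(_bce(_p, torch.tensor([1.])))  # ~0.1054 = -log(0.9)
print(_bce(_p, torch.tensor([0.])))  # ~2.3026 = -log(0.1)
###Output
_____no_output_____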
###Code
# Initialize BCELoss function
criterion = nn.BCELoss()
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
# Establish convention for real and fake labels during training
real_label = 1.
fake_label = 0.
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
###Output
_____no_output_____
###Markdown
Training
~~~~~~~~
Finally, now that we have all of the parts of the GAN framework defined, we can train it. Be mindful that training GANs is somewhat of an art form, as incorrect hyperparameter settings lead to mode collapse with little explanation of what went wrong. Here, we will closely follow Algorithm 1 from Goodfellow’s paper, while abiding by some of the best practices shown in `ganhacks `__. Namely, we will “construct different mini-batches for real and fake” images, and also adjust G’s objective function to maximize $logD(G(z))$. Training is split up into two main parts. Part 1 updates the Discriminator and Part 2 updates the Generator.

**Part 1 - Train the Discriminator**

Recall, the goal of training the discriminator is to maximize the probability of correctly classifying a given input as real or fake. In terms of Goodfellow, we wish to “update the discriminator by ascending its stochastic gradient”. Practically, we want to maximize $log(D(x)) + log(1-D(G(z)))$. Due to the separate mini-batch suggestion from ganhacks, we will calculate this in two steps. First, we will construct a batch of real samples from the training set, forward pass through $D$, calculate the loss ($log(D(x))$), then calculate the gradients in a backward pass. Secondly, we will construct a batch of fake samples with the current generator, forward pass this batch through $D$, calculate the loss ($log(1-D(G(z)))$), and *accumulate* the gradients with a backward pass. Now, with the gradients accumulated from both the all-real and all-fake batches, we call a step of the Discriminator’s optimizer.

**Part 2 - Train the Generator**

As stated in the original paper, we want to train the Generator by minimizing $log(1-D(G(z)))$ in an effort to generate better fakes. As mentioned, this was shown by Goodfellow to not provide sufficient gradients, especially early in the learning process. As a fix, we instead wish to maximize $log(D(G(z)))$. In the code we accomplish this by: classifying the Generator output from Part 1 with the Discriminator, computing G’s loss *using real labels as GT*, computing G’s gradients in a backward pass, and finally updating G’s parameters with an optimizer step. It may seem counter-intuitive to use the real labels as GT labels for the loss function, but this allows us to use the $log(x)$ part of the BCELoss (rather than the $log(1-x)$ part) which is exactly what we want.

Finally, we will do some statistic reporting and at the end of each epoch we will push our fixed_noise batch through the generator to visually track the progress of G’s training. The training statistics reported are:

- **Loss_D** - discriminator loss calculated as the sum of losses for the all real and all fake batches ($log(D(x)) + log(D(G(z)))$).
- **Loss_G** - generator loss calculated as $log(D(G(z)))$
- **D(x)** - the average output (across the batch) of the discriminator for the all real batch. This should start close to 1 then theoretically converge to 0.5 when G gets better. Think about why this is.
- **D(G(z))** - average discriminator outputs for the all fake batch. The first number is before D is updated and the second number is after D is updated. These numbers should start near 0 and converge to 0.5 as G gets better. Think about why this is.

**Note:** This step might take a while, depending on how many epochs you run and if you removed some data from the dataset.
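To see why the switch to maximizing $log(D(G(z)))$ helps (my addition): early in training $p = D(G(z))$ is close to 0, and

\begin{align}\frac{\partial}{\partial p}\log(1-p) = -\frac{1}{1-p} \approx -1, \qquad \frac{\partial}{\partial p}\log p = \frac{1}{p} \gg 1 \quad \text{as } p \to 0,\end{align}

so the non-saturating form passes much larger gradients back to the generator while its fakes are still easy for the discriminator to reject.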
###Code
# Training Loop
# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0
print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
# For each batch in the dataloader
for i, data in enumerate(dataloader, 0):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
## Train with all-real batch
netD.zero_grad()
# Format batch
real_cpu = data[0].to(device)
b_size = real_cpu.size(0)
label = torch.full((b_size,), real_label, device=device)
# Forward pass real batch through D
output = netD(real_cpu).view(-1)
# Calculate loss on all-real batch
errD_real = criterion(output, label)
# Calculate gradients for D in backward pass
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
# Generate batch of latent vectors
noise = torch.randn(b_size, nz, 1, 1, device=device)
# Generate fake image batch with G
fake = netG(noise)
label.fill_(fake_label)
# Classify all fake batch with D
output = netD(fake.detach()).view(-1)
# Calculate D's loss on the all-fake batch
errD_fake = criterion(output, label)
# Calculate the gradients for this batch
errD_fake.backward()
D_G_z1 = output.mean().item()
# Add the gradients from the all-real and all-fake batches
errD = errD_real + errD_fake
# Update D
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
label.fill_(real_label) # fake labels are real for generator cost
# Since we just updated D, perform another forward pass of all-fake batch through D
output = netD(fake).view(-1)
# Calculate G's loss based on this output
errG = criterion(output, label)
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
# Update G
optimizerG.step()
# Output training stats
if i % 50 == 0:
print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (epoch, num_epochs, i, len(dataloader),
errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
with torch.no_grad():
fake = netG(fixed_noise).detach().cpu()
img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
iters += 1
###Output
_____no_output_____
###Markdown
Results
-------
Finally, let’s check out how we did. Here, we will look at three different results. First, we will see how D and G’s losses changed during training. Second, we will visualize G’s output on the fixed_noise batch for every epoch. And third, we will look at a batch of real data next to a batch of fake data from G.

**Loss versus training iteration**

Below is a plot of D & G’s losses versus training iterations.
###Code
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses,label="G")
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Visualization of G’s progression**

Remember how we saved the generator’s output on the fixed_noise batch after every epoch of training. Now, we can visualize the training progression of G with an animation. Press the play button to start the animation.
###Code
#%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
###Output
_____no_output_____
###Markdown
**Real Images vs. Fake Images**

Finally, let’s take a look at some real images and fake images side by side.
###Code
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()
###Output
_____no_output_____
###Markdown
DCGAN Tutorial==============**Author**: `Nathan Inkawhich `__ Introduction------------This tutorial will give an introduction to DCGANs through an example. Wewill train a generative adversarial network (GAN) to generate newcelebrities after showing it pictures of many real celebrities. Most ofthe code here is from the dcgan implementation in`pytorch/examples `__, and thisdocument will give a thorough explanation of the implementation and shedlight on how and why this model works. But don’t worry, no priorknowledge of GANs is required, but it may require a first-timer to spendsome time reasoning about what is actually happening under the hood.Also, for the sake of time it will help to have a GPU, or two. Letsstart from the beginning.Generative Adversarial Networks-------------------------------What is a GAN?~~~~~~~~~~~~~~GANs are a framework for teaching a DL model to capture the trainingdata’s distribution so we can generate new data from that samedistribution. GANs were invented by Ian Goodfellow in 2014 and firstdescribed in the paper `Generative AdversarialNets `__.They are made of two distinct models, a *generator* and a*discriminator*. The job of the generator is to spawn ‘fake’ images thatlook like the training images. The job of the discriminator is to lookat an image and output whether or not it is a real training image or afake image from the generator. During training, the generator isconstantly trying to outsmart the discriminator by generating better andbetter fakes, while the discriminator is working to become a betterdetective and correctly classify the real and fake images. Theequilibrium of this game is when the generator is generating perfectfakes that look as if they came directly from the training data, and thediscriminator is left to always guess at 50% confidence that thegenerator output is real or fake.Now, lets define some notation to be used throughout tutorial startingwith the discriminator. Let $x$ be data representing an image.$D(x)$ is the discriminator network which outputs the (scalar)probability that $x$ came from training data rather than thegenerator. Here, since we are dealing with images the input to$D(x)$ is an image of CHW size 3x64x64. Intuitively, $D(x)$should be HIGH when $x$ comes from training data and LOW when$x$ comes from the generator. $D(x)$ can also be thought ofas a traditional binary classifier.For the generator’s notation, let $z$ be a latent space vectorsampled from a standard normal distribution. $G(z)$ represents thegenerator function which maps the latent vector $z$ to data-space.The goal of $G$ is to estimate the distribution that the trainingdata comes from ($p_{data}$) so it can generate fake samples fromthat estimated distribution ($p_g$).So, $D(G(z))$ is the probability (scalar) that the output of thegenerator $G$ is a real image. As described in `Goodfellow’spaper `__,$D$ and $G$ play a minimax game in which $D$ tries tomaximize the probability it correctly classifies reals and fakes($logD(x)$), and $G$ tries to minimize the probability that$D$ will predict its outputs are fake ($log(1-D(G(x)))$).From the paper, the GAN loss function is\begin{align}\underset{G}{\text{min}} \underset{D}{\text{max}}V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\big[logD(x)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[log(1-D(G(z)))\big]\end{align}In theory, the solution to this minimax game is where$p_g = p_{data}$, and the discriminator guesses randomly if theinputs are real or fake. 
However, the convergence theory of GANs isstill being actively researched and in reality models do not alwaystrain to this point.What is a DCGAN?~~~~~~~~~~~~~~~~A DCGAN is a direct extension of the GAN described above, except that itexplicitly uses convolutional and convolutional-transpose layers in thediscriminator and generator, respectively. It was first described byRadford et. al. in the paper `Unsupervised Representation Learning WithDeep Convolutional Generative AdversarialNetworks `__. The discriminatoris made up of strided`convolution `__layers, `batchnorm `__layers, and`LeakyReLU `__activations. The input is a 3x64x64 input image and the output is ascalar probability that the input is from the real data distribution.The generator is comprised of`convolutional-transpose `__layers, batch norm layers, and`ReLU `__ activations. Theinput is a latent vector, $z$, that is drawn from a standardnormal distribution and the output is a 3x64x64 RGB image. The stridedconv-transpose layers allow the latent vector to be transformed into avolume with the same shape as an image. In the paper, the authors alsogive some tips about how to setup the optimizers, how to calculate theloss functions, and how to initialize the model weights, all of whichwill be explained in the coming sections.
###Code
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
###Output
_____no_output_____
###Markdown
Inputs------Let’s define some inputs for the run:- **dataroot** - the path to the root of the dataset folder. We will talk more about the dataset in the next section- **workers** - the number of worker threads for loading the data with the DataLoader- **batch_size** - the batch size used in training. The DCGAN paper uses a batch size of 128- **image_size** - the spatial size of the images used for training. This implementation defaults to 64x64. If another size is desired, the structures of D and G must be changed. See `here `__ for more details- **nc** - number of color channels in the input images. For color images this is 3- **nz** - length of latent vector- **ngf** - relates to the depth of feature maps carried through the generator- **ndf** - sets the depth of feature maps propagated through the discriminator- **num_epochs** - number of training epochs to run. Training for longer will probably lead to better results but will also take much longer- **lr** - learning rate for training. As described in the DCGAN paper, this number should be 0.0002- **beta1** - beta1 hyperparameter for Adam optimizers. As described in paper, this number should be 0.5- **ngpu** - number of GPUs available. If this is 0, code will run in CPU mode. If this number is greater than 0 it will run on that number of GPUs
###Code
# Root directory for dataset
dataroot = "data/celeba"
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
###Output
_____no_output_____
###Markdown
Data----In this tutorial we will use the `Celeb-A Facesdataset `__ which canbe downloaded at the linked site, or in `GoogleDrive `__.The dataset will download as a file named *img_align_celeba.zip*. Oncedownloaded, create a directory named *celeba* and extract the zip fileinto that directory. Then, set the *dataroot* input for this notebook tothe *celeba* directory you just created. The resulting directorystructure should be::: /path/to/celeba -> img_align_celeba -> 188242.jpg -> 173822.jpg -> 284702.jpg -> 537394.jpg ...This is an important step because we will be using the ImageFolderdataset class, which requires there to be subdirectories in thedataset’s root folder. Now, we can create the dataset, create thedataloader, set the device to run on, and finally visualize some of thetraining data.
###Code
# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
transform=transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
shuffle=True, num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))
###Output
_____no_output_____
###Markdown
Implementation--------------With our input parameters set and the dataset prepared, we can now getinto the implementation. We will start with the weigth initializationstrategy, then talk about the generator, discriminator, loss functions,and training loop in detail.Weight Initialization~~~~~~~~~~~~~~~~~~~~~From the DCGAN paper, the authors specify that all model weights shallbe randomly initialized from a Normal distribution with mean=0,stdev=0.02. The ``weights_init`` function takes an initialized model asinput and reinitializes all convolutional, convolutional-transpose, andbatch normalization layers to meet this criteria. This function isapplied to the models immediately after initialization.
###Code
# custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
###Output
_____no_output_____
###Markdown
Generator~~~~~~~~~The generator, $G$, is designed to map the latent space vector($z$) to data-space. Since our data are images, converting$z$ to data-space means ultimately creating a RGB image with thesame size as the training images (i.e. 3x64x64). In practice, this isaccomplished through a series of strided two dimensional convolutionaltranspose layers, each paired with a 2d batch norm layer and a reluactivation. The output of the generator is fed through a tanh functionto return it to the input data range of $[-1,1]$. It is worthnoting the existence of the batch norm functions after theconv-transpose layers, as this is a critical contribution of the DCGANpaper. These layers help with the flow of gradients during training. Animage of the generator from the DCGAN paper is shown below... figure:: /_static/img/dcgan_generator.png :alt: dcgan_generatorNotice, the how the inputs we set in the input section (*nz*, *ngf*, and*nc*) influence the generator architecture in code. *nz* is the lengthof the z input vector, *ngf* relates to the size of the feature mapsthat are propagated through the generator, and *nc* is the number ofchannels in the output image (set to 3 for RGB images). Below is thecode for the generator.
###Code
# Generator Code
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
###Output
_____no_output_____
###Markdown
Now, we can instantiate the generator and apply the ``weights_init``function. Check out the printed model to see how the generator object isstructured.
###Code
# Create the generator
netG = Generator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netG = nn.DataParallel(netG, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.2.
netG.apply(weights_init)
# Print the model
print(netG)
###Output
_____no_output_____
###Markdown
Discriminator~~~~~~~~~~~~~As mentioned, the discriminator, $D$, is a binary classificationnetwork that takes an image as input and outputs a scalar probabilitythat the input image is real (as opposed to fake). Here, $D$ takesa 3x64x64 input image, processes it through a series of Conv2d,BatchNorm2d, and LeakyReLU layers, and outputs the final probabilitythrough a Sigmoid activation function. This architecture can be extendedwith more layers if necessary for the problem, but there is significanceto the use of the strided convolution, BatchNorm, and LeakyReLUs. TheDCGAN paper mentions it is a good practice to use strided convolutionrather than pooling to downsample because it lets the network learn itsown pooling function. Also batch norm and leaky relu functions promotehealthy gradient flow which is critical for the learning process of both$G$ and $D$. Discriminator Code
###Code
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
###Output
_____no_output_____
###Markdown
Now, as with the generator, we can create the discriminator, apply the``weights_init`` function, and print the model’s structure.
###Code
# Create the Discriminator
netD = Discriminator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netD = nn.DataParallel(netD, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.2.
netD.apply(weights_init)
# Print the model
print(netD)
###Output
_____no_output_____
###Markdown
Loss Functions and Optimizers~~~~~~~~~~~~~~~~~~~~~~~~~~~~~With $D$ and $G$ setup, we can specify how they learnthrough the loss functions and optimizers. We will use the Binary CrossEntropy loss(`BCELoss `__)function which is defined in PyTorch as:\begin{align}\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\end{align}Notice how this function provides the calculation of both log componentsin the objective function (i.e. $log(D(x))$ and$log(1-D(G(z)))$). We can specify what part of the BCE equation touse with the $y$ input. This is accomplished in the training loopwhich is coming up soon, but it is important to understand how we canchoose which component we wish to calculate just by changing $y$(i.e. GT labels).Next, we define our real label as 1 and the fake label as 0. Theselabels will be used when calculating the losses of $D$ and$G$, and this is also the convention used in the original GANpaper. Finally, we set up two separate optimizers, one for $D$ andone for $G$. As specified in the DCGAN paper, both are Adamoptimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping trackof the generator’s learning progression, we will generate a fixed batchof latent vectors that are drawn from a Gaussian distribution(i.e. fixed_noise) . In the training loop, we will periodically inputthis fixed_noise into $G$, and over the iterations we will seeimages form out of the noise.
###Code
# Initialize BCELoss function
criterion = nn.BCELoss()
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
# Establish convention for real and fake labels during training
real_label = 1
fake_label = 0
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
###Output
_____no_output_____
###Markdown
Training~~~~~~~~Finally, now that we have all of the parts of the GAN framework defined,we can train it. Be mindful that training GANs is somewhat of an artform, as incorrect hyperparameter settings lead to mode collapse withlittle explanation of what went wrong. Here, we will closely followAlgorithm 1 from Goodfellow’s paper, while abiding by some of the bestpractices shown in `ganhacks `__.Namely, we will “construct different mini-batches for real and fake”images, and also adjust G’s objective function to maximize$logD(G(z))$. Training is split up into two main parts. Part 1updates the Discriminator and Part 2 updates the Generator.**Part 1 - Train the Discriminator**Recall, the goal of training the discriminator is to maximize theprobability of correctly classifying a given input as real or fake. Interms of Goodfellow, we wish to “update the discriminator by ascendingits stochastic gradient”. Practically, we want to maximize$log(D(x)) + log(1-D(G(z)))$. Due to the separate mini-batchsuggestion from ganhacks, we will calculate this in two steps. First, wewill construct a batch of real samples from the training set, forwardpass through $D$, calculate the loss ($log(D(x))$), thencalculate the gradients in a backward pass. Secondly, we will constructa batch of fake samples with the current generator, forward pass thisbatch through $D$, calculate the loss ($log(1-D(G(z)))$),and *accumulate* the gradients with a backward pass. Now, with thegradients accumulated from both the all-real and all-fake batches, wecall a step of the Discriminator’s optimizer.**Part 2 - Train the Generator**As stated in the original paper, we want to train the Generator byminimizing $log(1-D(G(z)))$ in an effort to generate better fakes.As mentioned, this was shown by Goodfellow to not provide sufficientgradients, especially early in the learning process. As a fix, weinstead wish to maximize $log(D(G(z)))$. In the code we accomplishthis by: classifying the Generator output from Part 1 with theDiscriminator, computing G’s loss *using real labels as GT*, computingG’s gradients in a backward pass, and finally updating G’s parameterswith an optimizer step. It may seem counter-intuitive to use the reallabels as GT labels for the loss function, but this allows us to use the$log(x)$ part of the BCELoss (rather than the $log(1-x)$part) which is exactly what we want.Finally, we will do some statistic reporting and at the end of eachepoch we will push our fixed_noise batch through the generator tovisually track the progress of G’s training. The training statisticsreported are:- **Loss_D** - discriminator loss calculated as the sum of losses for the all real and all fake batches ($log(D(x)) + log(D(G(z)))$).- **Loss_G** - generator loss calculated as $log(D(G(z)))$- **D(x)** - the average output (across the batch) of the discriminator for the all real batch. This should start close to 1 then theoretically converge to 0.5 when G gets better. Think about why this is.- **D(G(z))** - average discriminator outputs for the all fake batch. The first number is before D is updated and the second number is after D is updated. These numbers should start near 0 and converge to 0.5 as G gets better. Think about why this is.**Note:** This step might take a while, depending on how many epochs yourun and if you removed some data from the dataset.
###Code
# Training Loop
# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0
print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
# For each batch in the dataloader
for i, data in enumerate(dataloader, 0):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
## Train with all-real batch
netD.zero_grad()
# Format batch
real_cpu = data[0].to(device)
b_size = real_cpu.size(0)
label = torch.full((b_size,), real_label, device=device)
# Forward pass real batch through D
output = netD(real_cpu).view(-1)
# Calculate loss on all-real batch
errD_real = criterion(output, label)
# Calculate gradients for D in backward pass
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
# Generate batch of latent vectors
noise = torch.randn(b_size, nz, 1, 1, device=device)
# Generate fake image batch with G
fake = netG(noise)
label.fill_(fake_label)
# Classify all fake batch with D
output = netD(fake.detach()).view(-1)
# Calculate D's loss on the all-fake batch
errD_fake = criterion(output, label)
# Calculate the gradients for this batch
errD_fake.backward()
D_G_z1 = output.mean().item()
# Add the gradients from the all-real and all-fake batches
errD = errD_real + errD_fake
# Update D
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
label.fill_(real_label) # fake labels are real for generator cost
# Since we just updated D, perform another forward pass of all-fake batch through D
output = netD(fake).view(-1)
# Calculate G's loss based on this output
errG = criterion(output, label)
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
# Update G
optimizerG.step()
# Output training stats
if i % 50 == 0:
print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (epoch, num_epochs, i, len(dataloader),
errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
with torch.no_grad():
fake = netG(fixed_noise).detach().cpu()
img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
iters += 1
###Output
_____no_output_____
###Markdown
Results-------Finally, lets check out how we did. Here, we will look at threedifferent results. First, we will see how D and G’s losses changedduring training. Second, we will visualize G’s output on the fixed_noisebatch for every epoch. And third, we will look at a batch of real datanext to a batch of fake data from G.**Loss versus training iteration**Below is a plot of D & G’s losses versus training iterations.
###Code
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses,label="G")
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Visualization of G's progression**Remember how we saved the generator's output on the fixed_noise batch after every epoch of training. Now, we can visualize the training progression of G with an animation. Press the play button to start the animation.
###Code
#%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
###Output
_____no_output_____
###Markdown
**Real Images vs. Fake Images**Finally, let's take a look at some real images and fake images side by side.
###Code
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()
###Output
_____no_output_____ |
analyses/4-descriptive/descriptive.ipynb | ###Markdown
Descriptive statistics for network datasets
###Code
from pathlib import Path
import joblib
## CUSTOM FUNCTIONS FOR RUNNING THIS NOTEBOOK
from latex import to_latex
HERE = Path(".").absolute()
DATA = HERE/"data"
data = joblib.load(DATA/"descriptive-statistics.pkl.gz")
###Output
_____no_output_____
###Markdown
Social and biological networks Networks used in the "Structural coefficients in real networks" section.
###Code
dataset = "domains"
data.xs(dataset, level="group")
# print(to_latex(data, dataset))
###Output
_____no_output_____
###Markdown
Ugandan village networks
###Code
dataset = "social"
data.xs(dataset, level="group")
# print(to_latex(data, dataset))
###Output
_____no_output_____ |
003 - Machine Learing/RII/Ensembles_Text_Classifier(GradientBoosting).ipynb | ###Markdown
https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
###Code
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)
twenty_train.target_names
print(len(twenty_train.data))
print(len(twenty_train.filenames))
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
X_train_counts.shape
count_vect.vocabulary_.get(u'algorithm')
from sklearn.feature_extraction.text import TfidfTransformer
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
X_train_tf.shape
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier()
clf.fit(X_train_tfidf, twenty_train.target)
#docs_new = ['God is not GPU', 'OpenGL on the GPU is love']
docs_new = ['God is love', 'OpenGL on the GPU is fast']
X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tfidf_transformer.transform(X_new_counts)
predicted = clf.predict(X_new_tfidf)
for doc, category in zip(docs_new, predicted):
print('%r => %s' % (doc, twenty_train.target_names[category]))
from sklearn.pipeline import Pipeline
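# Pipeline chains the three steps below (token counts -> tf-idf weighting -> gradient boosting) into one estimator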
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', GradientBoostingClassifier()),
])
text_clf.fit(twenty_train.data, twenty_train.target)
import numpy as np
twenty_test = fetch_20newsgroups(subset='test', categories=categories, shuffle=True, random_state=42)
docs_test = twenty_test.data
predicted = text_clf.predict(docs_test)
np.mean(predicted == twenty_test.target)
from sklearn import metrics
print(metrics.classification_report(twenty_test.target, predicted, target_names=twenty_test.target_names))
metrics.confusion_matrix(twenty_test.target, predicted)
def matrix_confusao(y_test, y_pred, labels):
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
%matplotlib inline
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm.T, square=True, annot=True, fmt='d', cbar=True,
xticklabels=labels, yticklabels=labels, cmap='YlGnBu')
    plt.xlabel('Actual values')
    plt.ylabel('Predicted values')
matrix_confusao(twenty_test.target, predicted, twenty_test.target_names)
###Output
_____no_output_____ |
intro-to-python-ml/week2/Intro-to-Scikit-Learn.ipynb | ###Markdown
Introduction to Scikit-Learn Scikit-Learn is a well-known package that provides access to many common machine learning algorithms through a consistent, well-organized Application Programming Interface (API) and is supported by very thorough and comprehensive documentation.The uniform syntax and the consistency in how the API is designed mean that once you learn one model, it is surprisingly easy to pick up additional models. Let's take an example to familiarize ourselves with scikit-learn. To whet our appetite for what's to come, we will take a quick look at coffee prices near the North Shore of Oahu, Hawaii. Our goal will be to predict the price of a cup of coffee, given a cup size.These prices come from several coffee shops in the area, in 2019.|Size (oz)|Price ($)||----|----||12|2.95||16|3.65||20|4.15||14|3.25||18|4.20||20|4.00| Prep the data Let's look at the data in a simple scatter plot to compare the cost of coffee versus the size of the cup. We start with a set of standard imports...
###Code
import matplotlib.pyplot as plt
import numpy as np
# NOTE: during the Choose the Model step, we will import the
# model we want, but there is no reason you can't import it here.
# from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Prep the training and test data **The training data**:We start off by making two `numpy` arrays.
###Code
x_train = np.array([12, 16, 20, 14, 18, 20]) # Coffee cup sizes
y_train = np.array([2.95, 3.65, 4.15, 3.25, 4.20, 4.00]) # Coffee prices
###Output
_____no_output_____
###Markdown
Then we plot them using a `matplotlib` scatter plot.
###Code
plt.scatter(x_train, y_train);
###Output
_____no_output_____
###Markdown
In order to put this data into a linear regression machine learning algorithm, we need to create our features matrix, which includes just our coffee sizes (`x_train` values).In this case, we will use one of the `numpy` techniques to increase the dimensionality of the `x_train` array. We will discuss this process in greater detail in a few minutes.```X_train = x_train[:, np.newaxis]```We will call our training set: `X_train` (with an upper case `X`).
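If the extra dimension feels mysterious, this small illustrative check (using the same cup sizes) shows that ``np.newaxis``, ``None`` and ``reshape`` all build the same 2-D column that scikit-learn expects:

```python
# Illustrative only: three equivalent ways to turn a 1-D array into a single-column matrix.
import numpy as np

x = np.array([12, 16, 20, 14, 18, 20])
print(x.shape)                   # (6,)   -> plain 1-D array
print(x[:, np.newaxis].shape)    # (6, 1) -> 6 samples, 1 feature
print(x.reshape(-1, 1).shape)    # (6, 1) -> same thing; -1 lets numpy infer the row count
```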
###Code
x_train.shape
X_train = x_train.reshape(6,1) # creates an array of arrays
X_train.shape
###Output
_____no_output_____
###Markdown
Our target values are generally labeled `y_train` (with a lower case `y`) and these values can be a simple array.
###Code
y_train
###Output
_____no_output_____
###Markdown
**Now, the test data**: We need to have some test data to see what values the model will predict. Let's presume that some friends will be coming to the North Shore of Oahu and want to buy some coffee in various sizes, including some potentially unusual sizes.Based on their requests, we prep several cup sizes to see what price the model will predict.We generate a set of `x_test` values (representing size in oz.) in an array. Then we convert the array to a 2D matrix for inclusion as an argument when we get to the prediction phase. As noted above, we will discuss this in detail shortly.
###Code
x_test = np.array([16, 15, 12, 20, 17])
X_test = x_test[:, None] # None will accomplish the same
X_test # outcome as np.newaxis
###Output
_____no_output_____
###Markdown
Choose the Model For this quick example, we are going to import a simple **linear regression** model from the sklearn collection of linear models.
###Code
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Choose Appropriate Hyperparameters This model comes, as do most of the models in sklearn, with arguments (or hyperparameters) set to sane defaults, so for this case we won't add or change any arguments.**NOTE**: When Jupyter evaluates a model, it displays a string representation of that model with the current settings for the model, including any defaults.
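If you ever do want to inspect or override those defaults, a sketch like this works (purely illustrative, not needed for this example):

```python
# Illustrative: list the current hyperparameters, then override one explicitly.
from sklearn.linear_model import LinearRegression

print(LinearRegression().get_params())          # shows defaults such as fit_intercept=True
model_through_origin = LinearRegression(fit_intercept=False)
```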
###Code
model = LinearRegression()
model
###Output
_____no_output_____
###Markdown
Fit the model With a prepared model, we need to feed it data to evaluate. For this linear regression model, we give it two arguments: `X` and `y`.
###Code
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
With these inputs, the model was able to calculate the **slope** (coefficient) and the **y-intercept** of the line that aligns most closely with our training data.Let's look at both of these calculated results.```pythonmodel.coef_model.intercept_```**NOTE**: scikit-learn appends an `_` to the end of attributes that return **calculated** values. It does this to help distinguish between inputs and outputs
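As a quick sanity check (a sketch reusing the fitted ``model`` from above), the line can be applied by hand and compared with ``model.predict``:

```python
# price = coef_ * size + intercept_; the manual value should match the model's prediction.
manual_price = model.coef_[0] * 16 + model.intercept_
print(manual_price, model.predict([[16]])[0])
```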
###Code
model.coef_
model.intercept_
###Output
_____no_output_____
###Markdown
Apply the model
###Code
y_pred = model.predict(X_test)
y_pred
# reminder, these were the test cup sizes:
# [16, 15, 12, 20, 17]
###Output
_____no_output_____
###Markdown
Examine the Results From here, we can plot all of the data points together on one chart:* original values in purple* predicted values in red* predicted slope of the line that best fits the original training data
###Code
plt.scatter(x_train, y_train, color='rebeccapurple')
plt.scatter(x_test, y_pred, color='red', alpha=0.20)
plt.scatter(x_train, y_train, color='rebeccapurple')
plt.plot(x_test, y_pred, color='red');
###Output
_____no_output_____ |
ML-Base-MOOC/chapt-7 Logistic Regression/03-Logistic Regression(3).ipynb | ###Markdown
Logistic Regression(3)
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(666)
X = np.random.normal(0, 1, size=(200, 2))
y = np.array(X[:, 0]**2 + X[:, 1]**2 < 1.5, dtype='int')
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
###Output
_____no_output_____
###Markdown
1. Using logistic regression
###Code
from LogisticReg.LogisticRegression import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X, y)
log_reg.score(X, y)
###Output
_____no_output_____
###Markdown
2. Using polynomial features
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialLogisticRegression(degree):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression())
])
poly_log_reg = PolynomialLogisticRegression(degree=2)
poly_log_reg.fit(X, y)
poly_log_reg.score(X, y)
###Output
_____no_output_____
###Markdown
3. Using regularization (scikit-learn) **The regularization formulation used in scikit-learn:**$$C \cdot J(\theta) + L_1$$$$C \cdot J(\theta) + L_2$$
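As a quick illustration (parameter values chosen arbitrarily, not used in the cells below), both the regularization strength C and the penalty type are plain constructor arguments in scikit-learn:

```python
# Illustrative only: smaller C means stronger regularization; penalty selects L1 or L2.
from sklearn.linear_model import LogisticRegression

clf_l2 = LogisticRegression(C=0.1, penalty='l2', solver='lbfgs')
clf_l1 = LogisticRegression(C=0.1, penalty='l1', solver='liblinear')  # lbfgs cannot handle L1
```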
###Code
np.random.seed(666)
X = np.random.normal(0, 1, size=(200, 2))
y = np.array(X[:, 0]**2 + X[:,1] < 1.5, dtype='int')
for _ in range(20):
y[np.random.randint(200)] = 1
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train, y_train)
log_reg.score(X_train, y_train)
log_reg.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Polynomial logistic regression in scikit-learn
###Code
def PolynomialLogisticRegression(degree):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression(solver='lbfgs'))
])
poly_log_reg = PolynomialLogisticRegression(degree=2)
poly_log_reg.fit(X_train, y_train)
poly_log_reg.score(X_train, y_train)
poly_log_reg.score(X_test, y_test)
poly_log_reg2 = PolynomialLogisticRegression(degree=20)
poly_log_reg2.fit(X_train, y_train)
poly_log_reg2.score(X_train, y_train)
poly_log_reg2.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
- When degree = 20, the training score goes up but the test score goes down, which means the model's ability to generalize has degraded- i.e. the model has overfit. Add the hyperparameter C to regularize the model
###Code
# C: the coefficient placed in front of the loss term (smaller C = stronger regularization)
def PolynomialLogisticRegression(degree, C):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression(solver='lbfgs', C=C))
])
poly_log_reg3 = PolynomialLogisticRegression(degree=20, C=0.1)
poly_log_reg3.fit(X_train, y_train)
poly_log_reg3.score(X_train, y_train)
poly_log_reg3.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Logistic Regression(3)
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(666)
X = np.random.normal(0, 1, size=(200, 2))
y = np.array(X[:, 0]**2 + X[:, 1]**2 < 1.5, dtype='int')
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
###Output
_____no_output_____
###Markdown
1. Using logistic regression
###Code
from LogisticReg.LogisticRegression import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X, y)
log_reg.score(X, y)
def plot_decision_boundary(model, axis):
x0, x1 = np.meshgrid(
np.linspace(axis[0], axis[1], int((axis[1] - axis[0])*100)).reshape(1, -1),
np.linspace(axis[2], axis[3], int((axis[3] - axis[2])*100)).reshape(-1, 1)
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predic = model.predict(X_new)
zz = y_predic.reshape(x0.shape)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#EF9A9A', '#FFF590', '#90CAF9'])
plt.contourf(x0, x1, zz, linewidth=5, cmap=custom_cmap)
###Output
_____no_output_____
###Markdown
**Plot the decision boundary**
###Code
plot_decision_boundary(log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
###Output
/home/js/pyEnvs/tf_cpu/lib/python3.6/site-packages/ipykernel_launcher.py:15: UserWarning: The following kwargs were not used by contour: 'linewidth'
from ipykernel import kernelapp as app
###Markdown
2. Using polynomial features
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialLogisticRegression(degree):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression())
])
poly_log_reg = PolynomialLogisticRegression(degree=2)
poly_log_reg.fit(X, y)
poly_log_reg.score(X, y)
plot_decision_boundary(poly_log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
###Output
/home/js/pyEnvs/tf_cpu/lib/python3.6/site-packages/ipykernel_launcher.py:15: UserWarning: The following kwargs were not used by contour: 'linewidth'
from ipykernel import kernelapp as app
###Markdown
3. Using regularization (scikit-learn) **The regularization formulation used in scikit-learn:**$$C \cdot J(\theta) + L_1$$$$C \cdot J(\theta) + L_2$$
###Code
np.random.seed(666)
X = np.random.normal(0, 1, size=(200, 2))
y = np.array(X[:, 0]**2 + X[:,1] < 1.5, dtype='int')
# add some label noise to the samples
for _ in range(20):
y[np.random.randint(200)] = 1
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train, y_train)
log_reg.score(X_train, y_train)
log_reg.score(X_test, y_test)
plot_decision_boundary(log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
###Output
/home/js/pyEnvs/tf_cpu/lib/python3.6/site-packages/ipykernel_launcher.py:15: UserWarning: The following kwargs were not used by contour: 'linewidth'
from ipykernel import kernelapp as app
###Markdown
Polynomial logistic regression in scikit-learn
###Code
def PolynomialLogisticRegression(degree):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression(solver='lbfgs'))
])
poly_log_reg = PolynomialLogisticRegression(degree=2)
poly_log_reg.fit(X_train, y_train)
poly_log_reg.score(X_train, y_train)
poly_log_reg.score(X_test, y_test)
plot_decision_boundary(poly_log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
###Output
/home/js/pyEnvs/tf_cpu/lib/python3.6/site-packages/ipykernel_launcher.py:15: UserWarning: The following kwargs were not used by contour: 'linewidth'
from ipykernel import kernelapp as app
###Markdown
**Increase the polynomial degree**
###Code
poly_log_reg2 = PolynomialLogisticRegression(degree=30)
poly_log_reg2.fit(X_train, y_train)
plot_decision_boundary(poly_log_reg2, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
poly_log_reg2.score(X_train, y_train)
poly_log_reg2.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
- With such a large polynomial degree, the training score goes up but the test score goes down, which means the model's ability to generalize has degraded- i.e. the model has overfit. Add the hyperparameter C to regularize the model- this reduces the influence of the loss term and increases the influence of the regularization term
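One simple way to pick C is to sweep a few values and compare train/test scores; here is a sketch using the ``PolynomialLogisticRegression`` helper defined in the next cell:

```python
# Sketch: compare the train/test gap for a few regularization strengths (smaller C = stronger regularization).
for C in (0.01, 0.1, 1, 10):
    reg = PolynomialLogisticRegression(degree=20, C=C)
    reg.fit(X_train, y_train)
    print(C, reg.score(X_train, y_train), reg.score(X_test, y_test))
```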
###Code
# C: the coefficient placed in front of the loss term (smaller C = stronger regularization)
def PolynomialLogisticRegression(degree, C):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression(solver='lbfgs', C=C))
])
poly_log_reg3 = PolynomialLogisticRegression(degree=20, C=0.1)
poly_log_reg3.fit(X_train, y_train)
plot_decision_boundary(poly_log_reg3, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
poly_log_reg3.score(X_train, y_train)
poly_log_reg3.score(X_test, y_test)
###Output
_____no_output_____ |
notebooks/andras/movie-random-01.ipynb | ###Markdown
Imports
###Code
import pandas as pd
import numpy as np
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import pandas as pd
import urllib.request
from urllib.parse import urlparse
import warnings
warnings.simplefilter(action='ignore')
###Output
_____no_output_____
###Markdown
Load Data
###Code
df_train = pd.read_parquet('../input/movie-posters-2/df_train-split_v1.gzip')
df_val = pd.read_parquet('../input/movie-posters-2/df_eval-split_v1.gzip')
df_test = pd.read_parquet('../input/movie-posters-2/df_test-split_v1.gzip')
def get_path_from_url(url):
if url is None:
return None
return url.split('/')[-1]
df_train['filename'] = df_train['poster_url'].apply(get_path_from_url)
df_val['filename'] = df_val['poster_url'].apply(get_path_from_url)
df_test['filename'] = df_test['poster_url'].apply(get_path_from_url)
display(df_train.head(2))
display(df_val.head(2))
display(df_test.head(2))
df_train.shape, df_val.shape, df_test.shape
###Output
_____no_output_____
###Markdown
Create Image generators for train, validate and test
###Code
IMAGES_DIR = '../input/movie-posters-2/data'
label_cols = [col for col in df_train.columns if col.startswith('x0_')]
datagen = ImageDataGenerator()
train_generator = datagen.flow_from_dataframe(
dataframe=df_train,
directory=IMAGES_DIR,
x_col="filename",
y_col=label_cols,
batch_size=64,
shuffle=True,
class_mode="raw",
target_size=(299, 299),
)
valid_generator = datagen.flow_from_dataframe(
dataframe=df_val,
directory=IMAGES_DIR,
x_col="filename",
y_col=label_cols,
batch_size=64,
class_mode="raw",
target_size=(299, 299)
)
test_generator = datagen.flow_from_dataframe(
dataframe=df_test,
directory=IMAGES_DIR,
x_col="filename",
y_col=label_cols,
batch_size=64,
class_mode="raw",
target_size=(299, 299),
validate_filenames=True
)
###Output
Found 11764 validated image filenames.
Found 2942 validated image filenames.
Found 3677 validated image filenames.
###Markdown
Create Random Model
###Code
m1 = keras.Sequential()
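# Random baseline: the Lambda below ignores its input and just emits uniform scores in [0, 1]
# for the 19 outputs it is hard-coded to produce, so evaluate() gives a chance-level reference.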
m1.add(
layers.Lambda(lambda x: tf.random.uniform((1,19), minval=0.0, maxval=1.0), input_shape=(299,299,3))
)
m1.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
m1.evaluate(test_generator)
###Output
_____no_output_____ |
exams/2021-06-28/solutions/exam-2021-06-28-sol.ipynb | ###Markdown
Exam Mon 28 Jun 2021 B**Python Seminars - Bachelor in Sociology @Università di Trento** [Download](../../../_static/generated/sps-2021-06-28-exam.zip) exercises and solutions B1 Game of ThronesOpen the file [game-of-thrones.csv](game-of-thrones.csv) with Pandas; it contains the episodes across several years. - use the `UTF-8` encoding B1.1) You are given a dictionary `preferiti` with the favorite episodes of a group of people, who however do not remember the exact titles, which are therefore often incomplete: Select the episodes preferred by Paolo and Chiara- assume the capitalization in `preferiti` is the correct one- **NOTE**: the dataset contains tricky double quotes `"` around the titles, but if you write the code the right way this should not be a problem
###Code
import pandas as pd
import numpy as np # importiamo numpy e per comodità lo rinominiamo in 'np'
preferiti = {
"Paolo" : 'Winter Is',
"Chiara" : 'Wolf and the Lion',
"Anselmo" : 'Fire and',
"Letizia" : 'Garden of'
}
# write here
df = pd.read_csv('game-of-thrones.csv', encoding='UTF-8')
titolidf = df[ (df["Title"].str.contains(preferiti['Paolo'])) | (df["Title"].str.contains(preferiti['Chiara']))]
titolidf
#jupman-purge
jupman.draw_df(titolidf)
#/jupman-purge
###Output
_____no_output_____
###Markdown
B1.2) Select all the episodes that were first aired in a given year `anno` (column `Original air date`)- **NOTE**: `anno` is provided to you as an `int`
###Code
anno = 17
# write here
annidf = df[ df['Original air date'].str[-2:] == str(anno) ]
annidf
#jupman-purge
jupman.draw_df(annidf)
#/jupman-purge
###Output
_____no_output_____
###Markdown
B2 Universiade points of interestWrite a function that, given the file [punti-interesse.csv](punti-interesse.csv) of the Trento points of interest identified for the 2013 Winter Universiade, RETURNS a sorted list without duplicates of all the names found in the `CATEGORIA` column. Data source: [dati.trentino.it](https://dati.trentino.it/dataset/poi-trento)- **USE a csv.reader and the encoding** `latin-1`- do not include empty categories in the result- some categories are actually more than one, separated by a dash: split them into distinct categories. Examples: - `Banca- Bancomat-Cambiovaluta` - `Centro commerciale-Grande magazzino`
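To see why each piece gets strip()-ped in the solution below, look at what splitting one of the example categories actually produces:

```python
# '-'-splitting keeps the stray space after 'Banca-', hence the strip() in the solution.
print('Banca- Bancomat-Cambiovaluta'.split('-'))        # ['Banca', ' Bancomat', 'Cambiovaluta']
print([piece.strip() for piece in 'Banca- Bancomat-Cambiovaluta'.split('-')])
```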
###Code
import csv
def cercat(file_csv):
#jupman-raise
with open(file_csv, encoding='latin-1', newline='') as f:
lettore = csv.reader(f, delimiter=',')
next(lettore)
ret = set()
for riga in lettore:
for elem in riga[3].split('-'):
if elem.strip() != '':
ret.add(elem.strip())
return sorted(ret)
#/jupman-raise
risultato = cercat('punti-interesse.csv')
print(risultato)
atteso = ['Affitta Camere', 'Agriturismo', 'Alimentari', 'Appartamento Vacanze',
'Autostazione', 'Banca', 'Bancomat', 'Bar', 'Bed & Breakfast', 'Biblioteca',
'Birreria', 'Bus Navetta', 'Cambiovaluta', 'Camping', 'Centro Wellness',
'Centro commerciale', 'Corrieri', 'Discoteca', 'Editoria', 'Farmacia', 'Funivia',
'Gelateria', 'Grande magazzino', 'Hotel', 'Istituzioni', 'Mercatini', 'Mercato',
'Monumento', 'Museo', 'Noleggio Sci', 'Numeri utili', 'Parcheggio', 'Pasticceria',
'Piscina', 'Posta', 'Prodotti tipici', 'Pub', 'Residence', 'Rifugio', 'Ristorante',
'Scuola Sci', 'Sede Trentino Trasporti', 'Snow Park', 'Souvenir', 'Sport', 'Stadio',
'Stadio del ghiaccio', 'Stazione dei Treni', 'Taxi', 'Teatro', 'Ufficio informazioni turistiche']
#TEST
print()
for i in range(len(atteso)):
if risultato[i] != atteso[i]:
print("ERRORE ALL'ELEMENTO %s:" % i)
print(' ATTESO:', atteso[i])
print(' TROVATO:', risultato[i])
break
###Output
['Affitta Camere', 'Agriturismo', 'Alimentari', 'Appartamento Vacanze', 'Autostazione', 'Banca', 'Bancomat', 'Bar', 'Bed & Breakfast', 'Biblioteca', 'Birreria', 'Bus Navetta', 'Cambiovaluta', 'Camping', 'Centro Wellness', 'Centro commerciale', 'Corrieri', 'Discoteca', 'Editoria', 'Farmacia', 'Funivia', 'Gelateria', 'Grande magazzino', 'Hotel', 'Istituzioni', 'Mercatini', 'Mercato', 'Monumento', 'Museo', 'Noleggio Sci', 'Numeri utili', 'Parcheggio', 'Pasticceria', 'Piscina', 'Posta', 'Prodotti tipici', 'Pub', 'Residence', 'Rifugio', 'Ristorante', 'Scuola Sci', 'Sede Trentino Trasporti', 'Snow Park', 'Souvenir', 'Sport', 'Stadio', 'Stadio del ghiaccio', 'Stazione dei Treni', 'Taxi', 'Teatro', 'Ufficio informazioni turistiche']
###Markdown
B3 grattThe skyline of a city can be represented as a 2D list where the `1`s represent buildings. In the example below, the height of the tallest building is `4` (the second column from the right)```python[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 1, 0, 1, 0], [0, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 1]]```Write a function that takes a skyline as a 2-D list of `0`s and `1`s and RETURNS the height of the tallest skyscraper; for more examples see the asserts.
###Code
#jupman-purge-io
#Credits: exercise taken from [Edabit Tallest Skyscraper](https://edabit.com/challenge/76ibd8jZxvhAwDskb)
def gratt(mat):
#jupman-raise
n,m = len(mat), len(mat[0])
for i in range(n):
for j in range(m):
if mat[i][j] == 1:
return n-i
return 0
#/jupman-raise
assert gratt([[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 1, 0],
[1, 1, 1, 1, 1, 1]]) == 4
assert gratt([
[0, 0, 0, 0],
[0, 1, 0, 0],
[0, 1, 1, 0],
[1, 1, 1, 1]
]) == 3
assert gratt([
[0, 1, 0, 0],
[0, 1, 0, 0],
[0, 1, 1, 0],
[1, 1, 1, 1]
]) == 4
assert gratt([
[0, 0, 0, 0],
[0, 0, 0, 0],
[1, 1, 1, 0],
[1, 1, 1, 1]
]) == 2
###Output
_____no_output_____
###Markdown
B4 scendisaliWrite a function that, given the dimensions `n` rows and `m` columns, RETURNS a NEW numpy n x m matrix with sequences that go down and up on alternating rows as in the examples- if `m` is odd, raise a `ValueError````python>>> scendisali(6,10)array([[0., 0., 0., 0., 0., 4., 3., 2., 1., 0.], [0., 1., 2., 3., 4., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 4., 3., 2., 1., 0.], [0., 1., 2., 3., 4., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 4., 3., 2., 1., 0.], [0., 1., 2., 3., 4., 0., 0., 0., 0., 0.]])```
###Code
import numpy as np
def scendisali(n,m):
#jupman-raise
if m%2 == 1:
        raise ValueError("m must be even, got %s" % m)
mat = np.zeros((n,m))
for i in range(0,n,2):
for j in range(m//2):
mat[i,j+m//2] = m//2 - j - 1
for i in range(1,n,2):
for j in range(m//2):
mat[i,j] = j
return mat
#/jupman-raise
assert np.allclose(scendisali(1,2), np.array([[0., 0.],
[0., 0.]]))
assert type(scendisali(1,2)) == np.ndarray
assert np.allclose(scendisali(2,6), np.array([[0., 0., 0., 2., 1., 0.],
[0., 1., 2., 0., 0., 0.]]))
assert np.allclose(scendisali(6,10), np.array([[0., 0., 0., 0., 0., 4., 3., 2., 1., 0.],
[0., 1., 2., 3., 4., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 4., 3., 2., 1., 0.],
[0., 1., 2., 3., 4., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 4., 3., 2., 1., 0.],
[0., 1., 2., 3., 4., 0., 0., 0., 0., 0.]]))
try:
scendisali(2,3)
raise Exception("Avrei dovuto fallire prima!")
except ValueError:
pass
###Output
_____no_output_____ |
pandasGeospatialABC.ipynb | ###Markdown
Table of Contents: 1 geopandas installation instructions; 2 GIS type files; 2.1 Take a peek into a GIS file; 2.2 Points; 2.3 Polygons; 3 Geojson; 4 Shape files; 5 Coordinates; 5.1 Converting coordinates; 6 Geometric operations with shapes; 7 Points in shapes; 8 A note about efficiency; 9 Additional examples
###Code
__author__ = 'Federica B. Bianco CUSP-NYU'
###Output
_____no_output_____
###Markdown
geopandas installation instructions http://geopandas.org/install.html I recommend the anaconda installation, and installing fiona and shapely first.
###Code
from __future__ import print_function, division
#importing pandas for reading and parsing of tabulated data
import pandas as pd
#importing geopandas read to plot geographical information
import geopandas as gpd
#importing fiona to handle geographical coordinates
import fiona
#import shapely to handle geographical shapes
import shapely
import pylab as pl
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
GIS type files. We will use geocoded data with geographical information stored in 2 formats: geojson and shapefiles (.shp). Geojson: these are json format files with a geometry entry, called the_geom or geometry. The geometry entry is populated by GIS shapes, which can be points, polygons, or multipolygons (for more complex shapes). These are suitable for hybrid tabular datasets, which can contain all sorts of information associated to a shape: e.g. zipcode geography and zipcode population. Shapefiles: these are not single files, but sets of files, one of which is a shapefile (.shp), and are generally downloaded as a zipfile or tarball. The files other than the .shp extension are needed to read in the shapefile correctly, so although we refer to them collectively as "shapefile" and we explicitly only load the .shp file, we need the other files as well. Take a peek into a GIS file: let's take a peek at a .geojson file
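As a minimal sketch of the two loading patterns described above (the file names here are placeholders, not files that ship with this notebook):
```python
import geopandas as gpd

# a geojson file is a single self-contained file
gdf_from_geojson = gpd.GeoDataFrame.from_file("some_boundaries.geojson")

# a "shapefile" is really a bundle: point at the .shp, but keep its .dbf/.shx/.prj siblings in the same folder
gdf_from_shapefile = gpd.GeoDataFrame.from_file("some_boundaries.shp")
```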
###Code
!pwd
!head -3 NYC_shapefiles/Police\ Precincts.geojson
###Output
/Users/fbianco/science/Dropbox/randomprojs/smart_cities
{
"type": "FeatureCollection",
"features": [
###Markdown
That looks just like a json file, but if I tried to print one more line I would crash the python kernel: the next json entry is the "feature", which contains the geometry, which is a very long list of points that trace the perimeter of the first precinct, and since administrative (and generally geographical) shapes can be far more complex than say a square or a circle this is a very long list of coordinates. { "type": "FeatureCollection", "features": [ {"type":"Feature","properties":{"precinct":"1","shape_area":"47182160.4145","shape_leng":"79979.409545"},"geometry":{"type":"MultiPolygon","coordinates":[[[[-74.0438776157395,40.69018767637665],[-74.0435059601254,40.68968735963635],[-74.04273533826982,40.69005019142044],[-74.04278433380006,40.69012097669115],[-74.04270428426766,40.690155204644306],[-74.04255372037308,40.6899627592896],[-74.0426392937119,40.68992817641333],[-74.0426938081918,40.689997259107216],[-74.04346752310265,40.68963699010347],[-74.04351637245855,40.68919103374234],[-74.04364078627412,40.68876655957014],[-74.04397458556184,40.68858240705591],[-74.0443852177728,40.688516178402686],[-74.04478399040363,40.68859566011588],[-74.04627539003668,40.689327425896714],[-74.04680284898575,40.68995325626601],[-74.04747651462345,40.68961136999828],[-74.04772962763064,40.68991531846602],[-74.04758571924786,40.68998250682616],[-74.04743126123475,40.68980388996831],[-74.04689205500591,40.69005909832262],[-74.04720029366251,40.69042481562375],[-74.04711050698607,40.69047041285008],[-74.04711582042361,40.6906558061182]... Let's step back and take a look at more trivial shapes for a second: Points: geometric shapes are defined in the shapely package
###Code
#the simplest geometry is a point
from shapely.geometry import Point
pt1 = Point((0, 0.5))
pt2 = Point((0.2, 0.1))
pt3 = Point((0, 0.3))
#points know how to plot themselves
pt1
#but without context that does not mean much...
pt2
#we can create a geopandas GDF from these geometry point objects
pointsGPD = gpd.GeoDataFrame()
pointsGPD["geometry"] = [pt1, pt2, pt3]
#geopandas GDF also know how to plot themselves
pointsGPD.plot(color="k")
###Output
_____no_output_____
###Markdown
Polygons
###Code
from shapely.geometry import Polygon
#polygons are defined by straight lines joining points on their perimeter
pg1 = Polygon([(0, 0), (1, 0), (1, 1)])
#shapely polygons know how to plot themselves
pg1
#polygons can have interior holes
#note also that the order of the corners matters
pg2 = Polygon([(0, 0), (1, 0), (1, 1)], [[(0.5, 0), (0.7, 0), (0.5, 0.5), (0.6, 0.5)]])
pg2
#we can create a geopandas GDF from these hybrid geometry objects
pointsPolygGPD = gpd.GeoDataFrame()
pointsPolygGPD["geometry"] = [pt1,pt2,pt3, pg1]
pointsPolygGPD.plot()
###Output
_____no_output_____
###Markdown
Geojson. Now let's load the geojson file we looked at briefly before: we do that using the geopandas package, which interacts well with pandas objects, shares many of the pandas functionalities, but has additional functions and objects that deal with GIS information. The main geopandas object is a geopandas GeoDataFrame (we have seen them before).
###Code
precincts = gpd.GeoDataFrame.from_file("NYC_shapefiles/Police Precincts.geojson")
###Output
_____no_output_____
###Markdown
GeoDataFrames look like dataframes in pandas: tabulated objects
###Code
precincts.head()
###Output
_____no_output_____
###Markdown
but they have a geometry column which hosts the shapes, which can be points, polygons, or multipolygons
###Code
precincts.geometry
###Output
_____no_output_____
###Markdown
geodataframes know how to plot themselves!
###Code
precincts.plot()
###Output
_____no_output_____
###Markdown
We can make a choropleth of this simple map: a map that is color-coded by the value of one of the columns in the GDF. Here let's trivially color-code the precincts by their area
###Code
precincts.plot?
precincts.plot(column="shape_area", cmap="bone")
###Output
_____no_output_____
###Markdown
The details of how to embellish a choropleth, such as for example putting a colorbar, adjusting axes, lines etc., are a bit painful. However, I created a python module that will help you make choropleths of any NYC GDF: it is fine-tuned to the shape of NYC and it makes it easier to add colorbars etc.; you can download it here https://github.com/fedhere/choroplethNYC
###Code
import choroplethNYC
choroplethNYC.choroplethNYC?
fig, ax, cb = choroplethNYC.choroplethNYC(precincts, "shape_area", cb=True)
###Output
_____no_output_____
###Markdown
Shape files. Shapefiles can also be read into GDFs with geopandas. Generally they are zipped directories when you download them
###Code
!unzip NYC_shapefiles/Police\ Precincts.zip
!ls
###Output
Archive: NYC_shapefiles/Police Precincts.zip
replace geo_export_be3a4049-303e-42f1-a875-70e07e3011a3.dbf? [y]es, [n]o, [A]ll, [N]one, [r]ename: ^C
DPR_Inspection_001.xml
DPRshort.xml
[34mNYC_shapefiles[m[m
README.md
Spatial Joins.ipynb
TrafficVolumeCounts2012-2013_.csv
TrafficVolumeCounts2012_2013_.csv
Traffic_Volume_Counts__2012-2013_.csv
Untitled.ipynb
Untitled1.ipynb
Untitled2.ipynb
UsingShapeFiles.ipynb
[34mWIMLDSSmartCities[m[m
ZIP_CODE_040114.dbf
ZIP_CODE_040114.prj
ZIP_CODE_040114.sbn
ZIP_CODE_040114.sbx
ZIP_CODE_040114.shp
ZIP_CODE_040114.shp.xml
ZIP_CODE_040114.shx
bigquery_credentials.dat
census00_metadata.csv
census10_metadata.csv
census10_metadata.numbers
[34mcrimeData[m[m
geo_export_be3a4049-303e-42f1-a875-70e07e3011a3.dbf
geo_export_be3a4049-303e-42f1-a875-70e07e3011a3.prj
geo_export_be3a4049-303e-42f1-a875-70e07e3011a3.shp
geo_export_be3a4049-303e-42f1-a875-70e07e3011a3.shx
[34mnyu_2451_34510[m[m
parkInspectionParser.ipynb
parksInspections.csv
[34mroadsLength[m[m
[34mtmp[m[m
tmp.py
[34muber[m[m
[34muber-tlc-foil-response[m[m
###Markdown
the files got unpacked under the name geo_export_be3a4049-303e-42f1-a875-70e07e3011a3 and you need to read in only the shape file explicitly, though you need the other files to exist in the same directory for the shapefile to be read correctly:
###Code
gpd.GeoDataFrame.from_file("geo_export_be3a4049-303e-42f1-a875-70e07e3011a3.shp")
###Output
_____no_output_____
###Markdown
Coordinates Now let's spend a second discussing what the numbers we read in the geometry feature actually mean: geographies are represented on a 2D medium (screen, paper) by choosing a projection to draw spherical surfaces on a plane. Any projection causes some degree of deformation. In GIS convention different numbers indicate different projections. We will use the EPSG conventions. Once we choose the projection each point in our geography is identified uniquely by 2 numbers, the coordinates. For NYC there are 2 main coordinate systems in use:**epsg = 4326 latitude-longitude (also known as WGS84).** Equatorial coordinates. The numbers are in degrees. The precinct file above is in epsg 4326. 40.7128° N, 74.0059° W are the geographical coordinates of NYC (Manhattan City Hall). Notice that equatorial coordinates can be identified with +/- signs, or E/W and N/S, but in GIS the +/- is the only thing that makes sense, because we want to deal with numbers.https://en.wikipedia.org/wiki/World_Geodetic_System WGS84 comprises a standard coordinate frame for the Earth, a datum/reference ellipsoid for raw altitude data, and a gravitational equipotential surface (the geoid) that defines the nominal sea level.**epsg = 2263 New York Long Island (ftUS) (also NAD83).** The State Plane Coordinate System (SPCS) is a set of 124 geographic zones or coordinate systems designed for specific regions of the United States. By ignoring the curvature of the Earth it uses a simple Cartesian coordinate system to specify locations rather than a more complex spherical coordinate system. Therefore each "zone" is only accurate in the vicinity of its center. The most commonly used SPCS for NYC is the New York Long Island system. The coordinates are in feet, easting and northing (increasing numbers indicate farther east and farther north locations), notice that (0,0) is outside of the epsg = 2263 region! Converting coordinates
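As a small illustration of the two systems (a sketch that assumes a recent pyproj, which geopandas pulls in as a dependency; the coordinates are the City Hall values quoted above):
```python
from pyproj import Transformer

# convert lon/lat (EPSG:4326, degrees) to New York Long Island State Plane (EPSG:2263, US feet)
to_state_plane = Transformer.from_crs("EPSG:4326", "EPSG:2263", always_xy=True)
easting_ft, northing_ft = to_state_plane.transform(-74.0059, 40.7128)  # note the lon, lat order
print(easting_ft, northing_ft)  # large positive numbers of feet, nowhere near (0, 0)
```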
###Code
#check if a coordinate system is set first:
precincts.crs
from fiona.crs import from_epsg
## if the coordinates are not set they can be set with
#precincts.crs = from_epsg(4326)
#after one understands what it may be, which may be tricky, but again, for NYC it is generally one of the systems below
## epsg=4326: lat/lon | 26918: NAD83/UTM zone 18N | epsg=2263 is US feet
#convert to State Plane: for example this is needed to calculate areas, since epsg=2263 is in feet, while 4326 is in degrees
#area in feet sq
NYC_Area = precincts.to_crs(epsg=2263).geometry.area.sum()
#convert to miles sq
NYC_Area /= (2.788 * 10**7)
print ('total NYC land area: {:.1f} (mi^2)'.format(NYC_Area))
###Output
total NYC land area: 302.2 (mi^2)
###Markdown
Geometric operations with shapes. Let's read in the borough boundaries, and find out which precincts are in Queens
###Code
#a shapefile with the 5 boroughs boundaries
boroughs = gpd.GeoDataFrame.from_file("NYC_shapefiles/Borough Boundaries.geojson")
boroughs
precincts.intersects(boroughs.iloc[4].geometry)
fig, ax = choroplethNYC.choroplethNYC(boroughs, alpha=0)
precincts[precincts.intersects(boroughs.iloc[4].geometry)].plot(ax=ax)
###Output
_____no_output_____
###Markdown
A lot more geometry gymnastics can be performed with geopandas. See here http://geopandas.org/geometric_manipulations.html Points in shapes: you can also ask how many points are within a shape. Let's use the 311 complaints, downloading them [as instructed here](https://github.com/WiMLDS/smart_cities/wiki/Data-Access-Instructions). Careful with these queries: the datasets are very large. Let's ask for 1000 lines of complaints related to noise.
###Code
from pandas.io.gbq import read_gbq
project = "spheric-crow-161317"
sample_query = "SELECT * FROM `bigquery-public-data.new_york.311_service_requests`" +\
" WHERE complaint_type LIKE '%Noise%' LIMIT 1000"
df = read_gbq(query=sample_query, project_id=project, dialect='standard')
df.head()
###Output
Requesting query... ok.
Query running...
Query done.
Cache hit.
Retrieving results...
Got 1000 rows.
Total time taken 2.22 s.
Finished at 2017-03-25 12:12:53.
###Markdown
this dataframe has geographical coordinates, but they are not encoded as GIS information
###Code
df[['latitude','longitude','location']].head()
#turn lat long into shapely points
df.dropna(axis=0, subset=['latitude','longitude'], inplace=True)
df['lonlat'] = list(zip(df.longitude, df.latitude))
df['geometry'] = df[['lonlat']].applymap(lambda x:shapely.geometry.Point(x))
df.head()
noisegdf = gpd.GeoDataFrame(df)
noisegdf.crs = {'init': 'epsg:4326'} ## noisegdf did not have a defined crs,
# but it is clearly in 4326: lat/lon
noisegdf.head()
fig, ax = choroplethNYC.choroplethNYC(boroughs, alpha=0)
noisegdf.plot(ax=ax)
###Output
_____no_output_____
###Markdown
Let's count the complaints by borough. This information is already in the dataframe, but let's just demonstrate how you could do it. This is also painfully slow... see below for a more "geopythonic" way to do that!
###Code
noisegdf['borough'] = np.array([np.where(boroughs.geometry.contains(noise))[0][0]
for noise in noisegdf.geometry.values[:]])
noisegdf.dropna(subset=['borough'], inplace=True)
noisegdf['borough'] = noisegdf['borough'].astype(int)
for i in range(len(boroughs)):
print (boroughs.iloc[i].boro_name, "%.2f"%(sum(noisegdf['borough'] == i) *
100. / len(noisegdf)),'%')
###Output
Staten Island 3.51 %
Bronx 17.85 %
Manhattan 30.39 %
Brooklyn 29.29 %
Queens 18.96 %
###Markdown
map the noise
###Code
colors = ["r", "y", "b", "g", 'purple']
fig, ax = choroplethNYC.choroplethNYC(boroughs, alpha=0)
for i in range(len(boroughs)):
noisegdf[noisegdf.borough == i].dropna().plot(ax=ax, color=colors[i])
###Output
_____no_output_____
###Markdown
Finally, let's print the percentage of calls on top of the neighborhood shape
###Code
fig, ax = choroplethNYC.choroplethNYC(boroughs, alpha=0)
for i in range(len(boroughs)):
center = boroughs.iloc[i].geometry.centroid.coords[0]
pl.text(center[0], center[1], "{0:.1f}%".format(sum(noisegdf['borough'] == i) *
100. / len(noisegdf)),
fontsize=20, horizontalalignment="center")
###Output
_____no_output_____
###Markdown
A note about efficiency. The way I show how to count complaints in each borough with a list comprehension is **very inefficient**. GeoPandas has faster functions to do spatial analytics tasks! Check out the spatial join *sjoin* method:
###Code
gpd.sjoin?
noisegdf = gpd.GeoDataFrame(df)
noisegdf.crs = {'init': 'epsg:4326'}
noisegdf.head()
#use %time to see howmuch better sjoin is
%time noisegdf = gpd.sjoin(noisegdf, boroughs, how="left", op='within')
fig, ax = choroplethNYC.choroplethNYC(boroughs, alpha=0)
for i in range(len(boroughs)):
center = boroughs.iloc[i].geometry.centroid.coords[0]
pl.text(center[0], center[1], "{0:.1f}%".format(sum(noisegdf['borough'] == i) *
100. / len(noisegdf)),
fontsize=20, horizontalalignment="center")
%time noisegdf['borough'] = np.array([np.where(boroughs.geometry.contains(noise))[0][0]
for noise in noisegdf.geometry.values[:]])
fig, ax = choroplethNYC.choroplethNYC(boroughs, alpha=0)
for i in range(len(boroughs)):
center = boroughs.iloc[i].geometry.centroid.coords[0]
pl.text(center[0], center[1], "{0:.1f}%".format(sum(noisegdf['borough'] == i) *
100. / len(noisegdf)),
fontsize=20, horizontalalignment="center")
###Output
_____no_output_____ |
numpy_favtutor.ipynb | ###Markdown
###Code
# import numpy library
import numpy as np
# print current np version
np.__version__
# adding two equal shaped ndarray(s)
a = np.array([[1,2,3],
[4,5,6]])
b = np.array([[10,11,12],
[13,14,15]])
a.shape
b.shape
c = a + b
c
# multiply an ndarray by a scalar value
b = 2 * a
b
# create a 4x4 dimension identity matrix
i = np.eye(4)
i
# create a list with values ranging from 0 to 26
a = np.array([x for x in range(27)])
a.shape
# reshaping an existing ndarray
o = a.reshape((3,3,3))
o
# array datatype conversion
a = np.array([[2.5, 3.8, 0],
[4.7, 2.9, 1.56]])
a
a.dtype
# convert float to int
o = a.astype('int')
o
o.dtype
# convert int to boolean
o = o.astype('bool')
o
i = np.eye(4)
i
o = i.astype('bool')
o
# stacking ndarrays
a1 = np.array([[1,2,3],
[4,5,6]])
a2 = np.array([[7,8,9],
[10,11,12]])
a1
a2
o = np.hstack((a1, a2))
o
o = np.vstack((a1, a2))
o
a1 = np.array([[1,2],[3,4]])
a2 = np.array([[5,6],[7,8],[9,10]])
np.vstack((a1, a2))
# custom sequence generation
list_of_numbers = [x for x in range(0, 101, 2)]
o = np.array(list_of_numbers)
o
# matching positions
a = np.array([1,2,3,4,5])
b = np.array([1,3,2,4,5])
np.where(a == b)
for i in np.where(a == b):
print(a[i])
# generation of equally spaced numbers within a range
o = np.linspace(0, 100, 5)
o
o = np.linspace(0, 1, 5)
o
# matrix generation with one particular value
o = np.ones((2, 3))
o
o = o * 5
o = o.astype('int')
o
o = np.full((2, 3), 5)
o
o = np.full((3, 3), 'True')
o
o = np.full((3, 3), 1)
o
o = np.ones((3,3)).astype('int')
o
# array generation by repetition
a = np.array([[1,2,3], [4,5,6]])
a
o = np.tile(a, 10)
o
o.shape
# array generation of random integers within a range
np.random.seed(123)
o = np.random.randint(0, 10, size=(5, 5))
o
np.random.seed(12)
o = np.random.randint(0, 10, size=(5, 5))
o
# array generation of random numbers following a normal distribution
np.random.seed(123)
o = np.random.normal(size=(3,3))
o
# matrix multiplication
a = np.array([[1,2,3],
[4,5,6],
[7,8,9]])
b = np.array([[2,3,4],
[5,6,7],
[8,9,10]])
a * b # element-wise multiplication, not matrix multiplication
a@b # correct matrix multiplication
np.matmul(a, b) # correct matrix multiplication
b@a
# matrix transpose
a = np.array([[1,2,3],
[4,5,6],
[7,8,9]])
a_transpose = a.T
a_transpose
a@a_transpose
a = np.array([[1,2],[3,4]])
a
b = np.linalg.inv(a)
a@b
# sine of an angle (radians)
angles = np.array([3.14, 3.14/2, 6.28, 3*3.14/2])
sine_of_angles = np.sin(angles)
sine_of_angles
# cosine of an angle (radians)
angles = np.array([3.14, 3.14/2, 6.28, 3*3.14/2])
cosine_of_angles = np.cos(angles)
cosine_of_angles
# order array elements
array = np.array([10,1,5,2])
indexes = np.argsort(array)
indexes
for i in indexes:
print(array[i])
###Output
_____no_output_____ |
train_reproducible.ipynb | ###Markdown
Setup and parameter selection
###Code
# imports used in this cell (assumption: trainArgs and main_worker are defined/imported elsewhere in the project)
import os
import pathlib
import random
import warnings

import torch
import torch.backends.cudnn as cudnn
import torch.multiprocessing as mp

# set parameters
# directories
root = pathlib.Path.cwd()
main_file = 'candidate.npz'
sub_file = 'cifar.npz'
# TODO: Experiments with synthetic noise and fractions
# mix_cifar = True
# inject_noise = 0.05
# model and output
model = 'resnet'
outpath = pathlib.Path.cwd() / 'results' / model / 'train_logs.txt'
# evaluate mode with recording indices
# evaluate = True
# track_correct=True
# resume_file = 'model_best.pth.tar'
# run parameters
epochs = 50
print_freq = 1000
gpu = 0
batch_size = 128 if model in ['densenet', 'pyramidnet', 'resnet_basic'] else 256
args = trainArgs(root, epochs=epochs, batch_size=batch_size, print_freq=print_freq, outpath=outpath,
main_file=main_file,
sub_file=sub_file,
# evaluate=evaluate,
# track_correct=track_correct,
# resume=resume_file,
model=model, gpu=gpu)
args.outpath.parent.mkdir(exist_ok=True, parents=True)
if args.seed is not None:
random.seed(args.seed)
torch.manual_seed(args.seed)
cudnn.deterministic = True
warnings.warn('You have chosen to seed training. '
'This will turn on the CUDNN deterministic setting, '
'which can slow down your training considerably! '
'You may see unexpected behavior when restarting '
'from checkpoints.')
if args.gpu is not None:
warnings.warn('You have chosen a specific GPU. This will completely '
'disable data parallelism.')
if args.dist_url == "env://" and args.world_size == -1:
args.world_size = int(os.environ["WORLD_SIZE"])
args.distributed = args.world_size > 1 or args.multiprocessing_distributed
ngpus_per_node = torch.cuda.device_count()
if args.multiprocessing_distributed:
# Since we have ngpus_per_node processes per node, the total world_size
# needs to be adjusted accordingly
args.world_size = ngpus_per_node * args.world_size
# Use torch.multiprocessing.spawn to launch distributed processes: the
# main_worker process function
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
else:
# Simply call main_worker function
main_worker(args.gpu, ngpus_per_node, args)
###Output
_____no_output_____ |
_notebooks/2022-05-28_Geospatial02.ipynb | ###Markdown
Geospatial_02_exercise You are a bird conservation expert trying to understand the migration patterns of purple martins. In your research, you find that these birds typically spend the summer breeding season in the eastern United States and migrate to South America for the winter. But because the bird is endangered, you want to take a closer look at the places these birds are more likely to visit. There are several protected areas in South America (https://www.iucn.org/theme/protected-areas/about), which operate under special regulations to make sure that migrating (or resident) species have the best possible chance to thrive. You'd like to know whether purple martins tend to visit these areas. To answer this question, you'll use recently collected data that tracks the year-round locations of 11 different birds. Before you get started, run the code cell below to set everything up.
###Code
import pandas as pd
import geopandas as gpd
from shapely.geometry import LineString
from learntools.core import binder
binder.bind(globals())
from learntools.geospatial.ex2 import *
###Output
c:\Users\User\anaconda3\lib\site-packages\pyproj\crs\crs.py:130: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
in_crs_string = _prepare_from_proj_string(in_crs_string)
c:\Users\User\anaconda3\lib\site-packages\pyproj\crs\crs.py:130: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
in_crs_string = _prepare_from_proj_string(in_crs_string)
###Markdown
1) Load the data. Run the next code cell (without changes) to load the GPS data into a pandas DataFrame `birds_df`.
###Code
# Load the data and print the first 5 rows
birds_df = pd.read_csv("data/purple_martin.csv", parse_dates=['timestamp'])
print("There are {} different birds in the dataset.".format(birds_df["tag-local-identifier"].nunique()))
birds_df.head()
###Output
There are 11 different birds in the dataset.
###Markdown
There are 11 birds in the dataset, each identified by a unique value in the "tag-local-identifier" column. Each bird has several measurements collected at different times of the year. Use the next code cell to create a GeoDataFrame `birds`. - `birds` should have all of the columns from `birds_df`, along with a "geometry" column that contains Point objects for the (longitude, latitude) locations. - Set the CRS of `birds` to `{'init': 'epsg:4326'}`.
###Code
# Your code here: Create the GeoDataFrame
birds = gpd.GeoDataFrame(birds_df, geometry=gpd.points_from_xy(birds_df["location-long"], birds_df["location-lat"]))
# Your code here: Set the CRS to {'init': 'epsg:4326'}
birds.crs = {'init' :'epsg:4326'}
# Check your answer
q_1.check()
###Output
c:\Users\User\anaconda3\lib\site-packages\pyproj\crs\crs.py:130: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
in_crs_string = _prepare_from_proj_string(in_crs_string)
###Markdown
2) Plot the data. Next, load the 'naturalearth_lowres' dataset from GeoPandas, and set `americas` to a GeoDataFrame containing the boundaries of all countries in the Americas (both North and South America). Run the next code cell without changes.
###Code
# Load a GeoDataFrame with country boundaries in North/South America, print the first 5 rows
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
americas = world.loc[world['continent'].isin(['North America', 'South America'])]
americas.head()
###Output
_____no_output_____
###Markdown
Use the next code cell to create a single plot that shows both (1) the country boundaries in the `americas` GeoDataFrame and (2) all of the points in the `birds_gdf` GeoDataFrame. Don't worry about any special styling here; just create a preliminary plot to quickly check that all of the data was loaded correctly. In particular, you don't have to color-code the points to distinguish the different birds, or to distinguish starting points from ending points. We'll do that in the next part of the exercise.
###Code
# Your code here
ax = americas.plot(figsize=(10,10), color='white', linestyle=':', edgecolor='gray')
birds.plot(ax=ax, markersize=10)
# Get credit for your work after you have created a map
q_2.check()
###Output
_____no_output_____
###Markdown
3) Where does each bird start and end its journey? (Part 1) Now we are ready to look more closely at each bird's path. Run the next code cell to create two GeoDataFrames: - `path_gdf` contains LineString objects that show the path of each bird. It uses the `LineString()` method to create a LineString object from a list of Point objects. - `start_gdf` contains the starting point for each bird.
###Code
# GeoDataFrame showing path for each bird
path_df = birds.groupby("tag-local-identifier")['geometry'].apply(list).apply(lambda x: LineString(x)).reset_index()
path_gdf = gpd.GeoDataFrame(path_df, geometry=path_df.geometry)
path_gdf.crs = {'init' :'epsg:4326'}
# GeoDataFrame showing starting point for each bird
start_df = birds.groupby("tag-local-identifier")['geometry'].apply(list).apply(lambda x: x[0]).reset_index()
start_gdf = gpd.GeoDataFrame(start_df, geometry=start_df.geometry)
start_gdf.crs = {'init' :'epsg:4326'}
# Show first five rows of GeoDataFrame
start_gdf.head()
###Output
c:\Users\User\anaconda3\lib\site-packages\pyproj\crs\crs.py:130: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
in_crs_string = _prepare_from_proj_string(in_crs_string)
c:\Users\User\anaconda3\lib\site-packages\pyproj\crs\crs.py:130: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
in_crs_string = _prepare_from_proj_string(in_crs_string)
###Markdown
Use the next code cell to create a GeoDataFrame `end_gdf` containing the final location of each bird. - The format should be identical to that of `start_gdf`, with two columns ("tag-local-identifier" and "geometry"), where the "geometry" column contains Point objects. - Set the CRS of `end_gdf` to `{'init': 'epsg:4326'}`.
###Code
# Your code here
end_df = birds.groupby("tag-local-identifier")['geometry'].apply(list).apply(lambda x: x[-1]).reset_index()
end_gdf = gpd.GeoDataFrame(end_df, geometry=end_df.geometry)
end_gdf.crs = {'init': 'epsg:4326'}
# Check your answer
q_3.check()
###Output
c:\Users\User\anaconda3\lib\site-packages\pyproj\crs\crs.py:130: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
in_crs_string = _prepare_from_proj_string(in_crs_string)
###Markdown
4) Where does each bird start and end its journey? (Part 2) Use the GeoDataFrames from the question above (`path_gdf`, `start_gdf`, and `end_gdf`) to visualize the paths of all birds on a single map. You may also want to use the `americas` GeoDataFrame.
###Code
# Your code here
ax = americas.plot(figsize=(10, 10), color='white', linestyle=':', edgecolor='gray')
start_gdf.plot(ax=ax, color='red', markersize=30)
path_gdf.plot(ax=ax, cmap='tab20b', linestyle='-', linewidth=1, zorder=1)
end_gdf.plot(ax=ax, color='black', markersize=30)
# Uncomment to see a hint
# Get credit for your work after you have created a map
q_4.check()
###Output
_____no_output_____
###Markdown
5) Where are the protected areas in South America? (Part 1) It looks like all of the birds end up somewhere in South America. But do they go to protected areas? In the next code cell, create a GeoDataFrame `protected_areas` containing the locations of all of the protected areas in South America. The corresponding shapefile is located at the file path `protected_filepath`.
###Code
# Path of the shapefile to load
protected_filepath = "data/SAPA_Aug2019-shapefile/SAPA_Aug2019-shapefile/SAPA_Aug2019-shapefile-polygons.shp"
# Your code here
protected_areas = gpd.read_file(protected_filepath)
# Check your answer
q_5.check()
###Output
_____no_output_____
###Markdown
6) Where are the protected areas in South America? (Part 2) Create a plot that uses the `protected_areas` GeoDataFrame to show the locations of the protected areas in South America. (_You'll notice that some protected areas are on land, while others are in marine waters._)
###Code
# Country boundaries in South America
south_america = americas.loc[americas['continent']=='South America']
# Your code here: plot protected areas in South America
ax = south_america.plot(figsize=(10,10), color='white', edgecolor='gray')
protected_areas.plot(ax=ax, alpha=0.4)
# Uncomment to see a hint
# Get credit for your work after you have created a map
q_6.check()
###Output
_____no_output_____
###Markdown
7) What percentage of South America is protected? You're interested in determining what percentage of South America is protected, so that you know how much of South America is suitable for the birds. As a first step, calculate the total area of all protected lands in South America (not including marine area). To do this, use the "REP_AREA" and "REP_M_AREA" columns, which contain the total area and the total marine area, respectively, in square kilometers. Run the code cell below without changes.
###Code
P_Area = sum(protected_areas['REP_AREA']-protected_areas['REP_M_AREA'])
print("South America has {} square kilometers of protected areas.".format(P_Area))
###Output
South America has 5396761.9116883585 square kilometers of protected areas.
###Markdown
Then, to finish the calculation, you'll use the `south_america` GeoDataFrame.
###Code
south_america.head()
###Output
_____no_output_____
###Markdown
Calculate the total area of South America by following these steps: - Calculate the area of each country using the `area` attribute of each polygon (with EPSG 3035 as the CRS), and add up the results. The calculated areas will be in square meters. - Convert your answer to square kilometers.
###Code
# Your code here: Calculate the total area of South America (in square kilometers)
totalArea = sum(south_america.geometry.to_crs(epsg=3035).area) / 10**6
# Check your answer
q_7.check()
###Output
_____no_output_____
###Markdown
Run the code cell below to calculate the percentage of South America that is protected.
###Code
# What percentage of South America is protected?
percentage_protected = P_Area/totalArea
print('Approximately {}% of South America is protected.'.format(round(percentage_protected*100, 2)))
###Output
Approximately 30.39% of South America is protected.
###Markdown
8) Where are the birds in South America? So, are the birds in protected areas? Create a plot that shows the locations of all of the birds that were discovered in South America. Also show the locations of all of the protected areas in South America. To exclude protected areas that are purely marine (with no land component), use the "MARINE" column (and plot only the rows in `protected_areas[protected_areas['MARINE']!='2']`, instead of all of the rows in the `protected_areas` GeoDataFrame).
###Code
# Your code here
ax = south_america.plot(figsize=(10,10), color='white', edgecolor='gray')
protected_areas[protected_areas['MARINE']!='2'].plot(ax=ax, alpha=0.4, zorder=1)
birds[birds.geometry.y < 0].plot(ax=ax, color='red', alpha=0.6, markersize=10, zorder=2)
# Get credit for your work after you have created a map
q_8.check()
###Output
_____no_output_____ |
DSA/math/myPow.ipynb | ###Markdown
Implement pow(x, n), which calculates x raised to the power n (x^n). Example 1: Input: 2.00000, 10 Output: 1024.00000. Example 2: Input: 2.10000, 3 Output: 9.26100. Example 3: Input: 2.00000, -2 Output: 0.25000 Explanation: 2^-2 = 1/2^2 = 1/4 = 0.25. Note: -100.0 < x < 100.0; n is a 32-bit signed integer, within the range [−2^31, 2^31 − 1]
###Code
class Solution:
def myPow(self, x, n):
'''
:param x: float
:param n: int
:return: float
'''
if not n:
return 1
if n < 0:
return 1 / self.myPow(x, -n)
if n % 2:
return x * self.myPow(x, n-1)
        # n is even: square the base and halve the exponent (integer division keeps n an int)
        return self.myPow(x*x, n // 2)
# test
x = 2.10000
n = 3
print(Solution().myPow(x, n))
class Solution:
def myPow(self, x, n):
'''
:param x: float
:param n: int
:return: float
'''
return x ** n
# test
x = 2.10000
n = 3
print(Solution().myPow(x, n))
###Output
9.261000000000001
|
.ipynb_checkpoints/tasksmachineLearningandStats-checkpoint.ipynb | ###Markdown
Tasks Assignment Name : Sinead Frawley ID : G00376349 Jupyter notebook for researching, developing and documenting the assessment task set for the GMIT module Machine Learning and Statistics. *Task 1* This task is to write a function called sqrt2 that calculates and prints to the screen the square root of 2 to 100 decimal places. Research on Calculation Method The method taken to calculate the square root of two is the **Newton square root method**. Newtonian Optimization $$0 = f(x_0) + f'(x_0)(x_1 -x_0)$$ $$x_1 - x_0 = - \frac{f(x_0)}{f'(x_0)}$$ $$x_1 = x_0 -\frac{f(x_0)}{f'(x_0)} $$ "*Newtonian optimization is one of the basic ideas in optimization where function to be optimized is evaluated at a random point. Afterwards, this point is shifted in the negative direction of gradient until convergence.*"[[1]](https://medium.com/@sddkal/newton-square-root-method-in-python-270853e9185d) We want $$a = x^2$$ For $$f(x) = x^2 - a$$ the derivative is $$f'(x) = 2x$$ so $$\frac{f(x)}{f'(x)} = \frac{x^2 -a}{2x} = \frac{x -\frac{a}{x}}{2}$$ Since $$x_{n+1} - x_n = -\frac{f(x_n)}{f'(x_n)}$$ $$x_{n+1} = x_n -\frac{x_n - \frac{a}{x_n}}{2}$$ $$x_{n+1} = \frac{x_n + \frac{a}{x_n}}{2}$$ A classic algorithm that illustrates many of these concerns is "Newton's" method to compute square roots $x =\sqrt{a}$ for $a > 0$, i.e. to solve $x^2 = a$. The algorithm starts with some guess $x_1 > 0$ and computes the sequence of improved guesses [[2]](https://math.mit.edu/~stevenj/18.335/newton-sqrt.pdf ) $$x_{n+1} = \frac{1}{2}(x_{n} + \frac{a}{x_{n}})$$.
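To make the iteration concrete, here are the first few steps for $a = 2$ starting from the guess $x_1 = 2$ (digits rounded; this is the same update the code below performs):
$$x_1 = 2,\quad x_2 = \tfrac{1}{2}\left(2 + \tfrac{2}{2}\right) = 1.5,\quad x_3 = \tfrac{1}{2}\left(1.5 + \tfrac{2}{1.5}\right) = 1.41\overline{6},\quad x_4 \approx 1.4142157$$
Each step roughly doubles the number of correct digits ($\sqrt{2} \approx 1.4142136$).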
###Code
def sqrt2( number_iters = 500):
a = float(2) # number to get square root of
for i in range(number_iters): # iteration number
a = 0.5 * (a + 2 / a) # update
print("{:.100f}".format(a))
sqrt2(2)
###Output
1.4166666666666665186369300499791279435157775878906250000000000000000000000000000000000000000000000000
###Markdown
The code from above is based on the function newton_method in [[1]](https://medium.com/@sddkal/newton-square-root-method-in-python-270853e9185d) *Task 2* This task is on the Chi-squared test for independence, a statistical hypothesis test like a t-test. It is used to analyse whether two categorical variables are independent. The Wikipedia article gives the table below as an example [[7]](https://en.wikipedia.org/wiki/Chi-squared_test), stating the Chi-squared value based on it is approximately 24.6. I used scipy.stats to verify this value and calculated the associated p value. Research on Chi-Squared Tests The chi-square test is often used to assess the significance (if any) of the differences among k different groups. The null and alternate hypotheses of the test are generally written as: H0: There is no significant difference between two or more groups. HA: There exists at least one significant difference between two or more groups. The chi-square test statistic, denoted $\chi^2$, is defined as the following: [[3]](https://aaronschlegel.me/chi-square-test-independence-contingency-tables.html) $$\chi^2=\sum_{i=1}^r\sum_{j=1}^k\frac{(O_{ij} -E_{ij})^2}{E_{ij}}$$ Where $O_{ij}$ is the i-th observed frequency in the j-th group and $E_{ij}$ is the corresponding expected frequency. The expected frequency can be calculated using a common statistical analysis. The expected frequency is typically denoted $E_{cr}$, where c is the column index and r is the row index. Stated more formally, the expected frequency is defined as: $$E_{cr}= \frac{\left(\sum_{i=1}^{n_r}r_i\right)\left(\sum_{i=1}^{n_c}c_i\right)}{n}$$ Where n is the total sample size and $n_r$, $n_c$ are the number of cells in the row and column, respectively. The expected frequency is calculated for each 'cell' in the given array. Analysis of data using the Chi-Squared Test From the data in [[7]](https://en.wikipedia.org/wiki/Chi-squared_test) I have created the chi-squared test calculation below. The two hypotheses are: 1. Area and type of worker are independent. 2. Area and type of worker are not independent.
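As one worked instance of the expected-frequency formula, using the row/column totals in the table below (White collar total 349, area A total 150, grand total 650):
$$E_{\text{White collar},\,A} = \frac{349 \times 150}{650} \approx 80.54$$
which matches the first entry of the expected table computed further down.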
###Code
import pandas as pd
data = {'A':[90, 30, 30, 150], 'B':[60, 50, 40, 150], 'C':[104, 51, 45, 200],
'D':[95, 20, 35, 150], 'Total':[349, 151, 150, 650]}
df = pd.DataFrame(data,index=['White collar', 'Blue collar', 'No Collar', 'Total'])
df
obs = df.iloc[0:3, 0:4]
obs
###Output
_____no_output_____
###Markdown
Expected Results Table Calculate the "Expected Value" for each entry: multiply each row total by each column total and divide by the overall total:
###Code
df_exp = df.copy()
for i in range(3):
for j in range(4):
df_exp.iloc[i,j] = df_exp.iloc[-1,j]*df_exp.iloc[i,-1]/df_exp.iloc[-1,-1]
j += 1
df_exp = df_exp.drop(['Total'], axis=1).drop(['Total'], axis=0)
df_exp.round(2)
###Output
_____no_output_____
###Markdown
Partial Chi-squared value Results Table. Subtract expected from observed, square it, then divide by expected: in other words, use the formula $\frac{(O-E)^2}{E}$ where O = Observed (actual) value and E = Expected value
###Code
df_chi = (obs.subtract(df_exp)**2)/df_exp
df_chi
###Output
_____no_output_____
###Markdown
Now add up those calculated values:
###Code
chi2_value = df_chi.sum().sum()
chi2_value.round(2)
###Output
_____no_output_____
###Markdown
Python chi square program
###Code
from scipy.stats import chi2_contingency
import numpy as np
obs = np.array([[90, 60, 104,95], [30, 50, 51,20],[30,40,45,35]])
chi2_contingency(obs)
chi2_stat, p_val, dof, ex = chi2_contingency(obs)
print("===Chi2 Stat===")
print(chi2_stat)
print("\n")
print("===Degrees of Freedom===")
print(dof)
print("\n")
print("===P-Value===")
print(p_val)
print("\n")
print("===Contingency Table===")
print(ex)
###Output
===Chi2 Stat===
24.5712028585826
===Degrees of Freedom===
6
===P-Value===
0.0004098425861096696
===Contingency Table===
[[ 80.53846154 80.53846154 107.38461538 80.53846154]
[ 34.84615385 34.84615385 46.46153846 34.84615385]
[ 34.61538462 34.61538462 46.15384615 34.61538462]]
###Markdown
Calculate Critical Value
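The degrees of freedom used below come from the table dimensions (3 worker categories by 4 areas); sketching the arithmetic:
$$df = (r-1)(c-1) = (3-1)(4-1) = 6, \qquad \chi^2_{0.95,\,6} \approx 12.59$$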
###Code
from scipy.stats import chi2
crit_value = chi2.ppf(q=0.95, df=6)
crit_value.round(2)
###Output
_____no_output_____
###Markdown
Analytics of calculations The calculated Chi-squared value of 24.57 is higher than the critical value of 12.59 for a 5% significance level and the degrees of freedom in the sampled data. As a result we can **reject the null hypothesis that the categories are independent of each other** **Task 3** Standard Deviation With standard deviation you can get a handle on whether your data are close to the average or spread out over a wide range. For example, if a teacher wants to determine if the grades in one of his/her classes seem fair for all students, or if there is a great disparity, he/she can use standard deviation. To do that, he/she can find the average of the grades in that class and then calculate the standard deviation. In general, a low standard deviation means that the data is very closely related to the average, thus very reliable, and a high standard deviation means that there is a large variance between the data and the statistical average, thus not as reliable[[4]](https://towardsdatascience.com/using-standard-deviation-in-python-77872c32ba9b) Population Standard Deviation $$\sigma = \sqrt{\frac{\sum(X_i - \mu)^2}{N}}$$ $\sigma$ = population standard deviation, $\sum$ = sum of, $X_i$ = each value in the population, $\mu$ = population mean, N = number of values in the population. This is the standard deviation equation **Numpy** [[5]](https://towardsdatascience.com/why-computing-standard-deviation-in-pandas-and-numpy-yields-different-results-5b475e02d112) uses by default (see the short sketch below). Sample Standard Deviation When data is collected it is actually quite rare that we work with populations. It is more likely that we will be working with samples of populations rather than whole populations themselves, so it is better to use the sample standard deviation equation: $$s = \sqrt{\frac{\sum(X_i - \bar{x})^2}{N - 1}}$$ $s$ = sample standard deviation, $\sum$ = sum of, $X_i$ = each value in the sample, $\bar{x}$ = sample mean, N = number of values in the sample. Difference between population and sample standard deviation The difference is in the denominator of the equation. In the sample standard deviation we divide by N - 1 instead of N as when computing the population standard deviation. The reason for this is that in statistics, in order to get an unbiased estimator for the population standard deviation when calculating it from a sample, we should be using (N-1). This is called one degree of freedom; we subtract 1 in order to get an unbiased estimator.[[6]](https://towardsdatascience.com/why-computing-standard-deviation-in-pandas-and-numpy-yields-different-results-5b475e02d112) So is the sample standard deviation better to use? N-1 should be used in order to get the unbiased estimator, and this is usually the case as we are mostly dealing with samples, not entire populations. This is why pandas' default standard deviation is computed using one degree of freedom. This may, however, not always be the case, so be sure what your data is before you use one or the other. Code samples to prove the case for sample standard deviation Simulate population data I created a dataset using a normal distribution, containing only x values to simplify things. It has N = 1,000,000 points, its mean is 0.0, and its standard deviation is 1.0
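A minimal sketch of the NumPy default mentioned above (the data values here are purely illustrative): `np.std` divides by N unless you pass `ddof=1`.
```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(np.std(x))          # population standard deviation, divides by N
print(np.std(x, ddof=1))  # sample standard deviation, divides by N - 1
```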
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
mu, sigma = 0, 1 # mean and standard deviation
s = np.random.normal(mu, sigma, 1000000)
count, bins, ignored = plt.hist(s, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')
plt.show()
###Output
_____no_output_____
###Markdown
Calculate the population standard deviation of the entire simulated dataset
###Code
np.sqrt(np.sum((s - np.mean(s))**2)/len(s))
###Output
_____no_output_____
###Markdown
Dealing with a sample First create a subset of the original data with 10 data points
###Code
np.random.shuffle(s)
a = s[0:9]
count, bins, ignored = plt.hist(a, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')
plt.show()
###Output
_____no_output_____
###Markdown
This shows that the sample was way too small, as its distribution looks nothing like the population distribution
###Code
np.sqrt(np.sum((a - np.mean(a))**2)/(len(a) -1) )
###Output
_____no_output_____
###Markdown
Dataset of 100 data points
###Code
#sample of 100 data points
b = s[0:99]
count, bins, ignored = plt.hist(b, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')
plt.show()
###Output
_____no_output_____
###Markdown
The 100 data point distribution is more similar to the population distribution
###Code
# sample stardard deviation
np.sqrt(np.sum((b - np.mean(b))**2)/(len(b) -1) )
###Output
_____no_output_____
###Markdown
Dataset with 100,000 data points
###Code
#sample of ~100,000 data points
c = s[0:99999]
count, bins, ignored = plt.hist(c, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')
plt.show()
# sample stardard deviation
np.sqrt(np.sum((c - np.mean(c))**2)/(len(c) -1 ) )
###Output
_____no_output_____
###Markdown
Analysis of results - Most of the time the sample standard deviation used by the MS Excel STDEV.S function does appear to produce a less biased standard deviation, but it is not without bias and on occasion can provide a less accurate estimate of the standard deviation. - If the sample is very small, like the 10 data point sample, it can give inaccurate results, as a 10 point distribution can be nowhere near the distribution of the population, as in the analysis above. - As both the population size and the sampled proportion of the population increase, the accuracy of the standard deviation calculated from the sample improves, and the closer together the STDEV.S and STDEV.P function results become. Task 4 Use scikit-learn to apply k-means clustering to Fisher's famous Iris data set The iris dataset The features present in the dataset are: - Sepal Width - Sepal Length - Petal Width - Petal Length Clustering is an unsupervised learning method that allows us to group a set of objects based on similar characteristics. In general, it can help you find meaningful structure among your data, group similar data together and discover underlying patterns. One of the most common clustering methods is the K-means algorithm. The goal of this algorithm is to partition the data into sets such that the total sum of squared distances from each point to the mean point of its cluster is minimized.[[6]](https://medium.com/@belen.sanchez27/predicting-iris-flower-species-with-k-means-clustering-in-python-f6e46806aaee) K-means works through the following iterative process (a minimal sketch of these steps follows below):[[6]](https://medium.com/@belen.sanchez27/predicting-iris-flower-species-with-k-means-clustering-in-python-f6e46806aaee) 1. Pick a value for k (the number of clusters to create) 2. Initialize k 'centroids' (starting points) in your data 3. Create your clusters. Assign each point to the nearest centroid. 4. Make your clusters better. Move each centroid to the center of its cluster. 5. Repeat steps 3-4 until your centroids converge. Iris Dataset The Iris Dataset consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). It has four features from each sample: the length and width of the sepals and petals.
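To make the five steps above concrete, here is a minimal NumPy sketch (an illustration only; the notebook itself uses scikit-learn's KMeans below, and the sketch assumes X is a 2-D NumPy array and that no cluster ever becomes empty):
```python
import numpy as np

def simple_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # step 2: initialise k centroids by picking k distinct points from the data
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # step 3: assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 4: move each centroid to the mean of its cluster
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # step 5: stop once the centroids no longer move
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```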
###Code
# imports required for this part of the project
from sklearn import datasets
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import seaborn as sns
###Output
_____no_output_____
###Markdown
Explore the Data set
###Code
iris = datasets.load_iris()
df = pd.DataFrame(
iris['data'], columns=iris['feature_names']
).assign(Species=iris['target_names'][iris['target']])
df.head()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df
###Output
_____no_output_____
###Markdown
Get data types in the dataset
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
Pair plot of the actual species in the dataset
###Code
# build a labelled dataframe so PairGrid can colour the points by the actual species
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names).assign(species=iris.target_names[iris.target])
g = sns.PairGrid(iris_df, hue="species")
g.map_diag(sns.histplot)
g.fig.suptitle("Actual", y=1.08)
g.map_offdiag(sns.scatterplot)
g.add_legend()
###Output
_____no_output_____
###Markdown
Elbow curve to test for the optimal number of clusters To get the right number of clusters for K-means we loop over a range of cluster counts (1 to 9 in the code below) and check the score for each. The elbow method is used to visualise that. Got the code for this from [[10]](https://predictivehacks.com/k-means-elbow-method-code-for-python/)
###Code
distortions = []
K = range(1,10)
for k in K:
kmeanModel = KMeans(n_clusters=k)
kmeanModel.fit(df)
distortions.append(kmeanModel.inertia_)
plt.figure(figsize=(16,8))
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, 3 is the optimal number of clusters, where the score starts to level off, so we fit and check the clustering with 3 clusters. Implement K-means clustering with K=3
###Code
kmeans = KMeans(n_clusters=3)
predict = kmeans.fit_predict(iris.data)
kmeans.cluster_centers_
df['cluster'] = kmeans.labels_
#Frequency distribution of cluster assignments
iris_outcome = pd.crosstab(index=df["cluster"], # Make a crosstab
columns="count") # Name the count column
iris_outcome
g = sns.PairGrid(df, hue="cluster")
g.map_diag(sns.histplot)
g.fig.suptitle("Predicted", y=1.08)
g.map_offdiag(sns.scatterplot)
g.add_legend()
###Output
_____no_output_____ |
analysis/.ipynb_checkpoints/census data-checkpoint.ipynb | ###Markdown
Pull census data for the neighborhoods in Seattle. Use this link to find tables: https://api.census.gov/data/2018/acs/acs5/variables.html
###Code
import pandas as pd
import censusdata
import csv
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import scipy
from scipy import stats
sample = censusdata.search('acs5', 2018,'concept', 'household income')
print(sample[0])
sample = censusdata.search('acs5', 2018,'concept', 'population')
print(sample[:5])
states = censusdata.geographies(censusdata.censusgeo([('state', '*')]), 'acs5', 2018)
print(states['Washington'])
counties = censusdata.geographies(censusdata.censusgeo([('state', '53'), ('county', '*')]), 'acs5', 2018)
print(counties['King County, Washington'])
###Output
Summary level: 050, state:53> county:033
###Markdown
Collect Population Data for King County
###Code
data = censusdata.download('acs5', 2018,
censusdata.censusgeo([('state', '53'),
('county', '033'),
('tract', '*')]),
['B01003_001E'])
df = data
df = df.reset_index()
df = df.rename(columns={"index": "tract", "B01003_001E":"pop"})
df.head()
#convert object type to string
df['tract']= df['tract'].astype(str)
#Split out unneeded info
df[['tract']] = df['tract'].str.split(',').str[0] #Get the first value
df[['tract']] = df['tract'].str.split(' ').str[2] #Remove the words so only tract number remains
#convert object type to float
df['tract']= df['tract'].astype(float)
#There may be missing values listed as -666666. Delete those (the column was renamed to 'pop' above).
df = df[df['pop'] >= 0]
df.head()
df = df.sort_values(by=['tract'])
df.head()
df.to_csv('pop-by-tract.csv', mode = 'w', index=False)
###Output
_____no_output_____
###Markdown
Collect income data for King County
###Code
data = censusdata.download('acs5', 2018,
censusdata.censusgeo([('state', '53'),
('county', '033'),
('tract', '*')]),
['B19013_001E'])
data.head()
data['tract']=data.index
df = data
#convert object type to string
df['tract']= df['tract'].astype(str)
#Split out unneeded info
df[['tract']] = df['tract'].str.split(',').str[0] #Get the first value
df[['tract']] = df['tract'].str.split(' ').str[2] #Remove the words so only tract number remains
#convert object type to float
df['tract']= df['tract'].astype(float)
#There may be missing values listed as -666666. Delete those.
df = df[df['B19013_001E'] >= 0]
# Rename the income column so that later cells can refer to it as 'household_income'
df = df.rename(columns={'B19013_001E': 'household_income'})
df.head()
df.to_csv('seattle-census-tract-acs5-2018.csv', mode = 'w', index=False)
#Open the full tract file if needed
df = pd.read_csv('seattle-census-tract-acs5-2018.csv',encoding='utf-8')
###Output
_____no_output_____
###Markdown
Regression for income and LFLs per population for all of Seattle. This merges the population and LFL count data for the tracts with the income data
###Code
from sklearn.linear_model import LinearRegression
#Open the file if needed
dfinc = pd.read_csv('seattle-census-tract-acs5-2018.csv',encoding='utf-8')
lflvtract = pd.read_csv('census-tracts-lfl-counts.csv',encoding='utf-8')
pop = pd.read_csv('pop-by-tract.csv',encoding='utf-8')
#Merge with the population dataframe
dfregr = pd.merge(dfinc, pop, on='tract', how='inner')
dfregr.head()
#Merge with the lfl number dataset
dfregr = pd.merge(dfregr, lflvtract, on='tract', how='inner')
dfregr.head()
dfregr['lflperpop'] = dfregr['numlfls']/dfregr['pop']
#In case there are any negative values
dfregr = dfregr[dfregr['household_income'] >= 0]
ax = sns.scatterplot(x="household_income", y="lflperpop", data=dfregr)
ax.set(ylim=(0, 0.005))
x = dfregr[['household_income']]
y = dfregr[['lflperpop']]
model = LinearRegression().fit(x, y)
r_sq = model.score(x, y)
print('coefficient of determination:', r_sq)
print('intercept:', model.intercept_)
print('slope:', model.coef_)
###Output
coefficient of determination: 0.22804078863785715
intercept: [-0.00018629]
slope: [[7.64326001e-09]]
###Markdown
Check for Normality
###Code
#Create list of values to check - standardized number of lfls
lflperpop = dfregr['lflperpop']
plt.hist(lflperpop)
plt.show()
#Create list of values to check - income
income = dfregr['household_income']
plt.hist(income)
plt.show()
###Output
_____no_output_____
###Markdown
Because the number of lfls is not normally distributed, use Spearman's correlation coefficient: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3576830/:~:text=In%20summary%2C%20correlation%20coefficients%20are,otherwise%20use%20Spearman's%20correlation%20coefficient.
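A quick quantitative complement to the histograms above (a sketch, not part of the original analysis): the Shapiro-Wilk test from `scipy.stats` gives a p-value for the null hypothesis that a sample is normally distributed.
```python
# Sketch: Shapiro-Wilk normality test on the two series plotted above.
# Small p-values suggest the data are not normally distributed.
for name, values in [('lflperpop', lflperpop), ('household_income', income)]:
    stat, p = stats.shapiro(values)
    print(name, 'Shapiro-Wilk statistic: %.3f' % stat, 'p-value: %.4f' % p)
```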
###Code
#Spearmans Correlation for all variables in the table
dfregr.corr(method='spearman')
###Output
_____no_output_____
###Markdown
household_income vs lflperpop is what we're interested in. 0.5 is considered moderate correlation, so there is a moderate positive correlation. Use SciPy to calculate Spearman's coefficient with a p-value for the trend
###Code
#The %.3f' sets the number of decimal places
coef, p = stats.spearmanr(dfregr['household_income'],dfregr['lflperpop'])
print('Spearmans correlation coefficient: %.3f' % coef,' p-value: %.3f' % p)
###Output
Spearmans correlation coefficient: 0.502 p-value: 0.000
###Markdown
Collect diversity numbers
###Code
sample = censusdata.search('acs5', 2018,'concept', 'families')
print(sample[0])
# This gets data for total, white, african american, american indian, asian, hawaiian, other, and three combination categories.
#https://api.census.gov/data/2016/acs/acs5/groups/B02001.html
divdata = censusdata.download('acs5', 2016,
censusdata.censusgeo([('state', '53'),
('county', '033'),
('tract', '*')]),
['B02001_001E',
'B02001_002E',
'B02001_003E',
'B02001_004E',
'B02001_005E',
'B02001_006E',
'B02001_007E',
'B02001_008E',
'B02001_009E',
'B02001_010E'])
divdata.head()
#Create a new dataframe in case something gets messed up
df = divdata
#Rename columns and parse index
df['tract']=df.index
#convert object type to string
df['tract']= df['tract'].astype(str)
#Split out unneeded info
df[['tract']] = df['tract'].str.split(',').str[0] #Get the first value
df[['tract']] = df['tract'].str.split(' ').str[2] #Remove the words so only tract number remains
#convert object type to float
df['tract']= df['tract'].astype(float)
df.rename(columns={'B02001_001E':'tot','B02001_002E':'wh',
'B02001_003E':'afam',
'B02001_004E':'amin',
'B02001_005E':'as',
'B02001_006E':'hw',
'B02001_007E':'ot',
'B02001_008E':'combo1',
'B02001_009E':'combo2',
'B02001_010E':'combo3'}, inplace=True)
#Drop any rows that have a zero for tot column
df.drop(df[df['tot'] == 0].index, inplace = True)
df.head()
df = df.reset_index()
df = df.drop(columns=['index'])
df.head()
###Output
_____no_output_____
###Markdown
Calculate Simpson's index (the Gini index is 1 - Simpson's). Simpson's is the sum of the squared category proportions: a lower value is more diverse. For Gini-Simpson, a higher value is more diverse.
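Written out, the quantity the function below computes for each tract is:
$$D = \sum_{i} p_i^2, \qquad \text{Gini–Simpson} = 1 - D$$
where $p_i$ is the share of the tract total `tot` in racial group $i$ (each group column divided by `tot`).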
###Code
def simpsons(row):
return (row['wh'] / row['tot'])**2 + (row['afam'] / row['tot'])**2 + (row['amin'] / row['tot'])**2 + (row['as'] / row['tot'])**2 + (row['hw'] / row['tot'])**2 + (row['ot'] / row['tot'])**2 + (row['combo1'] / row['tot'])**2 + (row['combo2'] / row['tot'])**2 + (row['combo3'] / row['tot'])**2
df['simpsons'] = df.apply(simpsons, axis=1)
df['gini-simp'] = 1 - df['simpsons']
df.head()
#Save file as csv
df.to_csv('diversity-seattle-census-tract-acs5-2018.csv', mode = 'w', index=False)
###Output
_____no_output_____
###Markdown
Education Levels
###Code
data = censusdata.download('acs5', 2018,
censusdata.censusgeo([('state', '53'),
('county', '033'),
('tract', '*')]),
['B06009_005E', 'B06009_006E'])
data['tract']=data.index
df = data
df['collegePlus'] = df['B06009_005E'] + df['B06009_006E']
df.drop(columns=['B06009_005E','B06009_006E'], inplace=True)
df.head()
#convert object type to string
df['tract']= df['tract'].astype(str)
#Split out unneeded info
df[['tract']] = df['tract'].str.split(',').str[0] #Get the first value
df[['tract']] = df['tract'].str.split(' ').str[2] #Remove the words so only tract number remains
#convert object type to float
df['tract']= df['tract'].astype(float)
#There may be missing values listed as -666666. Delete those.
df = df[df['collegePlus'] >= 0]
df.head()
df = df.reset_index()
df = df.drop(columns=['index'])
df.head()
df.to_csv('seattle-census-tract-acs5-2018-edu.csv', mode = 'w', index=False)
#Open the edu tract file if needed
#dfedu = df
dfedu = pd.read_csv('seattle-census-tract-acs5-2018-edu.csv',encoding='utf-8')
###Output
_____no_output_____
###Markdown
Calculate correlation between number of lfls per pop and education level
###Code
#Open census tract csv with the counts of lfls
#I used QGIS to make a csv with the number of lfls per area for each census tract
lflvtract = pd.read_csv('census-tracts-lfl-counts.csv',encoding='utf-8')
#Merge with the lfl number dataset
dfedulfl = pd.merge(lflvtract, dfedu, on='tract', how='inner')
dfedulfl.head()
#Open the population data
pop = pd.read_csv('pop-by-tract.csv',encoding='utf-8')
#Merge with the population dataframe
dfedulfl = pd.merge(dfedulfl, pop, on='tract', how='inner')
dfedulfl.head()
dfedulfl['lflperpop']=dfedulfl['numlfls']/dfedulfl['pop']
ax = sns.scatterplot(x="collegePlus", y="lflperpop", data=dfedulfl)
ax.set(ylim=(0, 0.005))
#Create list of values to check
edu = dfedulfl['collegePlus']
plt.hist(edu)
plt.show()
#Create list of values to check - income
lflperpop = dfedulfl['lflperpop']
plt.hist(lflperpop)
plt.show()
###Output
_____no_output_____
###Markdown
Neither is likely to be normal, so use Spearman's
###Code
dfedulfl.corr(method='spearman')
###Output
_____no_output_____
###Markdown
A value of 0.15 indicates a very weak positive correlation
###Code
#Use SciPy
#The %.3f' sets the number of decimal places
coef, p = stats.spearmanr(dfedulfl['collegePlus'],dfedulfl['lflperpop'])
print('Spearmans correlation coefficient: %.3f' % coef,' p-value: %.3f' % p)
###Output
Spearmans correlation coefficient: 0.147 p-value: 0.088
###Markdown
Calculate correlation between number of lfls per pop and diversity
###Code
#Open census tract csv with the counts of lfls
#I used QGIS to make a csv with the number of lfls per area for each census tract
lflvtract = pd.read_csv('census-tracts-lfl-counts.csv',encoding='utf-8')
#Open the diversity tract file if needed
dfdiv = pd.read_csv('diversity-seattle-census-tract-acs5-2018.csv',encoding='utf-8')
#Open the population data if needed
pop = pd.read_csv('pop-by-tract.csv',encoding='utf-8')
#Merge with the lfl number dataset
dfdivlfl = pd.merge(lflvtract, dfdiv, on='tract', how='inner')
dfdivlfl.head()
#Merge with the population dataframe
dfdivlfl = pd.merge(dfdivlfl, pop, on='tract', how='inner')
dfdivlfl.head()
#Calculate lfls per pop.
dfdivlfl['lflperpop']=dfdivlfl['numlfls']/dfdivlfl['pop']
#Take only the useful columns (here tot is the total number included in diversity calculation)
dfdivlfl = dfdivlfl[['tract','tot','gini-simp','pop','lflperpop']].copy()
#Create list of values to check
div = dfdivlfl['gini-simp']
plt.hist(div)
plt.show()
#Not normal so use Spearman's
dfdivlfl.corr(method='spearman')
###Output
_____no_output_____
###Markdown
gini-simp is moderately and negatively correlated (-0.5) with lfls per population by tract, suggesting that as diversity decreases, the number of lfls increases
###Code
#Use SciPy
#The %.3f' sets the number of decimal places
coef, p = stats.spearmanr(dfdivlfl['gini-simp'],dfdivlfl['lflperpop'])
print('Spearmans correlation coefficient: %.3f' % coef,' p-value: %.3f' % p)
###Output
Spearmans correlation coefficient: -0.505 p-value: 0.000
###Markdown
Calculate average median income for the study neighborhoods I manually listed which census tracts match the neighborhood boundaries (Seattle's community reporting areas). I also got the population by census tract from here: https://www.census.gov/geographies/reference-files/2010/geo/2010-centers-population.html (divide the six-digit code by 100 to get the tract number)
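For example, a hypothetical six-digit code of 011402 divided by 100 gives tract 114.02.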
###Code
#Open census tract csv
hoodtracts = pd.read_csv('censustracts-neighborhoods.csv',encoding='utf-8')
#Open the median data if it's not already open
medians = pd.read_csv('seattle-census-tract-acs5-2018.csv',encoding='utf-8')
#Merge the dataframes
dflfl = pd.merge(hoodtracts, medians, on='tract', how='inner')
dflfl.head()
#Open the population data
pop = pd.read_csv('pop-by-tract.csv',encoding='utf-8')
#Merge with the population dataframe
dflfl = pd.merge(dflfl, pop, on='tract', how='inner')
dflfl.head()
#Open census tract csv with the counts of lfls
#I used QGIS to make a csv with the number of lfls per area for each census tract. However, sq km column may be incorrect!!!
lflvtract = pd.read_csv('census-tracts-lfl-counts.csv',encoding='utf-8')
#Merge with the lfl number dataset
dflfl = pd.merge(lflvtract, dflfl, on='tract', how='inner')
dflfl.head()
dflfl['lflperpop'] = dflfl['numlfls']/dflfl['pop']
dflfl.head()
#Save file as csv
dflfl.to_csv('census-compiled-data.csv', mode = 'w', index=False)
ax = sns.scatterplot(x="household_income", y="lflperpop", data=dflfl)
ax.set(ylim=(0, 0.005))
###Output
_____no_output_____
###Markdown
I used this site for the linear regression: https://realpython.com/linear-regression-in-python/python-packages-for-linear-regressionThe following is a regression only for neighborhoods in the study!
###Code
from sklearn.linear_model import LinearRegression
#Open the file if needed
dflfl = pd.read_csv('census-compiled-data.csv',encoding='utf-8')
x = dflfl[['household_income']]
y = dflfl[['lflperpop']]
model = LinearRegression().fit(x, y)
r_sq = model.score(x, y)
print('coefficient of determination:', r_sq)
print('intercept:', model.intercept_)
print('slope:', model.coef_)
###Output
coefficient of determination: 0.12185556404062636
intercept: [9.16328386e-05]
slope: [[7.57601413e-09]]
###Markdown
Examine lfls and neighborhoods
###Code
#Open the census compiled data if needed
dflfl = pd.read_csv('census-compiled-data.csv',encoding='utf-8')
dflflhood = dflfl.groupby('neighborhood').agg({'household_income': ['mean'], 'pop': ['sum'], 'numlfls':['sum']})
# rename columns
dflflhood.columns = ['avg-median-income', 'pop', 'numlfls']
# reset index to get grouped columns back
dflflhood = dflflhood.reset_index()
dflflhood.head(8)
###Output
_____no_output_____
###Markdown
Diversity
###Code
#If necessary. Open census tract csv with the counts of lfls
#I used QGIS to make a csv with the number of lfls per area for each census tract
lflvtract = pd.read_csv('censustracts-neighborhoods.csv',encoding='utf-8')
#open up the diversity data
dfdiv = pd.read_csv('diversity-seattle-census-tract-acs5-2018.csv',encoding='utf-8')
#Merge with the lfl number dataset
dflfldiv = pd.merge(dfdiv, lflvtract, on='tract', how='inner')
dflfldiv = dflfldiv.drop(columns=['simpsons','gini-simp'])
dflfldiv.head()
dflfldiv = dflfldiv.groupby('neighborhood').agg({'tot': ['sum'], 'wh': ['sum'], 'afam':['sum'], 'amin':['sum'], 'as':['sum'], 'hw':['sum'], 'ot':['sum'], 'combo1':['sum'], 'combo2':['sum'], 'combo3':['sum']})
# rename columns
dflfldiv.columns = ['tot', 'wh', 'afam', 'amin', 'as', 'hw', 'ot', 'combo1', 'combo2', 'combo3']
# reset index to get grouped columns back
dflfldiv = dflfldiv.reset_index()
dflfldiv.head(8)
#Calculate Simpsons and gini
def simpsons(row):
return (row['wh'] / row['tot'])**2 + (row['afam'] / row['tot'])**2 + (row['amin'] / row['tot'])**2 + (row['as'] / row['tot'])**2 + (row['hw'] / row['tot'])**2 + (row['ot'] / row['tot'])**2 + (row['combo1'] / row['tot'])**2 + (row['combo2'] / row['tot'])**2 + (row['combo3'] / row['tot'])**2
dflfldiv['simpsons'] = dflfldiv.apply(simpsons, axis=1)
dflfldiv['gini-simp'] = 1 - dflfldiv['simpsons']
dflfldiv.head(8)
#Save file as csv
dflfldiv.to_csv('census-lflhood-compiled-data.csv', mode = 'w', index=False)
#Merge diversity and income tables
dfsocioecon = pd.merge(dflflhood, dflfldiv, on='neighborhood', how='inner')
dfsocioecon.head()
#Save file
dfsocioecon.to_csv('socioeconomic-by-neighborhood.csv', mode = 'w', index=False)
###Output
_____no_output_____ |
documentation/notebooks/clean_notebooks_for_examples.ipynb | ###Markdown
Needed functionalityRemoving ex from end of coding lineRemoving ex commentRemove comment remove_nextRemove cells with:" To avoid duplication - do not run ex"and"display(Image(path.join(notebook_dir,'images','sc_model_graph_1.png'))) ex"
###Code
type(test.cells[0])
new_base = {k:v for k,v in test.iteritems() if k != 'cells'}
new_base['cells'] = []
for cell in test.cells:
if remove_cell_with(cell['source'],'# To avoid duplication'):
new_lines = []
for line in iterlines(cell['source']):
new_line = remove_ex_comment(line)
new_line = remove_ex(new_line)
new_line = remove_line_with(new_line, '#remove_next')
new_lines.append(new_line)
new_source = combine_lines(new_lines)
new_cell = {k:v for k,v in cell.iteritems() if k != u'source'}
new_cell[u'source'] = new_source
new_cell = nbformat.NotebookNode(new_cell)
new_base['cells'].append(new_cell)
type(new_base['cells'][0])
new = nbformat.NotebookNode(new_base)
nbformat.write(new,'/home/carl/Documents/Code/Projects/PyscesToolbox/documentation/notebooks/SymCA_test.ipynb')
def remove_ex_comment(line):
if line.startswith('#') and '#ex' in line:
return ''
else:
return line
def remove_line_with(line, pattern):
if pattern in line:
return ''
else:
return line
def remove_ex(line):
return line.replace('#ex','')
def remove_cell_with(cell, pattern):
if pattern in cell:
return None
else:
return cell
# new_lines = []
# for line in iterlines(test.cells[0]['source']):
# new_line = remove_ex_comment(line)
# new_line = remove_ex(new_line)
# new_lines.append(new_line)
def iterlines(text):
lines = []
current_line = ''
for char in text:
current_line = current_line + char
if char == '\n':
lines.append(current_line)
current_line = ''
lines.append(current_line)
return lines
def combine_lines(lines):
new = ''
for each in lines:
new = new + each
return new
x = 'asd\nasdwqeqwe\nasdwewrwqr\nwiioasdoisad'
print x.split('\n')
x[-1]
x
[line + '\n' for line in x.split('\n')]
def iterlines(text):
"""
"""
lines = text.split('\n')
if text[-1] == '\n':
lines = [line + '\n' for line in lines[:-1]]
return lines
else:
lines = [line + '\n' for line in lines[:-1]] + [lines[-1]]
return lines
iterlines(x)
from sys import path
from os import path
path.splitext('/home/carl/Documents/Code/Projects/PyscesToolbox/documentation/notebooks/SymCA_test.ipynb')
###Output
_____no_output_____ |
test/.ipynb_checkpoints/bert-checkpoint.ipynb | ###Markdown
Data processingUse a BERT model to generate embeddings for the short-text data
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot  # import the pyplot submodule explicitly so the matplotlib.pyplot.* calls below work
# Read the title data
title_data = pd.read_csv("../data/title.csv")
###Output
_____no_output_____
###Markdown
Initial deduplication- simple dropna/drop_duplicates- keep only rows shorter than 512 characters- Weibo data cleaning: remove @xxx mentions and similar noise- use a regular expression to strip non-Chinese characters- finally keep only the non-empty cleaned rows
###Code
print("original data shape:",title_data.shape)
# Initial deduplication
title_data.dropna(axis=0,how='any')
unique_title_data = title_data.dropna(axis=0,how='any').drop_duplicates(subset='text')
print("drop_duplicates data shape:",unique_title_data.shape)
#unique_title_data["text"].str.len().hist(bins=200)
# Filter out unusually long rows
short_unique_title_data = unique_title_data[unique_title_data['text'].str.len()<512]
print("short drop_duplicates data shape:",short_unique_title_data.shape)
short_unique_title_data["text"].str.len().hist(bins=512)
# for idx in short_unique_title_data["text"].str.len().sort_values().index.tolist()[-100:]:
# print(idx,short_unique_title_data["text"][idx])
from multiprocessing import Pool
from pandarallel import pandarallel
import os, time, random
from weibo_preprocess_toolkit import WeiboPreprocess
from joblib import Parallel, delayed
def text_preprocess(data):
    # str.replace returns a new string, so return it (the original version discarded the result)
    return data.replace(' ', '')
# Weibo data preprocessing
def data_preprocess(data):
preprocess = WeiboPreprocess()
start = time.time()
clean_data = data['text'].parallel_map(preprocess.clean)
end = time.time()
print('Task runs %0.2f seconds.' %(end - start))
return clean_data
if __name__=='__main__':
pandarallel.initialize()
psutd = short_unique_title_data.copy()
psutd['clean'] = data_preprocess(psutd)
# psutd['clean'] = psutd['clean'].parallel_map(replace(' ',''))
# Regular expression: keep only Chinese characters
%%time
import re
# \u4e00-\u9fa5 is the CJK Unified Ideographs range; whitespace (\s) and all other characters are stripped
psutd['clean'] = [re.sub("[^\u4e00-\u9fa5]",'',ctext) for ctext in psutd['clean'].tolist()]
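#quick sanity check of the regex on a made-up string (not from the dataset):
#re.sub("[^\u4e00-\u9fa5]", '', 'abc今天123天气!') returns '今天天气'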
psutd = psutd[psutd['clean'].str.len()>1]
psutd = psutd.drop_duplicates(subset='clean')
print("clean data shape:",psutd.shape)
###Output
_____no_output_____
###Markdown
Next is the SimHash text-deduplication step> Because this part is slow to compute in pure Python, it was not taken further
###Code
# Multi-process jieba word segmentation
%%time
import jieba
jieba.enable_parallel(8)
seg_list = [jieba.lcut(text) for text in psutd['clean']]
# Compute SimHash values
%%time
from simhash import Simhash as SH
SH(seg_list[0]).value
simhash_list = [SH(seg) for seg in seg_list]
###Output
_____no_output_____
###Markdown
Computing the pairwise SimHash distance matrix in Python is far too slow; a C++/CUDA implementation may be considered later
###Code
# Far too slow
# %%time
# uset={}
# sim_list_len = len(simhash_list)
# flag_list = [range(sim_list_len)]
# pair_list = []
# for idx in range(sim_list_len):
# for pair in range(idx,sim_list_len):
# if (simhash_list[idx].distance(simhash_list[pair])<5):
# pair_list.append((idx,pair))
###Output
_____no_output_____
###Markdown
Data analysis- numeric feature analysis- BERT embedding generation- disjoint-set analysis & similarity-matrix analysis
###Code
psutd['clean'].str.len().hist(bins=512)
print(psutd['clean'].str.len().mean())
print(psutd['clean'].str.len().median())
print(psutd.iloc[0])
# for idx in psutd["clean"].str.len().sort_values().index.tolist()[-10:]:
# print(idx,psutd["clean"][idx])
###Output
_____no_output_____
###Markdown
Load bert-as-serviceThe Google BERT-base model is chosen here, and the service is started from the command line
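A typical launch command looks something like `bert-serving-start -model_dir <path-to-google-bert-base> -num_worker=2 -max_seq_len=64`; the model path, worker count, and sequence length shown here are assumptions for illustration rather than values taken from this notebook.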
###Code
import tensorflow as tf
print("TF version is",tf.__version__)
from bert_serving.client import BertClient
bc = BertClient()
# print(bc.encode(['First do it', '今天天气不错', 'then do it better']))
###Output
_____no_output_____
###Markdown
Test the BERT model
###Code
# bert test
from sklearn.metrics.pairwise import pairwise_distances as PD, euclidean_distances as ED  # ED was missing from the original import but is used below
vec = bc.encode(['外交部召见美国驻华大使提出严正交涉敦促美方纠正错误停止利用涉港问题干涉中国内政中国外交部副部''今天天气不错今天天气不错今天天气不错今天天气不错今天天气不错今天天气不错','今天天气不错','亚洲球员在多重看这在上之后武磊二个赛季遭遇前所级区发机会 但是前 轮联赛颗粒无收 当然 这也与西甲联赛一属性有关 历史上能够真正立足西甲联赛的亚洲球员屈指可数 目前西甲联赛也只有中日韩 名球员效力 其馀三大亚洲球星更是只能委身西乙联赛 △目前 从西班牙职业联赛的亚洲球员看 日本球员还是占据主流 名国脚都在西甲或是西乙联赛效力 从球员基数看 日本球员整体适应能力确实了得 良好的职业态度和扎实的基本功 让他们在西班牙联','亚洲球员在西甲分量有多重在上赛季初试身手之后武磊在留洋西甲的第二个赛季遭遇前所未有的困难西班牙人队深陷降级区武磊虽然获得不少首发机会 但是前 轮联赛颗粒无收 当然 这也与西甲联赛一属性有关 历史上能够真正立足西甲联赛的亚洲球员屈指可数 目前西甲联赛也只有中日韩 名球员效力 其馀三大亚洲球星更是只能委身西乙联赛 △目前 从西班牙职业联赛的亚洲球员看 日本球员还是占据主流 名国脚都在西甲或是西乙联赛效力 从球员基数看 日本球员整体适应能力确实了得 良好的职业态度和扎实的基本功 让他们在西班牙联赛获'])
print(vec)
print(PD(vec,vec,n_jobs=8))
matplotlib.pyplot.matshow(ED(vec,vec))
###Output
_____no_output_____
###Markdown
Call the bert-service to compute the embeddings; this may take 10 minutes or even longer> With 300K rows and max_seq_len=64, two P40 GPUs take roughly 10 minutes
###Code
%%time
clean_vec = bc.encode(psutd["clean"].tolist())
print(clean_vec.shape)
###Output
_____no_output_____
###Markdown
Save the vectors as binary data
###Code
with open("../data/hk_nodes",'wb') as bin_output:
clean_vec.tofile(bin_output)
###Output
_____no_output_____
###Markdown
Run a two-dimensional PCA on all of the vectors
###Code
from sklearn.decomposition import PCA
pca = PCA(2)
clean_pca2 = pca.fit_transform(clean_vec)
matplotlib.pyplot.scatter(clean_pca2[:,0],clean_pca2[:,1],alpha=0.2)
###Output
_____no_output_____
###Markdown
Call the adjacency-edge computation program, which also produces the disjoint sets
###Code
%%time
node_num = clean_vec.shape[0]
node_dim = clean_vec.shape[1]
threshold = 18.0
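#Assumption: the external 'linker' binary from the Kluster repo reads the binary node file and writes out
#candidate edges whose pairwise distance is below the threshold, along with the set.txt file read further below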
os.system(' '.join(["cd ../Kluster; cd bin; ./linker ../data/hk_nodes ../data/hk_edges.csv",str(node_num),str(node_dim),str(threshold)]))
hk_edge = pd.read_csv("../Kluster/data/hk_edges.csv")
hk_edge
hk_edge['distance'].hist(bins=200)
###Output
_____no_output_____
###Markdown
Analyze how similar the vectors are
###Code
%%time
edm = PD(clean_vec[:1000],clean_vec[:1000],n_jobs=8)
print(edm)
matplotlib.pyplot.matshow(edm)
###Output
_____no_output_____
###Markdown
Read & analyze the disjoint-set results
###Code
def read_set(path):
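    # set.txt format (inferred from this parser): a header line, then one line per set in the form "set_id:node1,node2,...,"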
disjoint_set={}
with open(path,'r') as set_file:
set_lines = set_file.readlines()
set_lines = set_lines[1:]
for line in set_lines:
line = line[:-2]
set_id = int(line.split(':')[0])
disjoint_set[set_id]=[int(node) for node in line.split(':')[1].split(',')]
return disjoint_set
%%time
disjoint_set = read_set("../data/set.txt")
len(disjoint_set)
###Output
_____no_output_____
###Markdown
Find the largest disjoint set
###Code
%%time
disjoint_set = read_set("../Kluster/data/set.txt")
biggest_set = 0
bs_len = 1
for set_id,node_list in disjoint_set.items():
if len(node_list)>bs_len:
biggest_set = set_id
bs_len = len(node_list)
print(bs_len)
print(disjoint_set[biggest_set])
###Output
_____no_output_____
###Markdown
Take the items in the largest disjoint set and analyze their similarity
###Code
set_vec = [clean_vec[vec_id] for vec_id in disjoint_set[biggest_set]]
edm = ED(set_vec[:1000],set_vec[:1000])
print(edm)
matplotlib.pyplot.matshow(edm)
###Output
_____no_output_____
###Markdown
Compare with the Double 11 (Singles' Day) data
###Code
csv_data = pd.read_csv("../data/double11_1020_1120.csv")
csv_data.fillna(0.0,inplace=True)
csv_data *= 100.0
csv_data_u = csv_data.round(5).drop_duplicates(subset=csv_data.columns[1:],keep='first')
# csv_data_u = csv_data_u.sample(n=65536, frac=None, replace=False, weights=None, random_state=None, axis=0)
csv_data_u_cut = csv_data_u.iloc[:,1:]
csv_data_u_float = csv_data_u_cut.astype('float32')
print(csv_data_u_float.shape)
# for x in csv_data_u_float.duplicated():
# if (x is True):
# print("duplication exist")
# break
# Binary array
with open("../data/eco_nodes",'wb') as bin_output:
csv_data_u_float.values.tofile(bin_output)
# with open("../Kluster/data/eco_nodes.csv",'w') as csv_output:
# csv_data_u.to_csv(csv_output)
%%time
node_num_c = csv_data_u_float.shape[0]
node_dim_c = csv_data_u_float.shape[1]
threshold_c = 0.1
os.system(' '.join(["cd ..; cd bin; ./linker ../data/eco_nodes ../data/eco_edges.csv",str(node_num_c),str(node_dim_c),str(threshold_c)]))
eco_edge = pd.read_csv("../Kluster/data/eco_edges.csv")
eco_edge['distance'].hist(bins=200)
%%time
disjoint_set = read_set("../Kluster/data/set.txt")
biggest_set = 0
bs_len = 1
for set_id,node_list in disjoint_set.items():
if len(node_list)>bs_len:
biggest_set = set_id
bs_len = len(node_list)
print(bs_len)
print(disjoint_set[biggest_set])
set_vec = [csv_data_u_float.iloc[vec_id] for vec_id in disjoint_set[biggest_set]]
edm = ED(set_vec[:1000],set_vec[:1000])
print(edm)
matplotlib.pyplot.matshow(edm)
###Output
_____no_output_____ |
network-operations/02-training-and-deployment.ipynb | ###Markdown
Network Operations Demo - Train, Test, and DeployThis project demonstrates how to build an automated machine-learning (ML) pipeline for predicting network outages based on network-device telemetry. This notebook is the second part (out of 2) of the demo. This part demonstrates how to train, test and deploy a model and use offline and real-time data from the feature store.**In this notebook:*** **Create a Feature Vector that consists of data joined from the three feature sets you created*** **Create an offline dataset from the feature vector to feed the ML training process*** **Run automated ML Pipeline which train, test, and deploy the model*** **Test the deployed real-time serving function**When you finish this notebook, you should have a running network-device failure prediction system. Get and init the MLRun project
###Code
import os
import numpy as np
import mlrun
import mlrun.feature_store as fstore
# Create the project
project = mlrun.get_or_create_project('network-operations', "./", user_project=True)
###Output
> 2022-02-10 13:56:05,467 [info] loaded project network-operations from MLRun DB
###Markdown
Create a new Feature VectorThe goal is to create a single dataset that contains data from the static devices dataset, the device metrics, and the labels.You'll define a **Feature Vector** and specify the desired features. When the vector is retrieved the feature store automatically and correctly joins the data from the different feature sets based on the entity (index) keys and the timestamp values.To define and save the `device_features` feature vector
###Code
# Define the `device_features` Feature Vector
fv = fstore.FeatureVector('device_features',
features=['device_metrics.*', 'static.*'],
label_feature='device_labels.is_error')
# Save the Feature Vector to MLRun's feature store DB
fv.save()
###Output
_____no_output_____
###Markdown
Get an offline dataset for the feature vectorOnce you have defined the feature vector and ingested some data, you can request the feature store to create an offline dataset, e.g. a snapshot of the data between the dates you want, available to be loaded as parquet or csv files or as a pandas Dataframe. You can later reference the created offline dataset via a special artifact URI (`fv.uri`).**Make sure you run this AFTER the feature set data was ingested (using batch or real-time)**
###Code
# Request (get or create) the offline dataset from the feature store and save to a parquet target
dataset_ref = fstore.get_offline_features(fv, target=mlrun.datastore.targets.ParquetTarget())
# Get the generated offline dataset as a pandas DataFrame
dataset = dataset_ref.to_dataframe()
print("\nTraining set shape:", dataset.shape)
dataset.head()
# Verify that the dataset contains proper labels (must have both True & False values)
unique = dataset.is_error.unique()
assert len(unique) == 2, "dataset does not contain both label values. ingest a bigger dataset"
###Output
_____no_output_____
###Markdown
Model training and deployment using the feature vectorNow that the dataset is ready for training, you need to define the model training, testing and deployment process.Build an automated ML pipeline that uses pre-baked serverless training, testing and serving functions from [MLRun's functions marketplace](https://www.mlrun.org/marketplace/). The pipeline has three steps:* Train a model using data from the feature vector you created and save it to the model registry* Run model test/evaluation with a portion of the data* Deploy a real-time serving function that uses the newly trained model, and enrich/impute the features with data from the real-time feature vector You can see the [**workflow code**](./src/workflow.py). You can run this workflow locally, in a CI/CD framework, or over Kubeflow. In practice you can create different workflows for development and production.The workflow/pipeline can be executed using the MLRun SDK (`project.run()` method) or using CLI commands (`mlrun project`), and can run directly from the source repo (GIT). See details in MLRun [**Projects and Automation documentation**](https://docs.mlrun.org/en/latest/projects/overview.html).When you run the workflow you can set arguments and destination for the different artifacts. The pipeline progress is shown in the notebook. Alternatively you can check the progress, logs, artifacts, etc. in the MLRun UI.If you want to run the same using CLI, type:```python mlrun project -n myproj -r ./src/workflow.py .```
###Code
model_name = "netops"
# run the workflow
run_id = project.run(
workflow_path="./src/workflow.py",
arguments={"vector_uri": fv.uri, "model_name": model_name},
watch=True)
###Output
_____no_output_____
###Markdown
Test the Live Model EndpointTo test the live model endpoint, first grab a list of IDs from the static feature set it produced. Then use these IDs and send them through a loop to the live endpoint. Grab IDs from the static devices table
###Code
# Load the static feature set
fset = fstore.get_feature_set('static')
# Get a dataframe from the feature set
devices = fset.to_dataframe().reset_index()['device'].values
print('Devices sample:', devices[:4])
###Output
Devices sample: ['5366904160408' '4366213343194' '5300819942199' '7294710463338']
###Markdown
Send a sample ID to the model endpoint
###Code
serving_fn = project.get_function('serving')
serving_fn.invoke(path=f'/v2/models/{model_name}/infer', body={'inputs': [[devices[0]]]})
###Output
> 2022-02-10 14:05:05,843 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-admin-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
###Markdown
Continously send IDs to the model
###Code
import random
import time
MSGS_TO_SEND = 5
IDS_PER_MSG = 2
TIMEOUT_BETWEEN_SENDS = 10
for i in range(MSGS_TO_SEND):
ids_for_prediction = [[random.choice(devices)] for i in range(IDS_PER_MSG)]
resp = serving_fn.invoke(path=f'/v2/models/{model_name}/infer', body={'inputs': ids_for_prediction})
print('Sent:', ids_for_prediction)
print('Response:', resp)
print('Predictions:', list(zip(ids_for_prediction, resp['outputs'])))
time.sleep(TIMEOUT_BETWEEN_SENDS)
###Output
> 2022-02-10 14:05:07,665 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-admin-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['5366904160408'], ['9089787659244']]
Response: {'id': '5fab5012-31e1-4c93-99f7-910c5e10df93', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['5366904160408'], False), (['9089787659244'], False)]
> 2022-02-10 14:05:17,709 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-admin-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['7190575638226'], ['6456808756864']]
Response: {'id': '81ac75f1-00d7-412d-8eab-8d8e5d3719b1', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['7190575638226'], False), (['6456808756864'], False)]
> 2022-02-10 14:05:27,755 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-admin-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['6796821902797'], ['5300819942199']]
Response: {'id': '882ffa3f-cb1c-40ae-915b-8e80810c3a49', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['6796821902797'], False), (['5300819942199'], False)]
> 2022-02-10 14:05:37,801 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-admin-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['5366904160408'], ['2133702096887']]
Response: {'id': '43b458d7-8e59-4598-95b3-f0c350a20ca3', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['5366904160408'], False), (['2133702096887'], False)]
> 2022-02-10 14:05:47,851 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-admin-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['5021644823083'], ['7453742823111']]
Response: {'id': 'cdde1873-b9c0-483a-8771-4eae5f2a480d', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['5021644823083'], False), (['7453742823111'], False)]
###Markdown
Network Operations Demo - Train, Test, and DeployThis project demonstrates how to build an automated machine-learning (ML) pipeline for predicting network outages based on network-device telemetry. This notebook is the second part (out of 2) of the demo. This part demonstrates how to train, test and deploy a model and use offline and real-time data from the feature store.**In this notebook:*** **Create a Feature Vector that consists of data joined from the three feature sets you created*** **Create an offline dataset from the feature vector to feed the ML training process*** **Run automated ML Pipeline which train, test, and deploy the model*** **Test the deployed real-time serving function**When you finish this notebook, you should have a running network-device failure prediction system. Get and init the MLRun project
###Code
import os
import numpy as np
import mlrun
import mlrun.feature_store as fstore
# Create the project
project = mlrun.get_or_create_project('network-operations', "./", user_project=True)
###Output
> 2022-03-21 15:29:29,500 [info] loaded project network-operations from MLRun DB
###Markdown
Create a new Feature VectorThe goal is to create a single dataset that contains data from the static devices dataset, the device metrics, and the labels.You'll define a **Feature Vector** and specify the desired features. When the vector is retrieved the feature store automatically and correctly joins the data from the different feature sets based on the entity (index) keys and the timestamp values.To define and save the `device_features` feature vector
###Code
# Define the `device_features` Feature Vector
fv = fstore.FeatureVector('device_features',
features=['device_metrics.*', 'static.*'],
label_feature='device_labels.is_error')
# Save the Feature Vector to MLRun's feature store DB
fv.save()
###Output
_____no_output_____
###Markdown
Get an offline dataset for the feature vectorOnce you have defined the feature vector and ingested some data, you can request the feature store to create an offline dataset, e.g. a snapshot of the data between the dates you want, available to be loaded as parquet or csv files or as a pandas Dataframe. You can later reference the created offline dataset via a special artifact URI (`fv.uri`).**Make sure you run this AFTER the feature set data was ingested (using batch or real-time)**
###Code
# Request (get or create) the offline dataset from the feature store and save to a parquet target
dataset_ref = fstore.get_offline_features(fv, target=mlrun.datastore.targets.ParquetTarget())
# Get the generated offline dataset as a pandas DataFrame
dataset = dataset_ref.to_dataframe()
print("\nTraining set shape:", dataset.shape)
dataset.head()
# Verify that the dataset contains proper labels (must have both True & False values)
unique = dataset.is_error.unique()
assert len(unique) == 2, "dataset does not contain both label values. ingest a bigger dataset"
###Output
_____no_output_____
###Markdown
Model training and deployment using the feature vectorNow that the dataset is ready for training, you need to define the model training, testing and deployment process.Build an automated ML pipeline that uses pre-baked serverless training, testing and serving functions from [MLRun's functions marketplace](https://www.mlrun.org/marketplace/). The pipeline has three steps:* Train a model using data from the feature vector you created and save it to the model registry* Run model test/evaluation with a portion of the data* Deploy a real-time serving function that uses the newly trained model, and enrich/impute the features with data from the real-time feature vector You can see the [**workflow code**](./src/workflow.py). You can run this workflow locally, in a CI/CD framework, or over Kubeflow. In practice you can create different workflows for development and production.The workflow/pipeline can be executed using the MLRun SDK (`project.run()` method) or using CLI commands (`mlrun project`), and can run directly from the source repo (GIT). See details in MLRun [**Projects and Automation documentation**](https://docs.mlrun.org/en/latest/projects/overview.html).When you run the workflow you can set arguments and destination for the different artifacts. The pipeline progress is shown in the notebook. Alternatively you can check the progress, logs, artifacts, etc. in the MLRun UI.If you want to run the same using CLI, type:```python mlrun project -n myproj -r ./src/workflow.py .```
###Code
model_name = "netops"
# run the workflow
run_id = project.run(
workflow_path="./src/workflow.py",
arguments={"vector_uri": fv.uri, "model_name": model_name},
watch=True)
###Output
_____no_output_____
###Markdown
Test the Live Model EndpointTo test the live model endpoint, first grab a list of IDs from the static feature set it produced. Then use these IDs and send them through a loop to the live endpoint. Grab IDs from the static devices table
###Code
# Load the static feature set
fset = fstore.get_feature_set('static')
# Get a dataframe from the feature set
devices = fset.to_dataframe().reset_index()['device'].values
print('Devices sample:', devices[:4])
###Output
Devices sample: ['0517655866735' '2371451183019' '4541123052929' '8664701497798']
###Markdown
Send a sample ID to the model endpoint
###Code
serving_fn = project.get_function('serving')
serving_fn.invoke(path=f'/v2/models/{model_name}/infer', body={'inputs': [[devices[0]]]})
###Output
> 2022-03-21 15:30:18,714 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-orz-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
###Markdown
Continously send IDs to the model
###Code
import random
import time
MSGS_TO_SEND = 5
IDS_PER_MSG = 2
TIMEOUT_BETWEEN_SENDS = 10
for i in range(MSGS_TO_SEND):
ids_for_prediction = [[random.choice(devices)] for i in range(IDS_PER_MSG)]
resp = serving_fn.invoke(path=f'/v2/models/{model_name}/infer', body={'inputs': ids_for_prediction})
print('Sent:', ids_for_prediction)
print('Response:', resp)
print('Predictions:', list(zip(ids_for_prediction, resp['outputs'])))
time.sleep(TIMEOUT_BETWEEN_SENDS)
###Output
> 2022-03-21 15:30:18,776 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-orz-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['4171338169441'], ['7478932327231']]
Response: {'id': '5dd52d95-5e81-48f2-999c-3c56142f750f', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['4171338169441'], False), (['7478932327231'], False)]
> 2022-03-21 15:30:28,833 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-orz-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['4582020878559'], ['9861411520919']]
Response: {'id': '2583ce2d-c7b6-4d67-af21-09af55b196ed', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['4582020878559'], False), (['9861411520919'], False)]
> 2022-03-21 15:30:38,885 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-orz-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['8218878184161'], ['8490822166617']]
Response: {'id': '1cf42e41-8a05-45b3-b635-9fba3bb695b3', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['8218878184161'], False), (['8490822166617'], False)]
> 2022-03-21 15:30:48,934 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-orz-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['1761764880917'], ['0444798971455']]
Response: {'id': '7cf7ed9c-0ef8-42dc-b1d6-e268365e44cd', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['1761764880917'], False), (['0444798971455'], False)]
> 2022-03-21 15:30:58,985 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-network-operations-orz-serving.default-tenant.svc.cluster.local:8080/v2/models/netops/infer'}
Sent: [['3891062846664'], ['0321416570712']]
Response: {'id': '1cdf50db-42cd-4d10-b9af-3ed728917a59', 'model_name': 'netops', 'outputs': [False, False]}
Predictions: [(['3891062846664'], False), (['0321416570712'], False)]
|
utils/Getting simple demo data.ipynb | ###Markdown
Processing the open food databse to extract a small number of representative items to use as demonstration for the visualization.
###Code
import pandas as pd
data_dir = "/Users/seddont/Dropbox/Tom/MIDS/W209_work/Tom_project/"
###Output
_____no_output_____
###Markdown
Working from the full database, because the usda_imports_filtered.csv file in the shared drive does not have brand information, which will be useful for displaying.
###Code
# Get sample of the full database to understand what columns we want
smp = pd.read_csv(data_dir+"en.openfoodfacts.org.products.csv", sep = "\t", nrows = 100)
for c in smp.columns:
print(c)
# Specify what columns we need for the demonstration visualizations
demo_cols = ['code', 'creator', 'product_name', 'generic_name', 'quantity',
'brands', 'brands_tags', 'categories', 'categories_tags', 'serving_size',
'energy_100g', 'energyfromfat_100g', 'fat_100g',
'saturatedfat_100g', 'monounsaturatedfat_100g',
'polyunsaturatedfat_100g', 'omega3fat_100g', 'omega6fat_100g',
'omega9fat_100g', 'oleicacid_100g', 'transfat_100g', 'cholesterol_100g',
'carbohydrates_100g', 'sugars_100g', 'sucrose_100g', 'glucose_100g',
'fructose_100g', 'lactose_100g', 'maltose_100g', 'starch_100g',
'fiber_100g', 'proteins_100g', 'salt_100g', 'sodium_100g',
'alcohol_100g', 'vitamina_100g', 'betacarotene_100g', 'vitamind_100g',
'vitamine_100g', 'vitamink_100g', 'vitaminc_100g', 'vitaminb1_100g',
'vitaminb2_100g', 'vitaminpp_100g', 'vitaminb6_100g', 'vitaminb9_100g',
'folates_100g', 'vitaminb12_100g', 'bicarbonate_100g', 'potassium_100g',
'chloride_100g', 'calcium_100g', 'iron_100g', 'fluoride_100g',
'iodine_100g', 'caffeine_100g', 'cocoa_100g',
'ingredients_list']
# Create a list of columns to drop
drop_cols = [c for c in smp.columns if c not in demo_cols]
print(drop_cols)
# Pull in full dataset
df = pd.read_csv(data_dir+"en.openfoodfacts.org.products.csv", sep = "\t")
# Drop unwanted columns
df.drop(drop_cols, axis = 1, inplace = True)
# Take a quick look
df
# Drop all rows that are not from the usda ndb import
df = df[df.creator == "usda-ndb-import"]
df
###Output
_____no_output_____
###Markdown
Now down to a manageable number of rows and columns. Going to explore for a few typical items to use as demo data. Let's take a look at donuts, crackers and cereal -- the three categories used in the paper prototype.
###Code
df[df["product_name"].str.lower().str.contains("baked donut", na = False)]
df[df["product_name"].str.lower().str.contains("cracker", na = False)]
df[df["product_name"].str.lower().str.contains("cereal", na = False)]
###Output
_____no_output_____
###Markdown
Looks like there are plenty of options for these. For demo purposes I want to pick 12 of each with a reasonable range of variation on the key factors of sugar, fat, sodium, protein, so that I can have one plus up to 11 comparison products.
###Code
# reminder on column names remaining
df.columns
###Output
_____no_output_____
###Markdown
Now going to go through and find items that have certain category words in the product name. Then filter these to exclude the word most often confused with each category (e.g. donut-flavored coffee gets picked up under donut).Then going to sort each of these based on the rank of items on key factors like sugar. And for each factor, going to pick items that are at specified percentiles, so we get a wide range on those factors.
###Code
# Words we want to find that indicate product type
cat_words = ["donut", "cracker", "cereal"]
# Some of these generate confusion, so also have an 'exclude' dictionary
# This is pretty crude, but seems ok for generating demo
exclude_dict = {"donut": "coffee",
"cracker": "nut",
"cereal": "bar"}  # patterns are lowercase so they match the lowercased product names below
# What we want to get variation on
pick_factors = ['fat_100g', 'sugars_100g', 'proteins_100g', 'sodium_100g']
# Points we want to pick (percentiles). Can tune this to get more or fewer picks.
pick_percentiles = [0.1, 0.5, 0.9]
# pick_percentiles = [0, 0.25, 0.5, 0.75, 1.0]
demo_picks = []
for cat in cat_words:
# first get all the items containing the cat word
catf = df[df["product_name"].str.lower().str.contains(cat, na = False)]
# then exclude any of these that contain the relevant exclude word
catf = catf[~catf["product_name"].str.lower().str.contains(exclude_dict[cat], na = False)]
# Identify what rank each product is in that category, for each main factor
for p in pick_factors:
catf[p + "_rank"] = catf[p].rank(method = "first")
# Select five products, at quintiles on each
high = catf[p + "_rank"].max()
pick_index = [max(1, round(n * high)) for n in pick_percentiles]
demo_picks.extend(catf[catf[p+"_rank"].isin(pick_index)].code)
demo_df = df[df.code.isin(demo_picks)].copy()  # copy so the new column below is set on this frame, not a view
# Add in category identifier
demo_df["demo_cat"] = "None"
for w in cat_words:
is_cat = demo_df.product_name.str.lower().str.contains(w)
    demo_df.loc[is_cat, "demo_cat"] = w
# Take a look at what we built
demo_df
# Now write it out to disk
outfile = "demo_food_data.csv"
demo_df.to_csv(data_dir+outfile)
###Output
_____no_output_____ |
Code/IPython/bootcamp_advgraphics_seaborn.ipynb | ###Markdown
Graphics using SeabornWe previously have covered how to do some basic graphics using `matplotlib`. In this notebook we introduce a package called `seaborn`. `seaborn` builds on top of `matplotlib` by doing 2 things:1. Gives us access to more types of plots (Note: Every plot created in `seaborn` could be made by `matplotlib`, but you shouldn't have to worry about doing this)2. Sets better defaults for how the plot looks right awayBefore we start, make sure that you have `seaborn` installed. If not, then you can install it by```conda install seaborn```_This notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course [Data Bootcamp](http://databootcamp.nyuecon.com/)._
###Code
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import sys
%matplotlib inline
###Output
_____no_output_____
###Markdown
As per usual, we begin by listing the versions of each package that is used in this notebook.
###Code
# check versions (overkill, but why not?)
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Matplotlib version: ', mpl.__version__)
print('Seaborn version: ', sns.__version__)
###Output
Python version: 3.5.1 |Anaconda 4.0.0 (64-bit)| (default, Feb 16 2016, 09:49:46) [MSC v.1900 64 bit (AMD64)]
Pandas version: 0.18.0
Matplotlib version: 1.5.1
Seaborn version: 0.7.0
###Markdown
DatasetsThere are some classical datasets that get used to demonstrate different types of plots. We will use several of them here.* tips : This dataset has information on waiter tips. Includes information such as total amount of the bill, tip amount, sex of waiter, what day of the week, which meal, and party size.* anscombe: This dataset is a contrived example. It has 4 examples which differ drastically when you look at them, but they have the same correlation, regression coefficient, and $R^2$.* titanic : This dataset has information on each of the passengers who were on the titanic. Includes information such as: sex, age, ticket class, fare paid, whether they were alone, and more.
###Code
tips = sns.load_dataset("tips")
ansc = sns.load_dataset("anscombe")
tita = sns.load_dataset("titanic")
###Output
_____no_output_____
###Markdown
Better DefaultsRecall that in our [previous notebook](bootcamp_graphics.ipynb) we used `plt.style.use` to set styles. We will begin by setting the style to `"classic"`; this sets all of our default settings back to `matplotlib`'s default values.Below we plot male tips on the top axis and female tips on the bottom axis.
###Code
plt.style.use("classic")
def plot_tips():
fig, ax = plt.subplots(2, figsize=(8, 6))
tips[tips["sex"] == "Male"].plot(x="total_bill", y="tip", ax=ax[0], kind="scatter",
color="blue")
tips[tips["sex"] == "Female"].plot(x="total_bill", y="tip", ax=ax[1], kind="scatter",
color="#F52887")
ax[0].set_xlim(0, 60)
ax[1].set_xlim(0, 60)
ax[0].set_ylim(0, 15)
ax[1].set_ylim(0, 15)
ax[0].set_title("Male Tips")
ax[1].set_title("Female Tips")
fig.tight_layout()
plot_tips()
# fig.savefig("/home/chase/Desktop/foo.png")
# sns.set() resets default seaborn settings
sns.set()
plot_tips()
###Output
_____no_output_____
###Markdown
What did you notice about the differences in the settings of the plot?Which do you like better? We like the second better.Investigate other styles and create the same plot as above using a style you like. You can choose from the list in the code below.If you have additional time, visit the [seaborn docs](http://stanford.edu/~mwaskom/software/seaborn/tutorial/aesthetics.htmltemporarily-setting-figure-style) and try changing other default settings.
###Code
plt.style.available
###Output
_____no_output_____
###Markdown
We could do the same for a different style (like `ggplot`)
###Code
sns.axes_style?
sns.set()
plot_tips()
###Output
_____no_output_____
###Markdown
**Exercise**: Find a style you like and recreate the plot above using that style. The Juicy StuffWhile having `seaborn` set sensible defaults is convenient, it isn't a particularly large innovation. We could choose sensible defaults and set them to be our default. The main benefit of `seaborn` is the types of graphs that it gives you access to -- All of which could be done in `matplotlib`, but, instead of 5 lines of code, it would require possibly hundreds of lines of code. Trust us... This is a good thing.We don't have time to cover everything that can be done in `seaborn`, but we suggest having a look at the [gallery](http://stanford.edu/~mwaskom/software/seaborn/examples/index.html) of examples.We will cover:* `kdeplot`* `jointplot`* `violinplot`* `pairplot`* ...
###Code
# Move back to seaborn defaults
sns.set()
###Output
_____no_output_____
###Markdown
kdeplotWhat does kde stand for?kde stands for "kernel density estimation." This is (far far far) beyond the scope of this class, but the basic idea is that this is a smoothed histogram. When we are trying to get information about distributions it sometimes looks nicer than a histogram does.
###Code
fig, ax = plt.subplots()
ax.hist(tips["tip"], bins=25)
ax.set_title("Histogram of tips")
plt.show()
fig, ax = plt.subplots()
sns.kdeplot(tips["tip"], ax=ax)
ax.hist(tips["tip"], bins=25, alpha=0.25, normed=True, label="tip")
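# note: normed=True works for the Matplotlib 1.5.x printed above, but newer Matplotlib versions renamed it to density=True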
ax.legend()
fig.suptitle("Kernel Density with Histogram");
###Output
_____no_output_____
###Markdown
**Exercise**: Create your own kernel density plot using `sns.kdeplot` of `"total_bill"` from the `tips` dataframe
###Code
fig, ax = plt.subplots()
sns.kdeplot(tips.total_bill, ax=ax)
sns.kdeplot(tips.tip, ax=ax)
# ax.hist(tips.total_bill, alpha=0.3, normed=True)
###Output
_____no_output_____
###Markdown
JointplotWe now show what `jointplot` does. It draws a scatter plot of two variables and puts their histogram just outside of the scatter plot. This tells you information about not only the joint distribution, but also the marginals.
###Code
sns.jointplot(x="total_bill", y="tip", data=tips)
###Output
_____no_output_____
###Markdown
We can also plot everything as a kernel density estimate -- Notice the main plot is now a contour map.
###Code
sns.jointplot(x="total_bill", y="tip", data=tips, kind="kde")
###Output
_____no_output_____
###Markdown
Like an contour map **Exercise**: Create your own `jointplot`. Feel free to choose your own x and y data (if you can't decide then use `x=size` and `y=tip`). Interpret the output of the plot.
###Code
sns.jointplot(x="size", y="tip", data=tips,
kind="kde")
sns.jointplot(x="size", y="tip", data=tips)
###Output
_____no_output_____
###Markdown
violinplotSome of the story of this notebook is that distributions matter and how we can show them. Violin plots are similar to a sideways kernel density, and they let us see how a distribution varies across some aspect of the data.
###Code
tita.head()
tips.head()
sns.violinplot?
sns.violinplot(x="fare", y="deck", hue="survived", split=True, data=tita)
# sns.swarmplot(x="class", y="age", hue="sex", data=tita)
sns.boxplot(x="class", y="age", hue="sex", data=tita)
###Output
_____no_output_____
###Markdown
**Exercise**: We might also want to look at the distribution of prices across ticket classes. Make a violin plot of the prices over the different ticket classes.
###Code
sns.swarmplot(x="sex", y="age", hue="survived", data=tita)
###Output
_____no_output_____
###Markdown
Pairplot Pair plots show us two things. They show us the histograms of the variables along the diagonal and then the scatter plot of each pair of variables in the off-diagonal panels.Why might this be useful? It gives us an idea of the correlations across each pair of variables and of their relationships across the variables.
###Code
sns.pairplot(tips[["tip", "total_bill", "size"]], size=2.5)
###Output
_____no_output_____
###Markdown
Below is the same plot, but slightly different. What is different?
###Code
sns.pairplot(tips[["tip", "total_bill", "size"]], size=2.5, diag_kind="kde")
###Output
_____no_output_____
###Markdown
What's different about this plot?Different colors for each meal time, set via the `hue` argument.
###Code
tips.head()
sns.pairplot(tips[["tip", "total_bill", "size", "time"]], size=3.5,
hue="time", diag_kind="kde")
###Output
_____no_output_____
###Markdown
lmplotWe often want to think about running regressions of variables. A statistician named Francis Anscombe came up with four datasets that:* Same mean for $x$ and $y$* Same variance for $x$ and $y$* Same correlation between $x$ and $y$* Same regression coefficient of $x$ on $y$Below we show the scatter plot of the datasets to give you an idea of how different they are.
###Code
fig, ax = plt.subplots(2, 2, figsize=(8, 6))
ansc[ansc["dataset"] == "I"].plot.scatter(x="x", y="y", ax=ax[0, 0])
ansc[ansc["dataset"] == "II"].plot.scatter(x="x", y="y", ax=ax[0, 1])
ansc[ansc["dataset"] == "III"].plot.scatter(x="x", y="y", ax=ax[1, 0])
ansc[ansc["dataset"] == "IV"].plot.scatter(x="x", y="y", ax=ax[1, 1])
ax[0, 0].set_title("Dataset I")
ax[0, 1].set_title("Dataset II")
ax[1, 0].set_title("Dataset III")
ax[1, 1].set_title("Dataset IV")
fig.suptitle("Anscombe's Quartet")
###Output
_____no_output_____
###Markdown
`lmplot` plots the data with the regression coefficient through it.
###Code
sns.lmplot(x="x", y="y", data=ansc, col="dataset", hue="dataset",
col_wrap=2, ci=None)
sns.lmplot(x="total_bill", y="tip", data=tips, hue="sex")
###Output
_____no_output_____
###Markdown
regplot `regplot` also shows the regression line through data points
###Code
sns.regplot(x="total_bill", y="tip", data=tips)
###Output
_____no_output_____ |
PyTorch/NoteBooks/FL/Provisioning.ipynb | ###Markdown
Provisioning Federated learning (FL) Tool FL has been simplified in V3.1 to have a provisioning tool that allows admins to:- Configure FL experiment- Send startup packages to FL clients (password protected zip file)By the end of this notebook you would be able to provision an FL experiment and start the server. Prerequisites- Running this notebook from within clara docker following setup in [readMe.md](../../readMe.md)- Provisioning doesn't require GPUs. ResourcesWe encourage you to watch the free GTC 2021 talks covering Clara Train SDK- [Clara Train 4.0 - 201 Federated Learning [SE3208]](https://gtc21.event.nvidia.com/media/Clara%20Train%204.0%20-%20201%20Federated%20Learning%20%5BSE3208%5D/1_m48t6b3y)- [Federated Learning for Medical AI [S32530]](https://gtc21.event.nvidia.com/media/Federated%20Learning%20for%20Medical%20AI%20%5BS32530%5D/1_z26u15uk) DataSet No dataset is used in this notebook Lets get startedCell below defines functions that we will use throughout the notebook
###Code
def listDirs(newMMARDir):
!ls $newMMARDir
!echo ----config
!ls $newMMARDir/config
!echo ----commands
!ls $newMMARDir/commands
def printFile(filePath,lnSt,lnOffset):
print ("showing ",str(lnOffset)," lines from file ",filePath, "starting at line",str(lnSt))
lnOffset=lnSt+lnOffset
!< $filePath head -n "$lnOffset" | tail -n +"$lnSt"
###Output
_____no_output_____
###Markdown
Provisioning GoalThe goal of provisioning is to easily define your FL experiments so you can give each site a simple package to run as shown below Provisioning ComponentsThe provisioning tool is the first step to configure a FL experiment. This consists of creating: 1. Project yaml file, which defines: project name, participants, server name and other settings2. Authorization json file which defines: groups, roles, rights. 3. Run provisioning tool 1. UI tool to generate project.yaml and authorization.jsonWe have developed a simple html page that would generate the project.yaml and authorization.json files for you.Simply open the html or run cell below to see the page. You would need to:- Change the servername.- Add/remove groups.- Add/remove polices- Add/remove users - Click `Generate artifacts`- Click download or copy / past the files as new yaml and json files
###Code
import IPython
IPython.display.IFrame('./FLprovUI.html',width=850,height=700)
###Output
_____no_output_____
###Markdown
2 Run Provisioning tool For simplicity we have included the project1.yml and project1auth.json files for you to use in this notebook.In order to see their content simply run the cell below
###Code
MMAR_ROOT="/claraDevDay/FL/"
PROV_DIR="provisioning"
PROJ_NAME="project1"
printFile(MMAR_ROOT+PROJ_NAME+".yml",0,50)
print("---------------------")
printFile(MMAR_ROOT+PROJ_NAME+"auth.json",0,200)
###Output
_____no_output_____
###Markdown
2.1 Run provisioning toolCell below show help on how to use the cli for the provisioning tool
###Code
!provision -h
%cd $MMAR_ROOT
!rm -r $PROJ_NAME
%mkdir -p $PROJ_NAME/$PROV_DIR
PROJ_PATH=MMAR_ROOT+PROJ_NAME+"/"
PROV_PATH=PROJ_PATH+PROV_DIR+"/"
%cd $PROJ_PATH
!provision -p $MMAR_ROOT/$PROJ_NAME'.yml' -o $PROV_DIR -t $PROV_PATH/audit.pkl -a $MMAR_ROOT/$PROJ_NAME'auth.json'
###Output
_____no_output_____
###Markdown
3. Send startup kits to participantsIn a real experiment, you would send these packages to each site so they can run them on their own systems. Here we extract and simulate a server, four clients, and several admin users all within this tutorial. The cell above should have printed out a password for each package. Replace the passwords in the cell below with the corresponding passwords printed above.
###Code
%cd $PROV_PATH
server,client1,client2,client3,client4="server","client1","client2","client3","client4"
admin,leadIT,siteResearch,leadITSec="admin","leadIT","siteResearch","leadITSec"
!unzip -oP Gt70p3kYKoIVfM48 server.zip -d ../$server
!unzip -oP E9HCjgF6VBMoALrU client1.zip -d ../$client1
!unzip -oP mXoq4RdhItNuDvPe client2.zip -d ../$client2
!unzip -oP E9HCjgF6VBMoALrU client3.zip -d ../$client3
!unzip -oP E9HCjgF6VBMoALrU client4.zip -d ../$client4
!unzip -oP ecpUmT10J0WDhsKu [email protected] -d ../$admin
!unzip -oP ecpUmT10J0WDhsKu [email protected] -d ../$leadIT
!unzip -oP ecpUmT10J0WDhsKu [email protected] -d ../$siteResearch
!unzip -oP ecpUmT10J0WDhsKu [email protected] -d ../$leadITSec
###Output
_____no_output_____ |
_notebooks/2021-04-30-python_variables_and_data_types.ipynb | ###Markdown
A Quick Tour of Variables and Data Types in Python> This tutorial is the second in a series on introduction to programming using the Python language. These tutorials take a practical coding-based approach, and the best way to learn the material is to execute the code and experiment with the examples. - toc: true- badges: true- comments: true- categories: [python, data-types]- image: images/Python-Data-structures.png- author: Victor Omondi Part 2 of "A Gentle Introduction to Programming with Python" Storing information using variablesComputers are useful for two purposes: storing information and performing operations on stored information. While working with a programming language such as Python, informations is stored in *variables*. You can think of variables are containers for storing data. The data stored within a variable is called it's *value*. It's really easy to create variables in Python.
###Code
my_favorite_color = "black"
my_favorite_color
###Output
_____no_output_____
###Markdown
A variable is created using an *assignment statement*, which begins with the variable's name, followed by the assignment operator `=` (different from the equality comparison operator `==`), followed by the value to be stored within the variable. You can also assign values to multiple variables in a single statement by separating the variable names and values with commas.
###Code
color1, color2, color3 = "red", "green", "blue"
color1
color2
color3
###Output
_____no_output_____
###Markdown
You can assign the same value to multiple variables by chaining multiple assignment operations within a single statement.
###Code
color4 = color5 = color6 = "magenta"
color4
color5
color6
###Output
_____no_output_____
###Markdown
You can change the value stored within a variable simply by assigning a new value to it using another assignment statement. Be careful while reassigning variables: when you assign a new value to the variable, the old value is lost and no longer accessible.
###Code
my_favorite_color = "red"
my_favorite_color
###Output
_____no_output_____
###Markdown
While assigning a new value to a variable, you can also use the previous value of the variable to determine the new value.
###Code
counter = 10
counter = counter + 1
counter
###Output
_____no_output_____
###Markdown
The pattern `var = var op something` (where `op` is an arithmetic operator like `+`, `-`, `*`, `/`) is very commonly used, so Python provides a *shorthand* syntax for it.
###Code
counter = 10
# Same as `counter = counter + 4`
counter += 4
counter
###Output
_____no_output_____
###Markdown
Variable names can be short (`a`, `x`, `y` etc.) or descriptive (`my_favorite_color`, `profit_margin`, `the_3_musketeers` etc.). However, you must follow these rules while naming Python variables:* A variable's name must start with a letter or the underscore character `_`. It cannot start with a number.* A variable name can only contain lowercase or uppercase letters, digits or underscores (`a`-`z`, `A`-`Z`, `0`-`9` and `_`).* Variable names are case-sensitive, i.e. a_variable, A_Variable and A_VARIABLE are all different variables.Here are some valid variable names:
###Code
a_variable = 23
is_today_Saturday = False
my_favorite_car = "Delorean"
the_3_musketeers = ["Athos", "Porthos", "Aramis"]
###Output
_____no_output_____
###Markdown
Let's also try creating some variables with invalid names. Python prints a syntax error if your variable's name is invalid.> **Syntax**: The syntax of a programming language refers to the rules which govern what a valid instruction or *statement* in the language should look like. If a statement does not follow these rules, Python stops execution and informs you that there is a *syntax error*. You can think of syntax as the rules of grammar for a programming language.
###Code
a variable = 23
is_today_$aturday = False
my-favorite-car = "Delorean"
3_musketeers = ["Athos", "Porthos", "Aramis"]
###Output
_____no_output_____
###Markdown
Built-in data types in PythonAny data or information stored within a Python variable has a *type*. The type of data stored within a variable can be checked using the `type` function.
###Code
a_variable
type(a_variable)
is_today_Saturday
type(is_today_Saturday)
my_favorite_car
type(my_favorite_car)
the_3_musketeers
type(the_3_musketeers)
###Output
_____no_output_____
###Markdown
Python has several built-in data types for storing different types of information in variables. Following are some commonly used data types:1. Integer2. Float3. Boolean4. None5. String6. List7. Tuple8. DictionaryInteger, float, boolean, None and string are *primitive data types* because they represent a single value. Other data types like list, tuple and dictionary are often called *data structures* or *containers* because they hold multiple pieces of data together. IntegerIntegers represent positive or negative whole numbers, from negative infinity to infinity. Note that integers should not include decimal points. Integers have the type `int`.
###Code
current_year = 2021
current_year
type(current_year)
###Output
_____no_output_____
###Markdown
Unlike some other programming languages, integers in Python can be arbitrarily large (or small). There's no lowest or highest value for integers, and there's just one `int` type (as opposed to `short`, `int`, `long`, `long long`, `unsigned int` etc. in C/C++/Java).
###Code
a_large_negative_number = -23374038374832934334234317348343
a_large_negative_number
type(a_large_negative_number)
###Output
_____no_output_____
###Markdown
FloatFloats (or floating point numbers) are numbers with a decimal point. There are no limits on the value of a float or the number of digits before or after the decimal point. Floating point numbers have the type `float`.
###Code
pi = 3.141592653589793238
pi
type(pi)
###Output
_____no_output_____
###Markdown
Note that a whole number is treated as a float if it is written with a decimal point, even though the decimal portion of the number is zero.
###Code
a_number = 3.0
a_number
type(a_number)
another_number = 4.
another_number
type(another_number)
###Output
_____no_output_____
###Markdown
Floating point numbers can also be written using the scientific notation with an "e" to indicate the power of 10.
###Code
one_hundredth = 1e-2
one_hundredth
type(one_hundredth)
avogadro_number = 6.02214076e23
avogadro_number
type(avogadro_number)
###Output
_____no_output_____
###Markdown
Floats can be converted into integers and vice versa using the `float` and `int` functions. The operation of converting one type of value into another is called casting.
###Code
float(current_year)
float(a_large_negative_number)
int(pi)
int(avogadro_number)
###Output
_____no_output_____
###Markdown
While performing arithmetic operations, integers are automatically converted to floats if any of the operands is a float. Also, the division operator `/` always returns a float, even if both operands are integers. Use the `//` operator if you want the result of division to be an `int`.
###Code
type(45 * 3.0)
type(45 * 3)
type(10/3)
type(10/2)
type(10//2)
###Output
_____no_output_____
###Markdown
BooleanBooleans represent one of 2 values: `True` and `False`. Booleans have the type `bool`.
###Code
is_today_Sunday = True
is_today_Sunday
type(is_today_Saturday)
###Output
_____no_output_____
###Markdown
Booleans are generally returned as the result of a comparison operation (e.g. `==`, `>=` etc.).
###Code
cost_of_ice_bag = 1.25
is_ice_bag_expensive = cost_of_ice_bag >= 10
is_ice_bag_expensive
type(is_ice_bag_expensive)
###Output
_____no_output_____
###Markdown
Booleans are automatically converted to `int`s when used in arithmetic operations. `True` is converted to `1` and `False` is converted to `0`.
###Code
5 + False
3. + True
###Output
_____no_output_____
###Markdown
Any value in Python can be converted to a Boolean using the `bool` function. Only the following values evaluate to `False` (they are often called *falsy* values):1. The value `False` itself2. The integer `0`3. The float `0.0`4. The empty value `None`5. The empty text `""`6. The empty list `[]`7. The empty tuple `()`8. The empty dictionary `{}`9. The empty set `set()`10. The empty range `range(0)`Everything else evaluates to `True` (a value that evaluates to `True` is often called a *truthy* value).
###Code
bool(False)
bool(0)
bool(0.0)
bool(None)
bool("")
bool([])
bool(())
bool({})
bool(set())
bool(range(0))
bool(True), bool(1), bool(2.0), bool("hello"), bool([1,2]), bool((2,3)), bool(range(10))
###Output
_____no_output_____
###Markdown
NoneThe None type includes a single value `None`, used to indicate the absence of a value. `None` has the type `NoneType`. It is often used to declare a variable whose value may be assigned later.
###Code
nothing = None
type(nothing)
###Output
_____no_output_____
###Markdown
StringA string is used to represent text (*a string of characters*) in Python. Strings must be surrounded by quotation marks (either the single quote `'` or the double quote `"`). Strings have the type `str`.
###Code
today = "Friday"
today
type(today)
###Output
_____no_output_____
###Markdown
You can use single quotes inside a string written with double quotes, and vice versa.
###Code
my_favorite_movie = "One Flew over the Cuckoo's Nest"
my_favorite_movie
my_favorite_pun = 'Thanks for explaining the word "many" to me, it means a lot.'
my_favorite_pun
###Output
_____no_output_____
###Markdown
To use a double quote within a string written with double quotes, *escape* the inner quotes by prefixing them with the `\` character.
###Code
another_pun = "The first time I got a universal remote control, I thought to myself \"This changes everything\"."
another_pun
###Output
_____no_output_____
###Markdown
Strings created using single or double quotes must begin and end on the same line. To create multiline strings, use three single quotes `'''` or three double quotes `"""` to begin and end the string. Line breaks are represented using the newline character `\n`.
###Code
yet_another_pun = '''Son: "Dad, can you tell me what a solar eclipse is?"
Dad: "No sun."'''
yet_another_pun
###Output
_____no_output_____
###Markdown
Multiline strings are best displayed using the `print` function.
###Code
print(yet_another_pun)
a_music_pun = """
Two windmills are standing in a field and one asks the other,
"What kind of music do you like?"
The other says,
"I'm a big metal fan."
"""
print(a_music_pun)
###Output
Two windmills are standing in a field and one asks the other,
"What kind of music do you like?"
The other says,
"I'm a big metal fan."
###Markdown
You can check the length of a string using the `len` function.
###Code
len(my_favorite_movie)
###Output
_____no_output_____
###Markdown
Note that special characters like `\n` and escaped characters like `\"` count as a single character, even though they are written and sometimes printed as 2 characters.
###Code
multiline_string = """a
b"""
multiline_string
len(multiline_string)
###Output
_____no_output_____
###Markdown
A string can be converted into a list of characters using the `list` function.
###Code
list(multiline_string)
###Output
_____no_output_____
###Markdown
Strings also support several list operations, which are discussed in the next section. We'll look at a couple of examples here.You can access individual characters within a string using the `[]` indexing notation. Note the character indices go from `0` to `n-1`, where `n` is the length of the string.
###Code
today = "Saturday"
today[0]
today[3]
today[7]
###Output
_____no_output_____
###Markdown
You can access a part of a string by providing a `start:end` range instead of a single index in `[]`.
###Code
today[5:8]
###Output
_____no_output_____
###Markdown
You can also check whether a string contains some text using the `in` operator.
###Code
'day' in today
'Sun' in today
###Output
_____no_output_____
###Markdown
Two or more strings can be joined or *concatenated* using the `+` operator. Be careful while concatenating strings, sometimes you may need to add a space character `" "` between words.
###Code
full_name = "Derek O'Brien"
greeting = "Hello"
greeting + full_name
greeting + " " + full_name + "!" # additional space
###Output
_____no_output_____
###Markdown
Strings in Python have many built-in *methods* that can be used to manipulate them. Let's try out some common string methods.> **Methods**: Methods are functions associated with data types, and are accessed using the `.` notation e.g. `variable_name.method()` or `"a string".method()`. Methods are a powerful technique for associating common operations with values of specific data types.The `.lower()`, `.upper()` and `.capitalize()` methods are used to change the case of the characters.
###Code
today.lower()
"saturday".upper()
"monday".capitalize() # changes first character to uppercase
###Output
_____no_output_____
###Markdown
The `.replace` method is used to replace a part of the string with another string. It takes the portion to be replaced and the replacement text as *inputs* or *arguments*.
###Code
another_day = today.replace("Satur", "Wednes")
another_day
###Output
_____no_output_____
###Markdown
Note that a new string is returned, and the original string is not modified.
###Code
today
###Output
_____no_output_____
###Markdown
The `.split` method can be used to split a string into a list of strings based on the character(s) provided.
###Code
"Sun,Mon,Tue,Wed,Thu,Fri,Sat".split(",")
###Output
_____no_output_____
###Markdown
The `.strip` method is used to remove whitespace characters from the beginning and end of a string.
###Code
a_long_line = " This is a long line with some space before, after, and some space in the middle.. "
a_long_line_stripped = a_long_line.strip()
a_long_line_stripped
###Output
_____no_output_____
###Markdown
The `.format` method is used to combine values of other data types e.g. integers, floats, booleans, lists etc. with strings. It is often used to create output messages for display.
###Code
# Input variables
cost_of_ice_bag = 1.25
profit_margin = .2
number_of_bags = 500
# Template for output message
output_template = """If a grocery store sells ice bags at $ {} per bag, with a profit margin of {} %,
then the total profit it makes by selling {} ice bags is $ {}."""
print(output_template)
# Inserting values into the string
total_profit = cost_of_ice_bag * profit_margin * number_of_bags
output_message = output_template.format(cost_of_ice_bag, profit_margin*100, number_of_bags, total_profit)
print(output_message)
###Output
If a grocery store sells ice bags at $ 1.25 per bag, with a profit margin of 20.0 %,
then the total profit it makes by selling 500 ice bags is $ 125.0.
###Markdown
Notice how the placeholders `{}` in the `output_template` string are replaced with the arguments provided to the `.format` method.It is also possible to use the string concatenation operator `+` to combine strings with other values, however, those values must first be converted to strings using the `str` function.
###Code
"If a grocery store sells ice bags at $ " + cost_of_ice_bag + ", with a profit margin of " + profit_margin
"If a grocery store sells ice bags at $ " + str(cost_of_ice_bag) + ", with a profit margin of " + str(profit_margin)
###Output
_____no_output_____
###Markdown
In fact, the `str` function can be used to convert a value of any data type into a string.
###Code
str(23)
str(23.432)
str(True)
the_3_musketeers = ["Athos", "Porthos", "Aramis"]
str(the_3_musketeers)
###Output
_____no_output_____
###Markdown
Note that all string methods return new values, and DO NOT change the existing string. You can find a full list of string methods here: https://www.w3schools.com/python/python_ref_string.asp. Strings also support the comparison operators `==` and `!=` for checking whether two strings are equal.
###Code
first_name = "John"
first_name == "Doe"
first_name == "John"
first_name != "Jane"
###Output
_____no_output_____
###Markdown
We've looked at the primitive data types in Python, and we're now ready to explore non-primitive data structures or containers. ListA list in Python is an ordered collection of values. Lists can hold values of different data types, and support operations to add, remove and change values. Lists have the type `list`.To create a list, enclose a list of values within square brackets `[` and `]`, separated by commas.
###Code
fruits = ['apple', 'banana', 'cherry']
fruits
type(fruits)
###Output
_____no_output_____
###Markdown
Let's try creating a list containing values of different data types, including another list.
###Code
a_list = [23, 'hello', None, 3.14, fruits, 3 <= 5]
a_list
empty_list = []
empty_list
###Output
_____no_output_____
###Markdown
To determine the number of values in a list, use the `len` function. In general, the `len` function can be used to determine the number of values in several other data types.
###Code
len(fruits)
print("Number of fruits:", len(fruits))
len(a_list)
len(empty_list)
###Output
_____no_output_____
###Markdown
You can access the elements of a list using the *index* of the element, starting from index 0.
###Code
fruits[0]
fruits[1]
fruits[2]
###Output
_____no_output_____
###Markdown
If you try to access an index equal to or higher than the length of the list, Python raises an `IndexError`. You can also use negative indices to access elements from the end of the list (`-1` is the last element, `-2` the second last, and so on); going past the start of the list this way raises an `IndexError` as well.
###Code
fruits[3]
fruits[4]
fruits[-1]
fruits[-2]
fruits[-3]
fruits[-4]
###Output
_____no_output_____
###Markdown
You can also access a range of values from the list. The result is itself a list. Let us look at some examples.
###Code
a_list = [23, 'hello', None, 3.14, fruits, 3 <= 5]
a_list
len(a_list)
a_list[2:5]
###Output
_____no_output_____
###Markdown
Note that the start index (`2` in the above example) of the range is included in the list, but the end index (`5` in the above example) is not included. So, the result has 3 values (indices `2`, `3` and `4`).Here are some experiments you should try out (a few of them are worked out in the cell below; try the rest yourself):* Try setting one or both indices of the range to be larger than the size of the list e.g. `a_list[2:10]`* Try setting the start index of the range to be larger than the end index of the range e.g. `a_list[10:2]`* Try leaving out the start or end index of a range e.g. `a_list[2:]` or `a_list[:5]`* Try using negative indices for the range e.g. `a_list[-2:-5]` or `a_list[-5:-2]` (can you explain the results?)> The flexible and interactive nature of Jupyter notebooks makes them a great tool for learning and experimentation. Most questions that arise while you are learning Python for the first time can be resolved by simply typing the code into a cell and executing it. Let your curiosity run wild, and discover what Python is capable of, and what it isn't!
###Code
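# A few of the experiments suggested above, worked out for illustration
# (a_list is the 6-element list defined earlier); the observed results are noted as comments.
a_list[2:10]    # an end index past the length is clipped, so this returns elements 2..5
a_list[2:]      # leaving out the end index slices to the end of the list
a_list[:5]      # leaving out the start index slices from the beginning
a_list[-5:-2]   # negative indices count from the end: the elements at indices 1, 2 and 3
a_list[-2:-5]   # the start lies after the end, so the result is an empty list []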
###Output
_____no_output_____
###Markdown
You can also change the value at a specific index within a list using the assignment operation.
###Code
fruits
fruits[1] = 'blueberry'
fruits
###Output
_____no_output_____
###Markdown
A new value can be added to the end of a list using the `append` method.
###Code
fruits.append('dates')
fruits
###Output
_____no_output_____
###Markdown
A new value can also be inserted at a specific index using the `insert` method.
###Code
fruits.insert(1, 'banana')
fruits
###Output
_____no_output_____
###Markdown
You can remove a value from the list using the `remove` method.
###Code
fruits.remove('blueberry')
fruits
###Output
_____no_output_____
###Markdown
What happens if a list has multiple instances of the value passed to `.remove`? Try it out.
###Code
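# A quick check of the question above: .remove deletes only the first matching value,
# so any later duplicates are kept (the list here is just an illustration).
numbers = [1, 2, 3, 2, 4]
numbers.remove(2)
numbers   # [1, 3, 2, 4] - only the first 2 was removed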
###Output
_____no_output_____
###Markdown
To remove an element from a specific index, use the `pop` method. The method also returns the removed element.
###Code
fruits
fruits.pop(1)
fruits
###Output
_____no_output_____
###Markdown
If no index is provided, the `pop` method removes the last element of the list.
###Code
fruits.pop()
fruits
###Output
_____no_output_____
###Markdown
You can test whether a list contains a value using the `in` operator.
###Code
'pineapple' in fruits
'cherry' in fruits
###Output
_____no_output_____
###Markdown
To combine two or more lists, use the `+` operator. This operation is also called *concatenation*.
###Code
fruits
more_fruits = fruits + ['pineapple', 'tomato', 'guava'] + ['dates', 'banana']
more_fruits
###Output
_____no_output_____
###Markdown
To create a copy of a list, use the `copy` method. Modifying the copied list does not affect the original list.
###Code
more_fruits_copy = more_fruits.copy()
more_fruits_copy
# Modify the copy
more_fruits_copy.remove('pineapple')
more_fruits_copy.pop()
more_fruits_copy
# Original list remains unchanged
more_fruits
###Output
_____no_output_____
###Markdown
Note that you cannot create a copy of a list by simply creating a new variable using the assignment operator `=`. The new variable will point to the same list, and any modifications performed using one variable will affect the other.
###Code
more_fruits
more_fruits_not_a_copy = more_fruits
more_fruits_not_a_copy.remove('pineapple')
more_fruits_not_a_copy.pop()
more_fruits_not_a_copy
more_fruits
###Output
_____no_output_____
###Markdown
Just like strings, there are several in-built methods to manipulate a list. Unlike strings, however, most list methods modify the original list, rather than returning a new one. Check out some common list operations here: https://www.w3schools.com/python/python_ref_list.asp Following are some exercises you can try out with list methods (possible solutions are sketched in the cell below; try writing your own first):* Reverse the order of elements in a list* Add the elements of one list to the end of another list* Sort a list of strings in alphabetical order* Sort a list of numbers in decreasing order
###Code
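# Possible solutions for the exercises above, using small example lists for illustration.
letters = ['b', 'a', 'c']
numbers = [3, 1, 2]
letters.reverse()           # reverse the order of elements in place
numbers.extend([5, 4])      # add the elements of one list to the end of another
letters.sort()              # sort a list of strings in alphabetical order
numbers.sort(reverse=True)  # sort a list of numbers in decreasing order
letters, numbers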
###Output
_____no_output_____
###Markdown
TupleA tuple is an ordered collection of values, similar to a list; however, it is not possible to add, remove or modify values in a tuple. A tuple is created by enclosing values within parentheses `(` and `)`, separated by commas.> Any data structure that cannot be modified after creation is called *immutable*. You can think of tuples as immutable lists.Let's try some experiments with tuples.
###Code
fruits = ('apple', 'cherry', 'dates')
# check no. of elements
len(fruits)
# get an element (positive index)
fruits[0]
# get an element (negative index)
fruits[-2]
# check if it contains an element
'dates' in fruits
# try to change an element
fruits[0] = 'avocado'
# try to append an element
fruits.append('blueberry')
# try to remove an element
fruits.remove('apple')
###Output
_____no_output_____
###Markdown
You can also skip the parentheses `(` and `)` while creating a tuple. Python automatically converts comma-separated values into a tuple.
###Code
the_3_musketeers = 'Athos', 'Porthos', 'Aramis'
the_3_musketeers
###Output
_____no_output_____
###Markdown
You can also create a tuple with just one element, if you include a comma after the element. Just wrapping it with parentheses `(` and `)` won't create a tuple.
###Code
single_element_tuple = 4,
single_element_tuple
another_single_element_tuple = (4,)
another_single_element_tuple
not_a_tuple = (4)
not_a_tuple
###Output
_____no_output_____
###Markdown
Tuples are often used to create multiple variables with a single statement.
###Code
point = (3, 4)
point_x, point_y = point
point_x
point_y
###Output
_____no_output_____
###Markdown
You can convert a list into a tuple using the `tuple` function, and vice versa using the `list` function
###Code
tuple(['one', 'two', 'three'])
list(('Athos', 'Porthos', 'Aramis'))
###Output
_____no_output_____
###Markdown
Tuples have just 2 built-in methods: `count` and `index`. Can you figure out what they do? While you could look for documentation and examples online, there's an easier way to check the documentation of a method, using the `help` function.
###Code
a_tuple = 23, "hello", False, None, 23, 37, "hello"
help(a_tuple.count)
###Output
Help on built-in function count:
count(value, /) method of builtins.tuple instance
Return number of occurrences of value.
###Markdown
Within a Jupyter notebook, you can also start a code cell with `?` and type the name of a function or method. When you execute this cell, you will see the documentation for the function/method in a pop-up window.
###Code
?a_tuple.index
###Output
_____no_output_____
###Markdown
Try using `count` and `index` with `a_tuple` in the code cells below.
###Code
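# Trying out the two tuple methods on a_tuple defined above.
a_tuple.count(23)       # 23 appears twice in the tuple, so this returns 2
a_tuple.index("hello")  # index of the first occurrence of "hello", i.e. 1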
###Output
_____no_output_____
###Markdown
DictionaryA dictionary is an unordered collection of items. Each item stored in a dictionary has a key and value. Keys are used to retrieve values from the dictionary. Dictionaries have the type `dict`.Dictionaries are often used to store many pieces of information e.g. details about a person, in a single variable. Dictionaries are created by enclosing key-value pairs within curly brackets `{` and `}`.
###Code
person1 = {
'name': 'John Doe',
'sex': 'Male',
'age': 32,
'married': True
}
person1
###Output
_____no_output_____
###Markdown
Dictionaries can also be created using the `dict` function.
###Code
person2 = dict(name='Jane Judy', sex='Female', age=28, married=False)
person2
type(person1)
###Output
_____no_output_____
###Markdown
Keys can be used to access values using square brackets `[` and `]`.
###Code
person1['name']
person1['married']
person2['name']
###Output
_____no_output_____
###Markdown
If a key isn't present in the dictionary, then a `KeyError` is raised.
###Code
person1['address']
###Output
_____no_output_____
###Markdown
The `get` method can also be used to access the value associated with a key.
###Code
person2.get("name")
###Output
_____no_output_____
###Markdown
The `get` method also accepts a default value which is returned if the key is not present in the dictionary.
###Code
person2.get("address", "Unknown")
###Output
_____no_output_____
###Markdown
You can check whether a key is present in a dictionary using the `in` operator.
###Code
'name' in person1
'address' in person1
###Output
_____no_output_____
###Markdown
You can change the value associated with a key using the assignment operator.
###Code
person2['married']
person2['married'] = True
person2['married']
###Output
_____no_output_____
###Markdown
The assignment operator can also be used to add new key-value pairs to the dictionary.
###Code
person1
person1['address'] = '1, Penny Lane'
person1
###Output
_____no_output_____
###Markdown
To remove a key and the associated value from a dictionary, use the `pop` method.
###Code
person1.pop('address')
person1
###Output
_____no_output_____
###Markdown
Dictionaries also provide methods to view the list of keys, values or key-value pairs inside them.
###Code
person1.keys()
person1.values()
person1.items()
person1.items()[1]
###Output
_____no_output_____
###Markdown
The results of the `keys`, `values` and `items` methods look like lists but don't seem to support the indexing operator `[]` for retrieving elements. Can you figure out how to access an element at a specific index from these results? Try it below. *Hint: Use the `list` function*
###Code
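# Following the hint above: convert the view objects to lists, then index them.
list(person1.keys())[0]    # first key
list(person1.values())[0]  # first value
list(person1.items())[1]   # second key-value pair, as a tuple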
###Output
_____no_output_____
###Markdown
Dictionaries provide many other methods. You can learn more about them here: https://www.w3schools.com/python/python_ref_dictionary.asp Here are some experiments you can try out with dictionaries (short sketches are given in the cell below):* What happens if you use the same key multiple times while creating a dictionary?* How can you create a copy of a dictionary (modifying the copy should not change the original)?* Can the value associated with a key itself be a dictionary?* How can you add the key-value pairs from one dictionary into another dictionary? Hint: See the `update` method.* Can the keys of a dictionary be something other than a string e.g. a number, boolean, list etc.?
###Code
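# Short sketches for the experiments listed above (the extra dictionaries here are made up for illustration).
duplicated_keys = {'a': 1, 'a': 2}   # a repeated key keeps only the last value: {'a': 2}
person1_copy = person1.copy()        # .copy() returns an independent copy of a dictionary
nested = {'person': person1}         # a value can itself be a dictionary
merged = dict(person1)
merged.update(person2)               # .update adds the key-value pairs of another dictionary
mixed_keys = {1: 'one', False: 'a boolean key', (2, 3): 'a tuple key'}  # keys need not be strings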
###Output
_____no_output_____ |
module_4/Data analysis. Project 4. Flights.ipynb | ###Markdown
Monthly fuel price data was taken from the website of the [Federal Air Transport Agency](https://favt.gov.ru/dejatelnost-ajeroporty-i-ajerodromy-ceny-na-aviagsm/?id=7329).
###Code
fuel_kg_per_hour = {
'Boeing 737-300': 2400,
'Sukhoi Superjet-100': 1700
}
fuel_price_per_month_Anapa = {
1: 41.435,
2: 39.553,
12: 47.101
}
# VAT was 18% in 2017
df['fuel_kg_per_hour'] = df.model.map(fuel_kg_per_hour)
df['fuel_spent'] = df.flight_duration * df.fuel_kg_per_hour
df['fuel_price_per_month_Anapa'] = df.actual_arrival.dt.month.map(fuel_price_per_month_Anapa)
df['fuel_cost'] = df.fuel_spent * (df.fuel_price_per_month_Anapa * 1.18)
###Output
_____no_output_____
###Markdown
Let's build a comparison table for the flights in the two directions.
###Code
grouped_by_city = data_for_analysis.groupby(['city', 'model'], as_index=False).agg(
{
'flight_id': 'count',
'flight_duration': 'mean',
'seats_count_mean': 'mean',
'profit': 'mean',
'flight_occupancy': 'mean'
})
grouped_by_city.profit = grouped_by_city.profit.astype('int64')
grouped_by_city
data = df.groupby(['flight_id', 'city', 'city.1', 'model'], as_index=False).agg({
'longitude': 'mean',
'latitude': 'mean',
'longitude.1': 'mean',
'latitude.1': 'mean',
'flight_duration': 'mean'
})
fig = go.Figure()
fig.add_trace(
go.Scattergeo(
lat = [data['latitude.1'][0]],
lon = [data['longitude.1'][0]],
mode = "text",
text=f"{data['city.1'][0]}",
showlegend=False
)
)
colors=['orange', 'purple']
for city in data.city.unique():
fig.add_trace(
go.Scattergeo(
lat = [data.loc[data.city == city, 'latitude'].values[0]],
lon = [data.loc[data.city == city, 'longitude'].values[0]],
mode = "text",
text=city,
showlegend=False,
)
)
opacity = data.loc[data.city == city, :].shape[0] / data.shape[0]
if city == 'Moscow':
col = colors[0]
else:
col = colors[1]
fig.add_trace(
go.Scattergeo(
lat = [data.loc[data.city == city, 'latitude.1'].values[0], data.loc[data.city == city, 'latitude'].values[0]],
lon = [data.loc[data.city == city, 'longitude.1'].values[0], data.loc[data.city == city, 'longitude'].values[0]],
mode = "lines",
opacity=opacity,
line = dict(width = 2, color=col),
name=f"Аnapa - {city} ({round(data.loc[data.city == city, 'flight_duration'].mean(), 1)} ч)"
)
)
fig.update_layout(
showlegend = True,
geo = go.layout.Geo(
scope = 'europe',
projection_type = 'natural earth',
showland = True,
showcountries=True,
showrivers=True,
showlakes=True,
lakecolor='lightblue',
rivercolor='lightblue',
landcolor = 'rgb(243, 243, 243)',
countrycolor = 'rgb(204, 204, 204)',
),
title=dict(
text="Карта полетов из Анапы за зимний период 2017 года",
font=dict(
size=16,
),
y=0.92,
x=0.5,
xanchor='center',
yanchor='top'
),
)
fig.update_geos(lataxis_showgrid=True, lonaxis_showgrid=True)
fig.update_layout(height=500, margin={"r":0,"t":50,"l":0,"b":0})
fig.show()
###Output
_____no_output_____
###Markdown
Let's group the data for further analysis.
###Code
data_for_analysis = df.groupby(
['flight_id', 'city', 'city.1', 'longitude', 'latitude', 'longitude.1', 'latitude.1', 'flight_duration', 'model',
'fuel_cost'],
as_index=False)\
.agg({'amount':['sum', 'count'], 'seats_count': 'mean'})
data_for_analysis.columns = list(map('_'.join, data_for_analysis.columns.values))
data_for_analysis.columns = [col[:-1] if col[-1] == '_' else col for col in data_for_analysis.columns.values]
###Output
_____no_output_____
###Markdown
Let's compute the aircraft occupancy, and the flight profit as the difference between ticket sales revenue and the cost of the fuel burned (kg per hour of flight).
###Code
data_for_analysis['flight_occupancy'] = data_for_analysis.amount_count / data_for_analysis.seats_count_mean
data_for_analysis['profit'] = data_for_analysis.amount_sum - data_for_analysis.fuel_cost
data_for_analysis.profit = data_for_analysis.profit.astype('int64')
data_for_analysis.sort_values('profit', inplace=True)
data_for_analysis
sns.histplot(data=data_for_analysis, x='profit', hue='model')
plt.ticklabel_format(useOffset=False, style='plain')
plt.title('Распределение прибыли от продажи билетов');
# plt.savefig('Распределение прибыли от продажи билетов.png');
###Output
_____no_output_____
###Markdown
This plot clearly shows a gap in the distribution of flight profits. However, since we are analysing data from two different aircraft models here, before drawing any further conclusions we need to examine the cabin occupancy of the flights.
###Code
# data_for_analysis.flight_occupancy.value_counts(bins=7).sort_index()
sns.histplot(data=data_for_analysis, x='flight_occupancy', hue='model', multiple="dodge")
plt.title('Распределение заполняемости салона');
# plt.savefig('Распределение заполняемости салона.png');
###Output
_____no_output_____
###Markdown
The cabin occupancy distribution for the low-profit flights shows that in most cases it exceeds 75%. Therefore, let's filter the flights whose occupancy is below 75%.
###Code
threshold_occupancy = 0.75
low_occupancy_flights = data_for_analysis[data_for_analysis.flight_occupancy < threshold_occupancy]
low_occupancy_flights
low_occupancy_flights.flight_id.values.tolist()
###Output
_____no_output_____
###Markdown
The flight IDs found - 136642, 136807, 136122, 136360 - are, by these preliminary results, low-profit and poorly filled. To confirm this, let's analyse the relationship between profitability and cabin occupancy for the two destinations.
###Code
sns.scatterplot(y='profit', x='flight_occupancy', hue='city',
data=data_for_analysis[data_for_analysis.flight_occupancy >= threshold_occupancy])
sns.scatterplot(y='profit', x='flight_occupancy', hue='city', alpha=0.6,
data=data_for_analysis[data_for_analysis.flight_occupancy < threshold_occupancy], legend=False)
plt.title('Зависимость прибыли от заполненности салона самолета', pad=20)
plt.axvline(0.75, linestyle='--', alpha=0.7);
# plt.savefig('Зависимость прибыли от заполненности салона самолета.png', dpi=1200)
###Output
_____no_output_____
###Markdown
The profit vs. cabin occupancy plot for the two destinations clearly shows that the Moscow flights lie almost exactly at the 75% threshold we set. Therefore, my recommendation is to drop flights 136642 and 136807 to Belgorod.
###Code
low_occupancy_flights[low_occupancy_flights.flight_id.isin([136642, 136807])]
###Output
_____no_output_____ |
algorithms/eeg-classification/notebooks/2-run-compute.ipynb | ###Markdown
Acquire datatokens for data and algorithm For compute-to-data, we need at least one data token and one algorithm token. Let's check if we have any of the required data tokens in our wallet.
###Code
token_address = data_token.address
from ocean_lib.web3_internal.currency import pretty_ether_and_wei
print(f"Data Scientist has {pretty_ether_and_wei(data_token.balanceOf(wallet.address), data_token.symbol())} data tokens.")
###Output
Data Scientist has 0 JUDTUR-75 (0 wei) data tokens.
###Markdown
You won't have any the first time you run this code (or after you run a compute job). We can either purchase some data tokens using the Ocean marketplace app or using the Python API. There are 2 options for publishing datasets on the Ocean marketplace. You can publish with fixed price or dynamic pricing. For simplicity, we have published the BCI dataset with fixed price. The code below is taken from the ocean.py tutorial for buying data tokens with [fixed price](https://github.com/oceanprotocol/ocean.py/blob/8087ca8d7bfcd489fead45b59cdf5021d21e2d9d/READMEs/fixed-rate-exchange-flow.md).
###Code
#Search for exchange_id from a specific block retrieved at 3rd step
#for a certain data token address (e.g. token_address).
logs = ocean.exchange.search_exchange_by_data_token(token_address)
#E.g. First exchange is the wanted one.
exchange_id = logs[0].args.exchangeId
from ocean_lib.web3_internal.currency import to_wei
tx_result = ocean.exchange.buy_at_fixed_rate(to_wei(1), wallet, to_wei(5), exchange_id, token_address)
assert tx_result, "failed buying data tokens"
print(f"Data Scientist has {pretty_ether_and_wei(data_token.balanceOf(wallet.address), data_token.symbol())} data tokens.")
###Output
Data Scientist has 1 JUDTUR-75 (1000000000000000000 wei) data tokens.
###Markdown
Let's purchase some algorithm tokens.
###Code
ALG_ddo = ocean.assets.resolve("did:op:617e7d2c21A99DB19A0435B1C704d4494c6115de")
alg_token = ocean.get_data_token(ALG_ddo.data_token_address)
print(f"Alg token info = '{ALG_ddo.values['dataTokenInfo']}'")
print(f"Alg name = '{ALG_ddo.metadata['main']['name']}'")
print(f"Data Scientist has {pretty_ether_and_wei(alg_token.balanceOf(wallet.address), alg_token.symbol())} algorithm tokens.")
###Output
Data Scientist has 58.1 BCITEST1 (58132857142857142900 wei) algorithm tokens.
###Markdown
Start compute jobOnly inputs needed: DATA_did, ALG_did. Everything else can get computed as needed.
###Code
DATA_did = DATA_ddo.did
compute_service = DATA_ddo.get_service('compute')
from ocean_lib.web3_internal.constants import ZERO_ADDRESS
from ocean_lib.models.compute_input import ComputeInput
# order & pay for dataset
dataset_order_requirements = ocean.assets.order(
DATA_did, wallet.address, service_type=compute_service.type
)
DATA_order_tx_id = ocean.assets.pay_for_service(
ocean.web3,
dataset_order_requirements.amount,
dataset_order_requirements.data_token_address,
DATA_did,
compute_service.index,
ZERO_ADDRESS,
wallet,
dataset_order_requirements.computeAddress,
)
###Output
_____no_output_____
###Markdown
If a cell shows an error, try to run it again.
###Code
ALG_did = ALG_ddo.did
algo_service = ALG_ddo.get_service('access')
# order & pay for algo
algo_order_requirements = ocean.assets.order(
ALG_did, wallet.address, service_type=algo_service.type
)
ALG_order_tx_id = ocean.assets.pay_for_service(
ocean.web3,
algo_order_requirements.amount,
algo_order_requirements.data_token_address,
ALG_did,
algo_service.index,
ZERO_ADDRESS,
wallet,
algo_order_requirements.computeAddress,
)
compute_inputs = [ComputeInput(DATA_did, DATA_order_tx_id, compute_service.index)]
job_id = ocean.compute.start(
compute_inputs,
wallet,
algorithm_did=ALG_did,
algorithm_tx_id=ALG_order_tx_id,
algorithm_data_token=alg_token.address
)
print(f"Started compute job with id: {job_id}")
###Output
Started compute job with id: 9e5f4fbf7e7f4265a064983f37fffea9
###Markdown
Monitor logs / algorithm outputYou can check the job status as many times as needed:
###Code
status_dict = ocean.compute.status(DATA_did, job_id, wallet)
while status_dict['statusText'] != 'Job finished':
status_dict = ocean.compute.status(DATA_did, job_id, wallet)
print(status_dict)
time.sleep(1)
###Output
{'ok': True, 'status': 20, 'statusText': 'Configuring volumes'}
{'ok': True, 'status': 20, 'statusText': 'Configuring volumes'}
{'ok': True, 'status': 20, 'statusText': 'Configuring volumes'}
{'ok': True, 'status': 40, 'statusText': 'Running algorithm '}
{'ok': True, 'status': 40, 'statusText': 'Running algorithm '}
{'ok': True, 'status': 60, 'statusText': 'Publishing results'}
{'ok': True, 'status': 60, 'statusText': 'Publishing results'}
{'ok': True, 'status': 60, 'statusText': 'Publishing results'}
{'ok': True, 'status': 70, 'statusText': 'Job finished'}
###Markdown
This will output the status of the current job.Here is a list of possible results: [Operator Service Status description](https://github.com/oceanprotocol/operator-service/blob/main/API.md#status-description).Once you get `{'ok': True, 'status': 70, 'statusText': 'Job finished'}`, you can check the result of the job.
###Code
result = ocean.compute.result_file(DATA_did, job_id, 0, wallet) # 0 index, means we retrieve the results from the first dataset index
###Output
_____no_output_____
###Markdown
Sometimes the result is empty. When this happens, I just start the compute job again.
###Code
str(result).split('\\n')
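# A minimal guard for the empty-result case mentioned above (an illustrative sketch, not part of the
# original flow): if nothing came back, start the same compute job again with the inputs defined earlier,
# then poll the status and fetch the result as before.
if not result:
    job_id = ocean.compute.start(
        compute_inputs,
        wallet,
        algorithm_did=ALG_did,
        algorithm_tx_id=ALG_order_tx_id,
        algorithm_data_token=alg_token.address
    )
    print(f"Restarted compute job with id: {job_id}")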
###Output
_____no_output_____ |
multi-model-register-and-deploy.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Deploy Multiple Models as WebserviceThis example shows how to deploy a Webservice with multiple models in step-by-step fashion: 1. Register Models 2. Deploy Models as Webservice PrerequisitesIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't.
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
SDK version: 1.22.0
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
###Output
fmoz-workspace
ml
westus2
421b563f-a977-42aa-8934-f41ca5664b73
###Markdown
Register Models In this example, we will be using and registering two models. First we will train two simple models on the [diabetes dataset](https://scikit-learn.org/stable/datasets/index.htmldiabetes-dataset) included with scikit-learn, serializing them to files in the current directory.
###Code
import joblib
import sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import BayesianRidge, Ridge
x, y = load_diabetes(return_X_y=True)
first_model = Ridge().fit(x, y)
second_model = BayesianRidge().fit(x, y)
joblib.dump(first_model, "first_model.pkl")
joblib.dump(second_model, "second_model.pkl")
print("Trained models using scikit-learn {}.".format(sklearn.__version__))
###Output
Trained models using scikit-learn 0.22.2.post1.
###Markdown
Now that we have our trained models locally, we will register them as Models with the names `my_first_model` and `my_second_model` in the workspace.
###Code
from azureml.core.model import Model
my_model_1 = Model.register(model_path="first_model.pkl",
model_name="my_first_model",
workspace=ws)
my_model_2 = Model.register(model_path="second_model.pkl",
model_name="my_second_model",
workspace=ws)
###Output
Registering model my_first_model
Registering model my_second_model
###Markdown
Write the Entry ScriptWrite the script that will be used to predict on your models. Model.get_model_path()To get the paths of your models, use the `Model.get_model_path(model_name, version=None, _workspace=None)` method. This method will find the path to a model using the name of the model registered under the workspace.In this example, we do not use the optional arguments `version` and `_workspace`. Using environment variable AZUREML_MODEL_DIRIn other [examples](../deploy-to-cloud/score.py) with a single model deployment, we use the environment variable `AZUREML_MODEL_DIR` and model file name to get the model path. For single model deployments, this environment variable is the path to the model folder (`./azureml-models/$MODEL_NAME/$VERSION`). When we deploy multiple models, the environment variable is set to the folder containing all models (./azureml-models).If you're using multiple models and you know the versions of the models you deploy, you can use this method to get the model path:
```python
# Construct the model path using the registered model name, version, and model file name
model_1_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_first_model', '1', 'first_model.pkl')
```
###Code
%%writefile score.py
import joblib
import json
import numpy as np
from azureml.core.model import Model
def init():
global model_1, model_2
# Here "my_first_model" is the name of the model registered under the workspace.
# This call will return the path to the .pkl file on the local disk.
model_1_path = Model.get_model_path(model_name='my_first_model')
model_2_path = Model.get_model_path(model_name='my_second_model')
# Deserialize the model files back into scikit-learn models.
model_1 = joblib.load(model_1_path)
model_2 = joblib.load(model_2_path)
# Note you can pass in multiple rows for scoring.
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = np.array(data)
# Call predict() on each model
result_1 = model_1.predict(data)
result_2 = model_2.predict(data)
# You can return any JSON-serializable value.
return {"prediction1": result_1.tolist(), "prediction2": result_2.tolist()}
except Exception as e:
result = str(e)
return result
###Output
Overwriting score.py
###Markdown
Create Environment You can now create and/or use an Environment object when deploying a Webservice. The Environment can have been previously registered with your Workspace, or it will be registered with it as a part of the Webservice deployment. Please note that your environment must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.More information can be found in our [using environments notebook](../training/using-environments/using-environments.ipynb).
###Code
from azureml.core import Environment
env = Environment("deploytocloudenv")
env.python.conda_dependencies.add_pip_package("joblib")
env.python.conda_dependencies.add_pip_package("numpy")
env.python.conda_dependencies.add_pip_package("scikit-learn=={}".format(sklearn.__version__))
###Output
_____no_output_____
###Markdown
Create Inference ConfigurationThere is now support for a source directory: you can upload an entire folder from your local machine as dependencies for the Webservice.Note: in that case, the environment's entry_script and file_path are relative paths to the source_directory path; myenv.docker.base_dockerfile is a string containing extra docker steps or contents of the docker file.Sample code for using a source directory:
```python
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig

myenv = Environment.from_conda_specification(name='myenv', file_path='env/myenv.yml')
# explicitly set base_image to None when setting base_dockerfile
myenv.docker.base_image = None
# add extra docker commands to execute
myenv.docker.base_dockerfile = "FROM ubuntu\n RUN echo \"hello\""
inference_config = InferenceConfig(source_directory="C:/abc", entry_script="x/y/score.py", environment=myenv)
```
- file_path: input parameter to Environment constructor. Manages conda and python package dependencies.
- env.docker.base_dockerfile: any extra steps you want to inject into docker file
- source_directory: holds source path as string; this entire folder gets added in the image so it's really easy to access any files within this folder or subfolder
- entry_script: contains logic specific to initializing your model and running predictions
###Code
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(entry_script="score.py", environment=env)
###Output
_____no_output_____
###Markdown
Deploy Model as Webservice on Azure Container InstanceNote that the service creation can take a few minutes.
###Code
from azureml.core.webservice import AciWebservice
aci_service_name = "aciservice-multimodel"
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, aci_service_name, [my_model_1, my_model_2], inference_config, deployment_config, overwrite=True)
service.wait_for_deployment(True)
print(service.state)
###Output
Tips: You can try get_logs(): https://aka.ms/debugimage#dockerlog or local deployment: https://aka.ms/debugimage#debug-locally to debug if deployment takes longer than 10 minutes.
Running.....................................................
###Markdown
Test web service
###Code
import json
test_sample = json.dumps({'data': x[0:2].tolist()})
prediction = service.run(test_sample)
print(prediction)
###Output
_____no_output_____
###Markdown
Delete ACI to clean up
###Code
service.delete()
###Output
_____no_output_____ |
Model backlog/Train/35-tweet-train-distilbert-public.ipynb | ###Markdown
Dependencies
###Code
import json
from tweet_utility_scripts import *
from transformers import TFDistilBertModel, DistilBertConfig
from tokenizers import BertWordPieceTokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, GlobalMaxPooling1D, Concatenate
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/tweet-dataset-split-distilbert-uncased-128/'
hold_out = pd.read_csv(database_base_path + 'hold-out.csv')
train = hold_out[hold_out['set'] == 'train']
validation = hold_out[hold_out['set'] == 'validation']
display(hold_out.head())
# Unzip files
!tar -xvf /kaggle/input/tweet-dataset-split-distilbert-uncased-128/hold_out.tar.gz
base_data_path = 'hold_out/'
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
# Delete data dir
shutil.rmtree(base_data_path)
###Output
_____no_output_____
###Markdown
Model parameters
###Code
tokenizer_path = database_base_path + 'vocab.txt'
base_path = '/kaggle/input/qa-transformers/distilbert/'
model_path = 'model.h5'
config = {
"MAX_LEN": 128,
"BATCH_SIZE": 16,
"EPOCHS": 5,
"LEARNING_RATE": 5e-5,
"ES_PATIENCE": 2,
"question_size": 3,
"base_model_path": base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5',
"config_path": base_path + 'distilbert-base-uncased-distilled-squad-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Model
###Code
module_config = DistilBertConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids')
base_model = TFDistilBertModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})
last_state = sequence_output[0]
x = GlobalAveragePooling1D()(last_state)
y_start = Dense(MAX_LEN, activation='softmax', name='y_start')(x)
y_end = Dense(MAX_LEN, activation='softmax', name='y_end')(x)
model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end])
model.compile(optimizers.Adam(lr=config['LEARNING_RATE']),
loss=losses.CategoricalCrossentropy(),
metrics=[metrics.CategoricalAccuracy()])
return model
model = model_fn(config['MAX_LEN'])
model.summary()
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
attention_mask (InputLayer) [(None, 128)] 0
__________________________________________________________________________________________________
input_ids (InputLayer) [(None, 128)] 0
__________________________________________________________________________________________________
token_type_ids (InputLayer) [(None, 128)] 0
__________________________________________________________________________________________________
base_model (TFDistilBertModel) ((None, 128, 768),) 66362880 attention_mask[0][0]
input_ids[0][0]
token_type_ids[0][0]
__________________________________________________________________________________________________
global_average_pooling1d (Globa (None, 768) 0 base_model[0][0]
__________________________________________________________________________________________________
y_start (Dense) (None, 128) 98432 global_average_pooling1d[0][0]
__________________________________________________________________________________________________
y_end (Dense) (None, 128) 98432 global_average_pooling1d[0][0]
==================================================================================================
Total params: 66,559,744
Trainable params: 66,559,744
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Train
###Code
tb_callback = TensorBoard(log_dir='./')
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
history = model.fit(list(x_train), list(y_train),
validation_data=(list(x_valid), list(y_valid)),
callbacks=[es, tb_callback],
epochs=config['EPOCHS'],
batch_size=config['BATCH_SIZE'],
verbose=1).history
model.save_weights(model_path)
# Compress logs dir
!tar -cvzf train.tar.gz train
!tar -cvzf validation.tar.gz validation
# Delete logs dir
if os.path.exists('/kaggle/working/train/'):
shutil.rmtree('/kaggle/working/train/')
if os.path.exists('/kaggle/working/validation/'):
shutil.rmtree('/kaggle/working/validation/')
###Output
train/
train/plugins/
train/plugins/profile/
train/plugins/profile/2020-04-12_15-12-53/
train/plugins/profile/2020-04-12_15-12-53/local.trace
train/events.out.tfevents.1586704373.ca30d9309378.profile-empty
train/events.out.tfevents.1586704367.ca30d9309378.13.5413.v2
validation/
validation/events.out.tfevents.1586704567.ca30d9309378.13.32508.v2
###Markdown
Model loss graph
###Code
sns.set(style="whitegrid")
plot_metrics(history, metric_list=['loss', 'y_start_loss', 'y_end_loss',
'y_start_categorical_accuracy', 'y_end_categorical_accuracy'])
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
tokenizer = BertWordPieceTokenizer(tokenizer_path , lowercase=True)
tokenizer.save('./')
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
train_preds = model.predict(list(x_train))
valid_preds = model.predict(list(x_valid))
train['start'] = train_preds[0].argmax(axis=-1)
train['end'] = train_preds[1].argmax(axis=-1)
train["end"].clip(0, train["text_len"], inplace=True)
train["start"].clip(0, train["end"], inplace=True)
train['prediction'] = train.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
train["prediction"].fillna('', inplace=True)
validation['start'] = valid_preds[0].argmax(axis=-1)
validation['end'] = valid_preds[1].argmax(axis=-1)
validation["end"].clip(0, validation["text_len"], inplace=True)
validation["start"].clip(0, validation["end"], inplace=True)
validation['prediction'] = validation.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
validation["prediction"].fillna('', inplace=True)
display(evaluate_model(train, validation))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
print('Train set')
display(train.head(10))
print('Validation set')
display(validation.head(10))
###Output
Train set
|
Bank Marketing/Kaggle_Bank_Marketing_csv.ipynb | ###Markdown
Title
###Code
# Analysis of a Kaggle database
# Bank Marketing
###Output
_____no_output_____
###Markdown
Head
###Code
# by geanclm in 01/03/2022 at 19:15h
###Output
_____no_output_____
###Markdown
Local files
###Code
# files used
!dir
###Output
O volume na unidade C é Windows
 O Número de Série do Volume é 2656-7D0D
Pasta de C:\Users\geanc\Downloads\DATA SCIENCE\FLAI\MBA\Data Analytics\Bank Marketing
04/03/2022 12:15 <DIR> .
04/03/2022 12:15 <DIR> ..
01/03/2022 19:08 <DIR> .ipynb_checkpoints
01/03/2022 19:00 5.834.924 bank-additional-full__.csv
04/03/2022 12:15 1.766.937 bank-additional-full__.ods
01/03/2022 18:59 5.458 bank-additional-names.txt
04/03/2022 11:35 34.496 Kaggle_Bank_Marketing_csv.ipynb
4 arquivo(s) 7.641.815 bytes
3 pasta(s) 622.840.811.520 bytes disponíveis
###Markdown
Import libs
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Metadata
###Code
# 01 - age (numeric)
# 02 - job : type of job (categorical)
# 03 - marital : marital status (categorical)
# 04 - education (categorical) = level of education
# 05 - default: has credit in default? (categorical)
# 06 - housing: has housing loan? (categorical)
# 07 - loan: has personal loan? (categorical)
# 08 - contact: contact communication type (categorical)
# 09 - month: last contact month of year (categorical)
# 10 - day_of_week: last contact day of the week (categorical)
# 11 - duration: last contact duration, in seconds (numeric)
# 12 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
# 13 - pdays: (numeric) = number of days since the client was last contacted
# 14 - previous: (numeric) = number of contacts made with the client before the last campaign
# 15 - poutcome: (categorical) = outcome of the previous marketing campaign
# 16 - emp.var.rate: (numeric) = employment variation rate
# 17 - cons.price.idx: (numeric) = consumer price index (CPI)
# 18 - cons.conf.idx: (numeric) = consumer confidence index
# 19 - euribor3m: (numeric) = euribor 3-month rate
# 20 - nr.employed: (numeric) = number of employees
# Output variable (desired target):
# 21 - y - has the client subscribed a term deposit? (binary: 'yes', 'no')
###Output
_____no_output_____
###Markdown
Import data
###Code
# source: https://www.kaggle.com/henriqueyamahata/bank-marketing?select=bank-additional-names.txt
df = pd.read_csv('bank-additional-full__.csv', sep=';')
df
df.info()
df['age'].mean()
# simple random sampling
df.sample(100)
# stratified sampling of 50 single clients
df[df['marital']=='single'].sample(50)
# stratified sampling of 50 married clients
df[df['marital']=='married'].sample(50)
###Output
_____no_output_____ |
Spaceship-Preprocessing.ipynb | ###Markdown
Imputing Missing Values HomePlanet
###Code
df.HomePlanet.unique()
sns.countplot(x ='HomePlanet', data = df)
plt.show()
df.HomePlanet.ffill(axis=0,inplace=True)
###Output
_____no_output_____
###Markdown
Destination
###Code
df.Destination.unique()
sns.countplot(x ='Destination', data = df)
plt.show()
df.Destination.ffill(axis=0,inplace=True)
###Output
_____no_output_____
###Markdown
CryoSleep
###Code
sns.countplot(x ='CryoSleep', data = df)
plt.show()
## Replacing Null values with random value (True or False)
for i,n in enumerate(df.CryoSleep):
if(n!=True and n!=False):
val = random.choice([True,False])
df.loc[i,"CryoSleep"] = val
df.head()
###Output
_____no_output_____
###Markdown
We could find out whether people belong to the same family by grouping them by their last name, but here we simply drop the Name column (a possible approach is sketched as a comment in the cell below).
###Code
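# A possible alternative (not used here): identify families by grouping passengers
# on their last name before dropping the column, e.g.:
#   df['LastName'] = df['Name'].str.split().str[-1]
#   family_size = df.groupby('LastName')['LastName'].transform('count')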
df.drop('Name', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Cabin
###Code
Deck = [cab.split("/")[0] if type(cab)!= type(2.33) else np.nan for cab in df.Cabin]
Num = [cab.split("/")[1] if type(cab)!= type(2.33) else np.nan for cab in df.Cabin]
Side = [cab.split("/")[2] if type(cab)!= type(2.33) else np.nan for cab in df.Cabin]
df["Deck"] = Deck
df["Num"] = Num
df["Side"] = Side
df.drop('Cabin', axis=1, inplace=True)
df.Deck.unique()
sns.countplot(x ='Deck', data = df)
plt.show()
df.Deck.ffill(axis=0,inplace=True)
###Output
_____no_output_____
###Markdown
From the analysis we found that the cabin numbers range from 0 to 1894. So we build a list of the numbers that are not yet assigned to anyone; the missing (NaN) entries can then be filled with random values drawn from that list.
###Code
num_ls = list(range(0,1895))
for i in df.Num.unique():
if(type(i) == type(2.23)):
continue
if int(i) in num_ls:
num_ls.remove(int(i))
## Replacing Null values with random value (from num_ls)
for i,n in enumerate(df.Num):
if(type(n) == type(2.23)):
val = random.choice(num_ls)
df.loc[i,"Num"] = val
df.Side.unique()
sns.countplot(x ='Side', data = df)
plt.show()
## Replacing null values with a random value ('P' or 'S')
# df.Side contains 'P', 'S', or NaN; only the missing entries are filled
for i, n in enumerate(df.Side):
    if pd.isnull(n):
        val = random.choice(['P', 'S'])
        df.loc[i, "Side"] = val
###Output
_____no_output_____
###Markdown
Analysing RoomService, FoodCourt, ShoppingMall, Spa, VRDeck Since the histograms for these columns all look similar, we impute their missing values with the median.
###Code
sns.histplot(data = df,x = 'RoomService',bins=7)
plt.show()
df['RoomService'] = df['RoomService'].fillna(df['RoomService'].median())
df['FoodCourt'] = df['FoodCourt'].fillna(df['FoodCourt'].median())
df['ShoppingMall'] = df['ShoppingMall'].fillna(df['ShoppingMall'].median())
df['Spa'] = df['Spa'].fillna(df['Spa'].median())
df['VRDeck'] = df['VRDeck'].fillna(df['VRDeck'].median())
sns.countplot(x ='VIP', data = df)
plt.show()
df.VIP.value_counts()
df["Extra"] = df.RoomService + df.FoodCourt + df.ShoppingMall + df.Spa + df.VRDeck
df.head()
df.describe()
sns.histplot(data = df,x = 'Extra',bins=25)
plt.show()
df_vip = df[df.VIP==True]
df_vip.describe()
sns.histplot(data = df_vip,x = 'Extra')
plt.show()
df_general = df[df.VIP==False]
df_general.describe()
sns.histplot(data = df_general,x = 'Extra',bins=25)
plt.show()
## Replacing null VIP values: passengers whose total spending (the Extra column) is at least 5000 are treated as VIPs
for i, n in enumerate(df.VIP):
    if type(n) == type(2.23):  # a float here means the VIP value is NaN
        if df.loc[i, "Extra"] >= 5000:
            df.loc[i, "VIP"] = True
        else:
            df.loc[i, "VIP"] = False
Group_no = [int(idd.split("_")[1]) for idd in df.PassengerId]
df["Group_no"] = Group_no
###Output
_____no_output_____
###Markdown
Imputing the Age column with the mean age within each `Group_no`
###Code
df['Age'] = df['Age'].groupby(df['Group_no']).apply(lambda x: x.fillna(x.mean()))
df.drop('PassengerId', axis=1, inplace=True)
df.drop('Extra', axis=1, inplace=True)
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Converting categorical variables to numeric
###Code
df.CryoSleep = df.CryoSleep.map({True: 1, False: 0})
df.VIP = df.VIP.map({True: 1, False: 0})
df.Transported = df.Transported.map({True: 1, False: 0})
df.head()
df.HomePlanet.unique()
enc = OneHotEncoder(drop='first')
arrival = enc.fit_transform(df.HomePlanet.values.reshape(-1, 1)).toarray()
arrival = pd.DataFrame(arrival, columns=['Europa', 'Mars'])  # drop='first' drops the first category ('Earth'), leaving Europa and Mars
df = pd.concat([df,arrival],axis=1)
df.drop('HomePlanet', axis=1, inplace=True)
df.Destination.unique()
enc2 = OneHotEncoder(drop='first')
destination = enc2.fit_transform(df.Destination.values.reshape(-1, 1)).toarray()
destination = pd.DataFrame(destination,columns=['D2','D3'])
df = pd.concat([df,destination],axis=1)
df.drop('Destination', axis=1, inplace=True)
df.Side.unique()
enc3 = OneHotEncoder(drop='first')
side = enc3.fit_transform(df.Side.values.reshape(-1, 1)).toarray()
side = pd.DataFrame(side,columns=['S'])
df = pd.concat([df,side],axis=1)
df.drop('Side', axis=1, inplace=True)
df.head()
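###Markdown
A short check (a sketch, assuming scikit-learn >= 1.0 for `get_feature_names_out`): the fitted encoders can report which category was dropped and which remain, so the encoded column names above don't have to be hard-coded.
###Code
# Categories seen by each fitted encoder; with drop='first' the first (alphabetical) one is dropped
for col, encoder in [("HomePlanet", enc), ("Destination", enc2), ("Side", enc3)]:
    print(col, "categories:", list(encoder.categories_[0]), "-> encoded columns:", list(encoder.get_feature_names_out()))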
df.Deck.unique()
sns.countplot(x ='Deck', data = df)
plt.show()
## Applying Mean encoding
mean_encoded = df.groupby("Deck")["Transported"].mean()
df["Deck"] = df["Deck"].map(dict(mean_encoded))
df.head()
df.Group_no.unique()
sns.countplot(x ='Group_no', data = df)
plt.show()
## Applying Mean encoding
mean_encoded = df.groupby("Group_no")["Transported"].mean()
df["Group_no"] = df["Group_no"].map(dict(mean_encoded))
df.head()
###Output
_____no_output_____
###Markdown
Scaling the data - StandardScalar
###Code
scaler = StandardScaler()
df[['Age','RoomService','FoodCourt','ShoppingMall','Spa','VRDeck']] = scaler.fit_transform(df[['Age','RoomService','FoodCourt','ShoppingMall','Spa','VRDeck']])
df.head()
###Output
_____no_output_____
###Markdown
Modelling Classification Model
###Code
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score
X = df.loc[:, df.columns != 'Transported']
y = df.Transported
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,random_state=223, stratify=y)
sns.countplot(x ='Transported', data = df)
plt.show()
model = LogisticRegression()
model.fit(X_train, y_train)
model.score(X_train, y_train)
model.score(X_test, y_test)
## Predicting
predict = model.predict(X_test)
accuracy_score(y_test, predict)
confusion_matrix(y_test, predict)/len(y_test)
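###Markdown
`precision_score` and `recall_score` were imported above but not yet used; as a quick sketch, they complement the accuracy and the confusion matrix:
###Code
# Precision and recall for the positive (Transported = 1) class
print("Precision:", precision_score(y_test, predict))
print("Recall:", recall_score(y_test, predict))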
###Output
_____no_output_____ |
econmt-probability-random.ipynb | ###Markdown
Probability and Random Processes IntroductionIn this chapter, you'll learn about how to use randomness and probability with code.If you're running this code (either by copying and pasting it, or by downloading it using the icons at the top of the page), you may need to install the packages it uses by, for example, running `pip install packagename` on your computer's command line. (If you're not sure what a command line is, take a quick look at the basics of coding chapter.) ImportsFirst we need to import the packages we'll be using
###Code
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# Set max rows displayed for readability
pd.set_option('display.max_rows', 6)
# Plot settings
plt.style.use("https://github.com/aeturrell/coding-for-economists/raw/main/plot_style.txt")
###Output
_____no_output_____
###Markdown
Probability (definitions)Let's get the jargon and definitions out of the way first, then we'll find out a bit about random numbers in code, then we'll actually see how to *use* random numbers for probability!Any probabilistic event can be considered to have three components: the sample space of possible outcomes $\Omega$, the set of possible events $\mathcal{F}$, and a probability measure $P$. Furthermore, $A$ is often used to denote a subset of $\Omega$ and $A^c$ the complement of $A$, while individual events in $\Omega$ are $\omega$. In the classic example of rolling a 6-sided fair die once, $\Omega = \{1, 2, 3, 4, 5, 6\}$. If $A = \{1, 2, 3\}$ then, by definition of $\Omega$, $A^c = \{4, 5, 6\}$. The probability measure of any sample space satisfies $P(\Omega)=1$ and $P(\varnothing)$ = 0.The most important examples of probability that arise in economics are **continuous random variables** and **discrete random variables**. A random variable is a function $X: \Omega \rightarrow \mathbb{R}$ such that $\{ \omega \in \Omega: X(\omega) \leq x\} \in \mathcal{F}$ for each $x\in\mathbb{R}$. All this is saying is that for every possible outcome, the random variable is a mapping of that outcome into a well-defined space of real numbers. It makes the connection between outcomes, events, and real numbers.Now we'll go on to more practical matters: discrete and continuous random variables. Discrete random variablesA random variable is discrete if it only takes values in a countable subset of $\mathbb{R}$; think the integers, or $x\in\{0, 1\}$. The distribution of such values is given by the **probability mass function**, or pmf. The pmf is an object that tells us the probability mass given to specific outcomes. The more precise definition is$$p(x_i) = P(X=x_i) = P(\underbrace{\{\omega\in \Omega\ |\ X(\omega) = x_i\}}_{\text{set of outcomes resulting in}\ X=x_i}).$$It has a few key properties. $p:\mathbb{R} \rightarrow [0, 1]$, the probabilities of all outcomes sum to 1, ie $\displaystyle{\sum_{x_i} p(x_i)}=1$, the probabilities satisfy $p(x_i) \geq 0 \quad\forall x_i$, and $P(X \in A) = \displaystyle\sum_{x_i \in A} p(x_i)$. A fair six-sided die is the canonical example.Another useful object is the **cumulative distribution function**, which is defined generally as $\text{cdf}(x) = P(X \leq x)\quad \forall x \in \mathbb{R}$. For probability mass functions, this becomes$$\text{cdf}(x) = P(X\leq x) = \sum_{x_i\leq x} p(x_i)$$ Continuous random variablesContinuous random variables are functions such that $f: \mathbb{R} \rightarrow [0, \infty)$ is a **probability density**. Probability density functions are to continuous random variables what PMFs are to discrete random variables, though there are some important differences that can trip up even the most careful. They are defined as follows: the probability of $X$ taking a value between $a$ and $b$ is given by$$P(a \leq X \leq b) = \displaystyle\int_a^b f(x) dx$$where $f(x)\geq 0 \quad \forall x \in \mathbb{R}$, $f$ is piecewise continuous, and $\displaystyle\int_{-\infty}^\infty f(x) dx = 1$.The big mistake that people sometimes make is to think that $f(x)$ is a probability but it's not! The clue is in the name; $f(x)$ is a probability *density*, while $f(x) dx$ is a probability. This means you only get a probability from $f(x)$ once you integrate it. It also means that $f(x)$ has units of $1/x$. 
For example, if $x$ is wages, $f(x)$ has units of $\text{wages}^{-1}$.Cumulative distribution functions are also defined for pdfs:$$\text{cdf}(x) = P(X\leq x) = \int\limits^x_{-\infty}\! f(x')\, dx'$$ Distribution functionsLet's now see how code can help us when working with distributions, beginning with the probability mass function. As an example, let's take a look at the binomial distribution. This is defined as$$f(k; n, p) = \binom{n}{k} p^k q^{n-k}$$with $q=1-p$. Say we have a process with a 30% chance of success; $f$ tells us how likely it is to get $k$ successes out of $n$ independent trials.**scipy** has analytical functions for a really wide range of distributions and probability mass functions; you can [find them here](https://docs.scipy.org/doc/scipy/reference/stats.html). To get the binomial, we'll use `scipy.stats.binom`. There are two ways to call different distributions. You can declare a random variable object first, for example, `rv = binom(n, p)`, and then call `rv.pmf(k)` on it. Or you can call it all in one go via `binom.pmf(k, n, p)`. Here it is using the former:
###Code
n = 20
p = 0.3
rv = st.binom(n, p)
k = np.arange(0, 15)
# Plot
fig, ax = plt.subplots()
ax.plot(k, rv.pmf(k), 'bo', ms=8)
ax.vlines(k, 0, rv.pmf(k), colors='b', linestyles='-', lw=1)
ax.set_title(f'Binomial pmf: $n$={n}, $p$={p}', loc='left')
ax.set_xlabel('k')
ax.set_ylabel('Probability')
ax.set_xlim(0, None)
ax.set_ylim(0, None)
plt.show()
###Output
_____no_output_____
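###Markdown
For comparison, here is the all-in-one call mentioned above, which should give exactly the same pmf values as the frozen `rv` object (re-using the same `k`, `n`, and `p`):
###Code
# Direct call without creating a frozen random variable object first
st.binom.pmf(k, n, p)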
###Markdown
Likewise, we can access the **cumulative distribution function**:
###Code
fig, ax = plt.subplots()
ax.plot(k, rv.cdf(k))
ax.scatter(k, rv.cdf(k), s=50)
ax.axhline(1, color='k', alpha=0.7, linestyle='-.', lw=1)
ax.set_title(f'Binomial cdf: $n$={n}, $p$={p}', loc='left')
ax.set_xlabel('k')
ax.set_ylabel('Probability')
ax.set_xlim(0, None)
ax.set_ylim(0, 1);
###Output
_____no_output_____
###Markdown
Of course, **continuous random variables** are also covered. To get a wide range of pdfs, the commands are `scipy.stats.distributionname.pdf(x, parameters=)`.Let's see a couple of examples. The lognormal distribution is given by $f(x, s) = \frac{1}{sx\sqrt{2\pi}}\exp\left(-\frac{\ln^2(x)}{2s^2}\right)$ and the gamma by $f(x, a) = \frac{x^{a-1}e^{-x}}{\Gamma(a)}$.
###Code
s = 0.5
a = 2
x = np.linspace(0, 6, 500)
fig, ax = plt.subplots()
ax.plot(x, st.lognorm.pdf(x, s), label=f'Lognormal: s={s}')
ax.plot(x, st.gamma.pdf(x, a), label=f'Gamma: a={a}')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
ax.set_ylim(0, 1)
ax.set_xlim(0, 6)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Likewise, to get the cdf for a given distribution, the command is `scipy.stats.distributionname.cdf(x, parameters=)`. Here are the ones for the lognormal and gamma.
###Code
x = np.linspace(0, 6, 500)
fig, ax = plt.subplots()
ax.plot(x, st.lognorm.cdf(x, s), label=f'Lognormal: s={s}')
ax.plot(x, st.gamma.cdf(x, a), label=f'Gamma: a={a}')
ax.set_xlabel('x')
ax.set_ylabel('CDF')
ax.set_ylim(0, 1.2)
ax.set_xlim(0, 6)
ax.axhline(1, color='k', alpha=0.7, linestyle='-.', lw=1)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Other properties of PMFs and PDFsA range of functions are available for PMFs and PDFs in addition to the ones we've seen already. For a pmf or pdf, we can call `median`, `mean`, `var`, `std`, and so on. Let's see an example with two of the most useful: interval and percentile.`interval(alpha, ...)` gives the endpoints of the range around the median that contain alpha percent of the distribution. `ppf(q, ...)` gives the quantiles of a given distribution, defined as $F(x) = P(X\leq x) = q$.
###Code
x = np.linspace(-4, 4, 500)
y = st.norm.pdf(x)
# Get percentiles
quantiles = [0.25, 0.5, 0.75]
probs = [st.norm.ppf(q) for q in quantiles]
# Interval
x1, x2 = st.norm.interval(0.95)
cut_x = x[((x>x1) & (x<x2))]
cut_y = y[((x>x1) & (x<x2))]
# Plot
fig, ax = plt.subplots()
ax.plot(x, y)
for i, prob in enumerate(probs):
ax.plot([prob, prob], [0, st.norm.pdf(prob)], lw=0.8, color='k', alpha=0.4)
ax.annotate(
f'q={quantiles[i]}',
xy=(prob, st.norm.pdf(prob)),
xycoords="data",
xytext=(-10, 30),
textcoords="offset points",
arrowprops=dict(
arrowstyle="->", connectionstyle="angle3,angleA=0,angleB=-90"
),
# fontsize=12,
)
ax.fill_between(cut_x, 0, cut_y, alpha=0.2, label=r'95% interval')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
ax.set_xlim(-4, 4)
ax.set_ylim(0, 0.55)
ax.legend()
plt.show()
###Output
_____no_output_____
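###Markdown
As a quick sketch of the other methods mentioned above, here are `mean`, `median`, `var`, and `std` evaluated for the standard normal and for the binomial defined earlier (re-using `n` and `p`):
###Code
# Summary statistics straight from the scipy distribution objects
print("Std normal:", st.norm.mean(), st.norm.median(), st.norm.var(), st.norm.std())
print("Binomial:  ", st.binom.mean(n, p), st.binom.median(n, p), st.binom.var(n, p), st.binom.std(n, p))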
###Markdown
Randomness for computersComputers love instruction and hate ambiguity. As such, randomness is quite tricky for them. So tricky, that no computer is able to produce *perfectly* random numbers but instead only has a **pseudo-random number generator**, or PRNG. As far as humans go, these are pretty good and modern ones are so good that using them is unlikely to be an issue unless you really are working at the frontiers of the science of randomness.**numpy** uses a PRNG that's a 64-bit Permuted Congruential Generator, though you can access other generators too. Here's how to call it to generate $x \thicksim \mathcal{U}(0,1)$,
###Code
from numpy.random import default_rng
rng = default_rng()
rng.random(size=2)
###Output
_____no_output_____
###Markdown
In the above, `rng` is an object that you can call many random number generating functions on. Here we just asked for 2 values drawn from between 0 and 1. If you are using **pandas** for your analysis, then it comes with random sampling methods built in under the guise of `df.sample()` for a dataframe `df`. This has keywords for number of samples (`n`) **or** fraction of all rows to sample (`frac`) and whether to use `weights=`. You can also pass a PRNG to the `.sample()` method.Another really useful random generator provides integers and is called `integers`. Let's see this but in the case where we're asking for a more elaborately shaped output array, a 3x3x2 dimensional tensor:
###Code
min_int, max_int = 1, 20
rng.integers(min_int, max_int, size=(3, 3, 2))
###Output
_____no_output_____
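###Markdown
A minimal sketch of the **pandas** sampling mentioned above, using a made-up toy dataframe (and assuming a recent pandas version that accepts a `Generator` as `random_state`): `n` or `frac` controls how much is sampled, `weights` reweights rows, and the PRNG is passed via `random_state`.
###Code
toy_df = pd.DataFrame({"x": np.arange(10), "w": np.arange(10)})
# three rows, drawn with probability proportional to column w, using the rng object from above
toy_df.sample(n=3, weights="w", random_state=rng)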
###Markdown
One random function that is incredibly useful is `choice`, which returns a random selection from another type of object. Here, we show this by passing a list of letters and asking for two of them to be picked randomly:
###Code
rng.choice(['a', 'b', 'c', 'd', 'e', 'f'], size=2)
###Output
_____no_output_____
###Markdown
This choice can also be made with a given probability. Let's make a very large number of draws with an exponentially falling probability and see what we get!
###Code
num_draws = 1000
# Create 6 values spread across several orders of magnitude
prob = np.logspace(0, -3, num=6)
# Normalise this to 1
prob = prob/sum(prob)
# Choose the letters
letter_choices = rng.choice(['a', 'b', 'c', 'd', 'e', 'f'], size=num_draws, p=prob)
###Output
_____no_output_____
###Markdown
To make it easy to see what happened, we'll use the in-built collections library's `Counter` function to go from a long list of all of the letters to a dictionary of letters and counts of how frequently they occurred. We'd like to have the bars in order but `Counter` doesn't do that automatically, so we have to do a few things around the `counts` dictionary to change this.
###Code
from collections import Counter, OrderedDict
counts = OrderedDict(sorted(Counter(letter_choices).items()))
plt.bar(counts.keys(), counts.values());
###Output
_____no_output_____
###Markdown
As expected, 'a' was chosen many more times than 'b', and so on. In fact, if we divided the counts by `num_draws`, we would find that the probability of each letter was converging toward the probabilities we provided in `prob`.Another useful random function to know about is `shuffle`, and you can probably guess what it does! But note that it does the shuffling to the list you put in, rather than returning a new, modified list. Here's an example:
###Code
plain_list = ['This', 'list', 'is', 'well', 'ordered.']
rng.shuffle(plain_list)
plain_list
###Output
_____no_output_____
###Markdown
ReproducibilityIf you need to create random numbers reproducibly, then you can do it by setting a seed value like this:
###Code
from numpy.random import Generator, PCG64
seed_for_prng = 78557
prng = Generator(PCG64(seed_for_prng))
prng.integers(0, 10, size=2)
prng = Generator(PCG64(seed_for_prng))
prng.integers(0, 10, size=2)
###Output
_____no_output_____
###Markdown
The seed tells the generator where to start (PCG64 is the default generator), so by passing the same seed in we can make the random numbers begin in the same place. The `prng` above can also be passed to some functions as a keyword argument. Random numbers drawn from distributionsUsing **numpy**, we can draw samples from distributions using the `prng.distribution` syntax. One of the most common distributions you might like to draw from is the uniform, for example$$x \thicksim \mathcal{U}(0, 10)$$with, here, a minimum of 0 and a maximum of 10. Here's the code:
###Code
prng.uniform(low=0, high=10, size=3)
###Output
_____no_output_____
###Markdown
Let's see how to draw from one other important distribution function: the Gaussian, or normal, distribution $x \thicksim \mathcal{N}\left(\mu, \sigma\right)$ and check that it looks right. We'll actually do two different ones: a standard normal, with $\mu=0$ and $\sigma=1$, and a shifted, relaxed one with different parameters.
###Code
def gauss(x):
"""Analytical Gaussian."""
return (1/np.sqrt(2*np.pi))*np.exp(-0.5*x**2)
# Make the random draws
num_draws = 10000
vals = prng.standard_normal(num_draws)
# Get analytical solution
x_axis_vals = np.linspace(-3, 3, 300)
analytic_y = gauss(x_axis_vals)
# Random draws of shifted/flatter dist
mu = 0.5
sigma = 2
vals_shift = prng.normal(loc=mu, scale=sigma, size=num_draws)
fig, ax = plt.subplots()
ax.plot(x_axis_vals, analytic_y, label='Std norm: analytical', lw=3)
ax.hist(vals, bins=50, label='Std norm: generated', density=True, alpha=0.8)
ax.hist(vals_shift, bins=50, label=f'Norm: $\mu$={mu}, $\sigma$={sigma}', density=True, alpha=0.8)
ax.legend(frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
The Monte Carlo MethodMonte Carlo is the name of a part of Monaco that harbours a famous casino, yes, but it's also the name given to a bunch of techniques that rely on generating random numbers in order to solve problems. It's a really useful technique that entire textbooks cover and we can't hope to give it the love and attention it requires here, covering as it does Bayesian statistics, random walks, Markov switching models, Markov Chain Monte Carlo, bootstrapping, and optimisation! But what we can do is take a quick look at the very, very core code tools that can support these applications. The bottom line is that between the drawing of random variables from given **scipy** distributions we've already seen, the use of `prng.choice()`, and the use of `prng.uniform`, a lot of Monte Carlo methods are covered.We already covered drawing random numbers from distributions already included in **scipy**.`prng.uniform` is helpful in the following case: in the (extremely unlikely) event that there isn't a pre-built distribution available for a case where you know the analytical expression of a PDF and its CDF, a quick way to get random numbers distributed according to that PDF is to plug random numbers into the inverse cumulative distribution function. ie you plug random numbers $r$ into $\text{cdf}^{-1}(r)$ in order to generate $x \thicksim \text{pdf}$. The random numbers you plug in must come from a uniform distribution between 0 and 1.`prng.choice()` comes into its own for simulation, one of the many applications of Monte Carlo techniques in economics. (I mean simulation loosely here; it could be an agent-based model, it could be simulating an econometric relationship.) Let's do a very simple and canonical example of a simulation using `.choice()`: rolling a 6-sided die.....but we want to make it a *bit* more exciting than that! Let's see two die, one that's fair (equal probability of getting any value in 1 to 6) and one that's loaded (in this case, we'll make a 6 twice as likely as other values).For a naive estimate of the probability of a particular die score based on simulation, it's going to be$$\hat{p}_\omega = \frac{\text{Counts}_\omega}{\text{Total counts}}$$with $\omega \in \{1, 2, 3, 4, 5, 6\}$.To simulate this, we'll use the `choice` function fed with the six values, 1 to 6, on some dice. Then we'll count the occurrences of each, creating a dictionary of keys and values with `Counter`, and then plot those.To work out the (estimate of) probability based on the simulation, we've divided the number of throws per value by the total number of throws. You can see that with so many throws, there's quite a wedge between the chance of obtaining a six in both cases. Meanwhile, the fair die is converging to the dotted line, which is $1/6$. Note that because of the individual probabilities summing to unity, a higher probability of a six on the loaded die means that values 1 to 5 must have a lower probability than with the fair die; and you can see that emerging in the chart too.In doing this for every possible outcome, we're effectively estimating a probability mass function.
###Code
throws = 1000
die_vals = np.arange(1, 7)
probabilities = [1/7, 1/7, 1/7, 1/7, 1/7, 2/7]
fair_throws = prng.choice(die_vals, size=throws)
load_throws = prng.choice(die_vals, size=throws, p=probabilities)
def throw_list_to_array(throw_list):
# Count frequencies of what's in throw list but order the dictionary keys
counts_dict = OrderedDict(sorted(Counter(throw_list).items()))
# Turn the key value pairs into a numpy array
array = np.array([list(counts_dict.keys()), list(counts_dict.values())], dtype=float)
# Divide counts per value by num throws
array[1] = array[1]/len(throw_list)
return array
counts_fair = throw_list_to_array(fair_throws)
counts_load = throw_list_to_array(load_throws)
fig, ax = plt.subplots()
ax.scatter(counts_fair[0],
counts_fair[1],
color='b',
label='Fair')
ax.scatter(counts_load[0],
counts_load[1],
color='r',
label='Loaded')
ax.set_xlabel('Die value')
ax.set_ylabel('Probability')
ax.axhline(1/6, color='k', alpha=0.3, linestyle='-.', lw=0.5)
ax.legend(frameon=True, loc='upper left')
ax.set_ylim(0., 0.4);
###Output
_____no_output_____
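###Markdown
Before moving on, here is a minimal sketch of the inverse-cdf trick described above, using an exponential distribution with rate $\lambda=2$ (an arbitrary choice) because its cdf can be inverted by hand: plugging uniform draws into $\text{cdf}^{-1}(r) = -\ln(1-r)/\lambda$ should give draws whose mean is close to $1/\lambda$.
###Code
lam = 2.0
r = prng.uniform(0, 1, size=100_000)
x_draws = -np.log(1 - r) / lam  # inverse of cdf(x) = 1 - exp(-lam * x)
print(x_draws.mean(), 1 / lam)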
###Markdown
Let's estimate the cumulative distribution functions for our dice by applying the `cumsum` function to the estimated probabilities:
###Code
fig, ax = plt.subplots()
ax.plot(counts_fair[0],
np.cumsum(counts_fair[1]),
color='b',
label='Fair',
marker='o',
ms=10)
ax.plot(counts_load[0],
np.cumsum(counts_load[1]),
color='r',
label='Loaded',
marker='o',
ms=10)
ax.set_xlabel('Die value')
ax.set_ylabel('Cumulative distribution function')
ax.axhline(1, color='k', alpha=0.3, linestyle='-.', lw=0.5)
ax.legend(frameon=True, loc='lower right')
ax.set_ylim(0., 1.2);
###Output
_____no_output_____
###Markdown
We can see that the cumulative distribution function also tells a story about what's going on; namely, there is a lower gradient up to $i=6$, followed by a higher gradient. The two distributions are visually distinct. Fitting a probability distributionOften we are in a situation where we are working with empirical data and we want to know if a particular distribution function provides a good fit for a variable. **scipy** has a neat 'fit' function that can do this for us, given a guess at a distribution. This fit is computed by maximising a log-likelihood function, with a penalty applied for samples outside of range of the distribution.Let's see this in action with an example using synthetic data created from a noisy normal distribution:
###Code
size = 1000
μ, σ = 2, 1.5
# Generate normally dist data
data = prng.normal(loc=μ, scale=σ, size=size)
# Add noise
data = data + prng.uniform(-.5, .5, size)
# Show first 5 entries
data[:5]
# Plot a histogram of the data and fit a normal distribution
sns.distplot(data, bins=60, kde=False, fit=st.norm)
# Get the fitted parameters as computed by scipy
(est_loc, est_scale) = st.norm.fit(data)
plt.legend([f'Est. normal dist. ($\hat{{\mu}}=${est_loc:.2f} and $\hat{{\sigma}}=${est_scale:.2f} )',
'Histogram'],
loc='upper left', frameon=False)
fig = plt.figure()
res = st.probplot(data, plot=plt)
plt.show();
###Output
_____no_output_____
###Markdown
Probability and Random Processes IntroductionIn this chapter, you'll learn about how to use randomness and probability with code.If you're running this code (either by copying and pasting it, or by downloading it using the icons at the top of the page), you may need to install the packages first. There's a brief guide on installing packages in the Chapter on {ref}`code-preliminaries`. ImportsFirst we need to import the packages we'll be using
###Code
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# Set max rows displayed for readability
pd.set_option("display.max_rows", 6)
# Plot settings
plt.style.use(
"https://github.com/aeturrell/coding-for-economists/raw/main/plot_style.txt"
)
###Output
_____no_output_____
###Markdown
Probability (definitions)Let's get the jargon and definitions out of the way first, then we'll find out a bit about random numbers in code, then we'll actually see how to *use* random numbers for probability!Any probabilistic event can be considered to have three components: the sample space of possible outcomes $\Omega$, the set of possible events $\mathcal{F}$, and a probability measure $P$. Furthermore, $A$ is often used to denote a subset of $\Omega$ and $A^c$ the complement of $A$, while individual events in $\Omega$ are $\omega$. In the classic example of rolling a 6-sided fair die once, $\Omega = \{1, 2, 3, 4, 5, 6\}$. If $A = \{1, 2, 3\}$ then, by definition of $\Omega$, $A^c = \{4, 5, 6\}$. The probability measure of any sample space satisfies $P(\Omega)=1$ and $P(\varnothing)$ = 0.The most important examples of probability that arise in economics are **continuous random variables** and **discrete random variables**. A random variable is a function $X: \Omega \rightarrow \mathbb{R}$ such that $\{ \omega \in \Omega: X(w) \leq x\} \in \mathcal{F}$ for each $x\in\mathbb{R}$. All this is saying is that for every possible outcome, the random variable is a mapping of that outcome into a well-defined space of real numbers. It makes the connection between outcomes, events, and real numbers.Now we'll go on to more practical matters: discrete and continuous random variables. Discrete random variablesA random variable is discrete if it only takes values in a countable subset of $\mathbb{R}$; think the integers, or $x\in\{0, 1\}$. The distribution of such values is given by the **probability mass function**, or pmf. The pmf is an object that tells us the probabilty mass given to specific outcomes. The more precise defintion is$$p(x_i) = P(X=x_i) = P(\underbrace{\{\omega\in \Omega\ |\ X(\omega) = x_i\}}_{\text{set of outcomes resulting in}\ X=x_i}).$$It has a few key properties. $p:\mathbb{R} \rightarrow [0, 1]$, the probability of all outcomes sum to 1, ie $\displaystyle{\sum_{x_i} p(x_i)}=1$, the probabilities satisfy $p(x_i) \geq 0 \quad\forall x_i$, and $P(X \in A) = \displaystyle\sum_{x_i \in A} p(x_i)$. A fair six-sided die is the canonical example.Another useful object is the **cumulative distribution function**, which is defined generally as $\text{cdf}(x) = P(X \leq x)\quad \forall x \in \mathbb{R}$. For probability mass functions, this becomes$$\text{cdf}(x) = P(X\leq x) = \sum_{x_i\leq x} p(x_i)$$ Continuous random variablesContinuous random variables are functions such that $f: \mathbb{R} \rightarrow [0, \infty)$ is a **probability density**. Probability density functions are to continuous random variables what PMFs are to discrete random variables, though there are some important differences that can trip up even the most careful. They are defined as follows: the probability of $X$ taking a value betwen $a$ and $b$ is given by$$P(a \leq X \leq b) = \displaystyle\int_a^b f(x) dx$$where $f(x)\geq 0 \quad \forall x \in \mathbb{R}$, $f$ is piecewise continuous, and $\displaystyle\int_{-\infty}^\infty f(x) dx = 1$.The big mistake that people sometimes make is to think that $f(x)$ is a probability but it's not! The clue is in the name; $f(x)$ is a probability *density*, while $f(x) dx$ is a probability. This means you only get a probability from $f(x)$ once you integrate it. It also means that $f(x)$ has units of $1/x$. 
For example, if $x$ is wages, $f(x)$ has units of $\text{wages}^{-1}$.Cumulative distribution functions are also defined for pdfs:$$\text{cdf}(x) = P(X\leq x) = \int\limits^x_{-\infty}\! f(x')\, dx'$$ Distribution functionsLet's now see how code can help us when working with distributions, beginning with the probability mass function. As an example, let's take a look at the binomial distribution. This is defined as$$f(k; n, p) = \binom{n}{k} p^k q^{n-k}$$with $q=1-p$. Say we have a process with a 30% chance of success; $f$ tells us how likely it is to get $k$ successes out of $n$ independent trials.**scipy** has analytical functions for a really wide range of distributions and probability mass functions; you can [find them here](https://docs.scipy.org/doc/scipy/reference/stats.html). To get the binomial, we'll use `scipy.stats.binom`. There are two ways to call different distributions. You can declare a random variable object first, for example, `rv = binom(n, p)`, and then call `rv.pmf(k)` on it. Or you can call it all in one go via `binom.pmf(k, n, p)`. Here it is using the former:
###Code
n = 20
p = 0.3
rv = st.binom(n, p)
k = np.arange(0, 15)
# Plot
fig, ax = plt.subplots()
ax.plot(k, rv.pmf(k), "bo", ms=8)
ax.vlines(k, 0, rv.pmf(k), colors="b", linestyles="-", lw=1)
ax.set_title(f"Binomial pmf: $n$={n}, $p$={p}", loc="left")
ax.set_xlabel("k")
ax.set_ylabel("Probability")
ax.set_xlim(0, None)
ax.set_ylim(0, None)
plt.show()
###Output
_____no_output_____
###Markdown
Likewise, we can access the **cumulative distribution function**:
###Code
fig, ax = plt.subplots()
ax.plot(k, rv.cdf(k))
ax.scatter(k, rv.cdf(k), s=50)
ax.axhline(1, color="k", alpha=0.7, linestyle="-.", lw=1)
ax.set_title(f"Binomial cdf: $n$={n}, $p$={p}", loc="left")
ax.set_xlabel("k")
ax.set_ylabel("Probability")
ax.set_xlim(0, None)
ax.set_ylim(0, 1);
###Output
_____no_output_____
###Markdown
Of course, **continuous random variables** are also covered. To get a wide range of pdfs, the commands are `scipy.stats.distributionname.pdf(x, parameters=)`.Let's see a couple of examples. The lognormal distribution is given by $f(x, s) = \frac{1}{sx\sqrt{2\pi}}\exp\left(-\frac{\ln^2(x)}{2s^2}\right)$ and the gamma by $f(x, a) = \frac{x^{a-1}e^{-x}}{\Gamma(a)}$.
###Code
s = 0.5
a = 2
x = np.linspace(0, 6, 500)
fig, ax = plt.subplots()
ax.plot(x, st.lognorm.pdf(x, s), label=f"Lognormal: s={s}")
ax.plot(x, st.gamma.pdf(x, a), label=f"Gamma: a={a}")
ax.set_xlabel("x")
ax.set_ylabel("PDF")
ax.set_ylim(0, 1)
ax.set_xlim(0, 6)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Likewise, to get the cdf for a given distribution, the command is `scipy.stats.distributionname.cdf(x, parameters=)`. Here are the ones for the lognormal and gamma.
###Code
x = np.linspace(0, 6, 500)
fig, ax = plt.subplots()
ax.plot(x, st.lognorm.cdf(x, s), label=f"Lognormal: s={s}")
ax.plot(x, st.gamma.cdf(x, a), label=f"Gamma: a={a}")
ax.set_xlabel("x")
ax.set_ylabel("CDF")
ax.set_ylim(0, 1.2)
ax.set_xlim(0, 6)
ax.axhline(1, color="k", alpha=0.7, linestyle="-.", lw=1)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Other properties of PMFs and PDFsA range of functions are available for PMFs and PDFs in addition to the ones we've seen already. For a pmf or pdf, we can call `median`, `mean`, `var`, `std`, and so on. Let's see an example with two of the most useful: interval and percentile.`interval(alpha, ...)` gives the endpoints of the range around the median that contain alpha percent of the distribution. `ppf(q, ...)` gives the quantiles of a given distribution, defined as $F(x) = P(X\leq x) = q$.
###Code
x = np.linspace(-4, 4, 500)
y = st.norm.pdf(x)
# Get percentiles
quantiles = [0.25, 0.5, 0.75]
probs = [st.norm.ppf(q) for q in quantiles]
# Interval
x1, x2 = st.norm.interval(0.95)
cut_x = x[((x > x1) & (x < x2))]
cut_y = y[((x > x1) & (x < x2))]
# Plot
fig, ax = plt.subplots()
ax.plot(x, y)
for i, prob in enumerate(probs):
ax.plot([prob, prob], [0, st.norm.pdf(prob)], lw=0.8, color="k", alpha=0.4)
ax.annotate(
f"q={quantiles[i]}",
xy=(prob, st.norm.pdf(prob)),
xycoords="data",
xytext=(-10, 30),
textcoords="offset points",
arrowprops=dict(arrowstyle="->", connectionstyle="angle3,angleA=0,angleB=-90"),
# fontsize=12,
)
ax.fill_between(cut_x, 0, cut_y, alpha=0.2, label=r"95% interval")
ax.set_xlabel("x")
ax.set_ylabel("PDF")
ax.set_xlim(-4, 4)
ax.set_ylim(0, 0.55)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Randomness for computersComputers love instruction and hate ambiguity. As such, randomness is quite tricky for them. So tricky, that no computer is able to produce *perfectly* random numbers but instead only has a **pseudo-random number generator**, or PRNG. As far as humans go, these are pretty good and modern ones are so good that using them is unlikely to be an issue unless you really are working at the frontiers of the science of randomness.**numpy** uses a PRNG that's a 64-bit Permuted Congruential Generator, though you can access other generators too. Here's how to call it to generate $x \thicksim \mathcal{U}(0,1)$,
###Code
from numpy.random import default_rng
rng = default_rng()
rng.random(size=2)
###Output
_____no_output_____
###Markdown
In the above, `rng` is an object that you can call many random number generating functions on. Here we just asked for 2 values drawn from between 0 and 1. If you are using **pandas** for your analysis, then it comes with random sampling methods built in under the guise of `df.sample()` for a dataframe `df`. This has keywords for number of samples (`n`) **or** fraction of all rows to sample (`frac`) and whether to use `weights=`. You can also pass a PRNG to the `.sample()` method.Another really useful random generator provides integers and is called `integers`. Let's see this but in the case where we're asking for a more elaborately shaped output array, a 3x3x2 dimensional tensor:
###Code
min_int, max_int = 1, 20
rng.integers(min_int, max_int, size=(3, 3, 2))
###Output
_____no_output_____
###Markdown
One random function that is incredibly useful is `choice`, which returns a random selection from another type of object. Here, we show this by passing a list of letters and asking for two of them to be picked randomly:
###Code
rng.choice(["a", "b", "c", "d", "e", "f"], size=2)
###Output
_____no_output_____
###Markdown
This choice can also be made with a given probability. Let's make a very large number of draws with an exponentially falling probability and see what we get!
###Code
num_draws = 1000
# Create 6 values spread across several orders of magnitude
prob = np.logspace(0, -3, num=6)
# Normalise this to 1
prob = prob / sum(prob)
# Choose the letters
letter_choices = rng.choice(["a", "b", "c", "d", "e", "f"], size=num_draws, p=prob)
###Output
_____no_output_____
###Markdown
To make it easy to see what happened, we'll use the in-built collections library's `Counter` function to go from a long list of all of the letters to a dictionary of letters and counts of how frequently they occurred. We'd like to have the bars in order but `Counter` doesn't do that automatically, so we have to do a few things around the `counts` dictionary to change this.
###Code
from collections import Counter, OrderedDict
counts = OrderedDict(sorted(Counter(letter_choices).items()))
plt.bar(counts.keys(), counts.values());
###Output
_____no_output_____
###Markdown
As expected, 'a' was chosen many more times than 'b', and so on. In fact, if we divided the counts by `num_draws`, we would find that the probability of each letter was converging toward the probabilities we provided in `prob`.Another useful random function to know about is `shuffle`, and you can probably guess what it does! But note that it does the shuffling to the list you put in, rather than returning a new, modified list. Here's an example:
###Code
plain_list = ["This", "list", "is", "well", "ordered."]
rng.shuffle(plain_list)
plain_list
###Output
_____no_output_____
###Markdown
ReproducibilityIf you need to create random numbers reproducibly, then you can do it by setting a seed value like this:
###Code
from numpy.random import Generator, PCG64
seed_for_prng = 78557
prng = Generator(PCG64(seed_for_prng))
prng.integers(0, 10, size=2)
prng = Generator(PCG64(seed_for_prng))
prng.integers(0, 10, size=2)
###Output
_____no_output_____
###Markdown
The seed tells the generator where to start (PCG64 is the default generator), so by passing the same seed in we can make the random numbers begin in the same place. The `prng` above can also be passed to some functions as a keyword argument. Random numbers drawn from distributionsUsing **numpy**, we can draw samples from distributions using the `prng.distribution` syntax. One of the most common distributions you might like to draw from is the uniform, for example$$x \thicksim \mathcal{U}(0, 10)$$with, here, a minimum of 0 and a maximum of 10. Here's the code:
###Code
prng.uniform(low=0, high=10, size=3)
###Output
_____no_output_____
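###Markdown
As a short sketch of passing the seeded generator as a keyword argument (assuming a reasonably recent **scipy**), the `rvs` method of **scipy** distributions accepts it via `random_state`:
###Code
# Three standard normal draws whose randomness comes from the seeded prng object
st.norm.rvs(size=3, random_state=prng)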
###Markdown
Let's see how to draw from one other important distribution function: the Gaussian, or normal, distribution $x \thicksim \mathcal{N}\left(\mu, \sigma\right)$ and check that it looks right. We'll actually do two different ones: a standard normal, with $\mu=0$ and $\sigma=1$, and a shifted, relaxed one with different parameters.
###Code
def gauss(x):
"""Analytical Gaussian."""
return (1 / np.sqrt(2 * np.pi)) * np.exp(-0.5 * x ** 2)
# Make the random draws
num_draws = 10000
vals = prng.standard_normal(num_draws)
# Get analytical solution
x_axis_vals = np.linspace(-3, 3, 300)
analytic_y = gauss(x_axis_vals)
# Random draws of shifted/flatter dist
mu = 0.5
sigma = 2
vals_shift = prng.normal(loc=mu, scale=sigma, size=num_draws)
fig, ax = plt.subplots()
ax.plot(x_axis_vals, analytic_y, label="Std norm: analytical", lw=3)
ax.hist(vals, bins=50, label="Std norm: generated", density=True, alpha=0.8)
ax.hist(
vals_shift,
bins=50,
label=f"Norm: $\mu$={mu}, $\sigma$={sigma}",
density=True,
alpha=0.8,
)
ax.legend(frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
The Monte Carlo MethodMonte Carlo is the name of a part of Monaco that harbours a famous casino, yes, but it's also the name given to a bunch of techniques that rely on generating random numbers in order to solve problems. It's a really useful technique that entire textbooks cover and we can't hope to give it the love and attention it requires here, covering as it does Bayesian statistics, random walks, Markov switching models, Markov Chain Monte Carlo, bootstrapping, and optimisation! But what we can do is take a quick look at the very, very core code tools that can support these applications. The bottom line is that between the drawing of random variables from given **scipy** distributions we've already seen, the use of `prng.choice()`, and the use of `prng.uniform`, a lot of Monte Carlo methods are covered.We already covered drawing random numbers from distributions already included in **scipy**.`prng.uniform` is helpful in the following case: in the (extremely unlikely) event that there isn't a pre-built distribution available for a case where you know the analytical expression of a PDF and its CDF, a quick way to get random numbers distributed according to that PDF is to plug random numbers into the inverse cumulative distribution function. ie you plug random numbers $r$ into $\text{cdf}^{-1}(r)$ in order to generate $x \thicksim \text{pdf}$. The random numbers you plug in must come from a uniform distribution between 0 and 1.`prng.choice()` comes into its own for simulation, one of the many applications of Monte Carlo techniques in economics. (I mean simulation loosely here; it could be an agent-based model, it could be simulating an econometric relationship.) Let's do a very simple and canonical example of a simulation using `.choice()`: rolling a 6-sided die.....but we want to make it a *bit* more exciting than that! Let's see two die, one that's fair (equal probability of getting any value in 1 to 6) and one that's loaded (in this case, we'll make a 6 twice as likely as other values).For a naive estimate of the probability of a particular die score based on simulation, it's going to be$$\hat{p}_\omega = \frac{\text{Counts}_\omega}{\text{Total counts}}$$with $\omega \in \{1, 2, 3, 4, 5, 6\}$.To simulate this, we'll use the `choice` function fed with the six values, 1 to 6, on some dice. Then we'll count the occurrences of each, creating a dictionary of keys and values with `Counter`, and then plot those.To work out the (estimate of) probability based on the simulation, we've divided the number of throws per value by the total number of throws. You can see that with so many throws, there's quite a wedge between the chance of obtaining a six in both cases. Meanwhile, the fair die is converging to the dotted line, which is $1/6$. Note that because of the individual probabilities summing to unity, a higher probability of a six on the loaded die means that values 1 to 5 must have a lower probability than with the fair die; and you can see that emerging in the chart too.In doing this for every possible outcome, we're effectively estimating a probability mass function.
###Code
throws = 1000
die_vals = np.arange(1, 7)
probabilities = [1 / 7, 1 / 7, 1 / 7, 1 / 7, 1 / 7, 2 / 7]
fair_throws = prng.choice(die_vals, size=throws)
load_throws = prng.choice(die_vals, size=throws, p=probabilities)
def throw_list_to_array(throw_list):
# Count frequencies of what's in throw list but order the dictionary keys
counts_dict = OrderedDict(sorted(Counter(throw_list).items()))
# Turn the key value pairs into a numpy array
array = np.array(
[list(counts_dict.keys()), list(counts_dict.values())], dtype=float
)
# Divide counts per value by num throws
array[1] = array[1] / len(throw_list)
return array
counts_fair = throw_list_to_array(fair_throws)
counts_load = throw_list_to_array(load_throws)
fig, ax = plt.subplots()
ax.scatter(counts_fair[0], counts_fair[1], color="b", label="Fair")
ax.scatter(counts_load[0], counts_load[1], color="r", label="Loaded")
ax.set_xlabel("Die value")
ax.set_ylabel("Probability")
ax.axhline(1 / 6, color="k", alpha=0.3, linestyle="-.", lw=0.5)
ax.legend(frameon=True, loc="upper left")
ax.set_ylim(0.0, 0.4);
###Output
_____no_output_____
###Markdown
Let's estimate the cumulative distribution functions for our dice by applying the `cumsum` function to the estimated probabilities:
###Code
fig, ax = plt.subplots()
ax.plot(
counts_fair[0],
np.cumsum(counts_fair[1]),
color="b",
label="Fair",
marker="o",
ms=10,
)
ax.plot(
counts_load[0],
np.cumsum(counts_load[1]),
color="r",
label="Loaded",
marker="o",
ms=10,
)
ax.set_xlabel("Die value")
ax.set_ylabel("Cumulative distribution function")
ax.axhline(1, color="k", alpha=0.3, linestyle="-.", lw=0.5)
ax.legend(frameon=True, loc="lower right")
ax.set_ylim(0.0, 1.2);
###Output
_____no_output_____
###Markdown
We can see that the cumulative distribution function also tells a story about what's going on; namely, there is a lower gradient up to $i=6$, followed by a higher gradient. The two distributions are visually distinct. Fitting a probability distributionOften we are in a situation where we are working with empirical data and we want to know if a particular distribution function provides a good fit for a variable. **scipy** has a neat 'fit' function that can do this for us, given a guess at a distribution. This fit is computed by maximising a log-likelihood function, with a penalty applied for samples outside of range of the distribution.Let's see this in action with an example using synthetic data created from a noisy normal distribution:
###Code
size = 1000
μ, σ = 2, 1.5
# Generate normally dist data
data = prng.normal(loc=μ, scale=σ, size=size)
# Add noise
data = data + prng.uniform(-0.5, 0.5, size)
# Show first 5 entries
data[:5]
# Plot a histogram of the data and fit a normal distribution
sns.distplot(data, bins=60, kde=False, fit=st.norm)
# Get the fitted parameters as computed by scipy
(est_loc, est_scale) = st.norm.fit(data)
plt.legend(
[
f"Est. normal dist. ($\hat{{\mu}}=${est_loc:.2f} and $\hat{{\sigma}}=${est_scale:.2f} )",
"Histogram",
],
loc="upper left",
frameon=False,
)
fig = plt.figure()
res = st.probplot(data, plot=plt)
plt.show();
###Output
_____no_output_____
###Markdown
Probability and Random Processes IntroductionIn this chapter, you'll learn about how to use randomness and probability with code.If you're running this code (either by copying and pasting it, or by downloading it using the icons at the top of the page), you may need to install the packages it uses by, for example, running `pip install packagename` on your computer's command line. (If you're not sure what a command line is, take a quick look at the basics of coding chapter.) ImportsFirst we need to import the packages we'll be using
###Code
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# Set max rows displayed for readability
pd.set_option("display.max_rows", 6)
# Plot settings
plt.style.use(
"https://github.com/aeturrell/coding-for-economists/raw/main/plot_style.txt"
)
###Output
_____no_output_____
###Markdown
Probability (definitions)Let's get the jargon and definitions out of the way first, then we'll find out a bit about random numbers in code, then we'll actually see how to *use* for probability!Any probabilistic event can be considered to have three components: the sample space of possible outcomes $\Omega$, the set of possible events $\mathcal{F}$, and a probability measure $P$. Furthermore, $A$ is often used to denote a subset of $\Omega$ and $A^c$ the complement of $A$, while individual events in $\Omega$ are $\omega$. In the classic example of rolling a 6-sided fair die once, $\Omega = \{1, 2, 3, 4, 5, 6\}$. If $A = \{1, 2, 3\}$ then, by definition of $\Omega$, $A^c = \{4, 5, 6\}$. The probability measure of any sample space satisfies $P(\Omega)=1$ and $P(\varnothing)$ = 0.The most important examples of probability that arise in economics are **continuous random variables** and **discrete random variables**. A random variable is a function $X: \Omega \rightarrow \mathbb{R}$ such that $\{ \omega \in \Omega: X(w) \leq x\} \in \mathcal{F}$ for each $x\in\mathbb{R}$. All this is saying is that for every possible outcome, the random variable is a mapping of that outcome into a well-defined space of real numbers. It makes the connection between outcomes, events, and real numbers.Now we'll go on to more practical matters: discrete and continuous random variables. Discrete random variablesA random variable is discrete if it only takes values in a countable subset of $\mathbb{R}$; think the integers, or $x\in\{0, 1\}$. The distribution of such values is given by the **probability mass function**, or pmf. The pmf is an object that tells us the probabilty mass given to specific outcomes. The more precise defintion is$$p(x_i) = P(X=x_i) = P(\underbrace{\{\omega\in \Omega\ |\ X(\omega) = x_i\}}_{\text{set of outcomes resulting in}\ X=x_i}).$$It has a few key properties. $p:\mathbb{R} \rightarrow [0, 1]$, the probability of all outcomes sum to 1, ie $\displaystyle{\sum_{x_i} p(x_i)}=1$, the probabilities satisfy $p(x_i) \geq 0 \quad\forall x_i$, and $P(X \in A) = \displaystyle\sum_{x_i \in A} p(x_i)$. A fair six-sided die is the canonical example.Another useful object is the **cumulative distribution function**, which is defined generally as $\text{cdf}(x) = P(X \leq x)\quad \forall x \in \mathbb{R}$. For probability mass functions, this becomes$$\text{cdf}(x) = P(X\leq x) = \sum_{x_i\leq x} p(x_i)$$ Continuous random variablesContinuous random variables are functions such that $f: \mathbb{R} \rightarrow [0, \infty)$ is a **probability density**. Probability density functions are to continuous random variables what PMFs are to discrete random variables, though there are some important differences that can trip up even the most careful. They are defined as follows: the probability of $X$ taking a value betwen $a$ and $b$ is given by$$P(a \leq X \leq b) = \displaystyle\int_a^b f(x) dx$$where $f(x)\geq 0 \quad \forall x \in \mathbb{R}$, $f$ is piecewise continuous, and $\displaystyle\int_{-\infty}^\infty f(x) dx = 1$.The big mistake that people sometimes make is to think that $f(x)$ is a probability but it's not! The clue is in the name; $f(x)$ is a probability *density*, while $f(x) dx$ is a probability. This means you only get a probability from $f(x)$ once you integrate it. It also means that $f(x)$ has units of $1/x$. 
For example, if $x$ is wages, $f(x)$ has units of $\text{wages}^{-1}$.Cumulative distribution functions are also defined for pdfs:$$\text{cdf}(x) = P(X\leq x) = \int\limits^x_{-\infty}\! f(x')\, dx'$$ Distribution functionsLet's now see how code can help us when working with distributions, beginning with the probability mass function. As an example, let's take a look at the binomial distribution. This is defined as$$f(k; n, p) = \binom{n}{k} p^k q^{n-k}$$with $q=1-p$. Say we have a process with a 30% chance of success; $f$ tells us how likely it is to get $k$ successes out of $n$ independent trials.**scipy** has analytical functions for a really wide range of distributions and probability mass functions; you can [find them here](https://docs.scipy.org/doc/scipy/reference/stats.html). To get the binomial, we'll use `scipy.stats.binom`. There are two ways to call different distributions. You can declare a random variable object first, for example, `rv = binom(n, p)`, and then call `rv.pmf(k)` on it. Or you can call it all in one go via `binom.pmf(k, n, p)`. Here it is using the former:
###Code
n = 20
p = 0.3
rv = st.binom(n, p)
k = np.arange(0, 15)
# Plot
fig, ax = plt.subplots()
ax.plot(k, rv.pmf(k), "bo", ms=8)
ax.vlines(k, 0, rv.pmf(k), colors="b", linestyles="-", lw=1)
ax.set_title(f"Binomial pmf: $n$={n}, $p$={p}", loc="left")
ax.set_xlabel("k")
ax.set_ylabel("Probability")
ax.set_xlim(0, None)
ax.set_ylim(0, None)
plt.show()
###Output
_____no_output_____
###Markdown
Likewise, we can access the **cumulative distribution function**:
###Code
fig, ax = plt.subplots()
ax.plot(k, rv.cdf(k))
ax.scatter(k, rv.cdf(k), s=50)
ax.axhline(1, color="k", alpha=0.7, linestyle="-.", lw=1)
ax.set_title(f"Binomial cdf: $n$={n}, $p$={p}", loc="left")
ax.set_xlabel("k")
ax.set_ylabel("Probability")
ax.set_xlim(0, None)
ax.set_ylim(0, 1);
###Output
_____no_output_____
###Markdown
Of course, **continuous random variables** are also covered. To get a wide range of pdfs, the commands are `scipy.stats.distributionname.pdf(x, parameters=)`.Let's see a couple of examples. The lognormal distribution is given by $f(x, s) = \frac{1}{sx\sqrt{2\pi}}\exp\left(-\frac{\ln^2(x)}{2s^2}\right)$ and the gamma by $f(x, a) = \frac{x^{a-1}e^{-x}}{\Gamma(a)}$.
###Code
s = 0.5
a = 2
x = np.linspace(0, 6, 500)
fig, ax = plt.subplots()
ax.plot(x, st.lognorm.pdf(x, s), label=f"Lognormal: s={s}")
ax.plot(x, st.gamma.pdf(x, a), label=f"Gamma: a={a}")
ax.set_xlabel("x")
ax.set_ylabel("PDF")
ax.set_ylim(0, 1)
ax.set_xlim(0, 6)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Likewise, to get the cdf for a given distribution, the command is `scipy.stats.distributionname.cdf(x, parameters=)`. Here are the ones for the lognormal and gamma.
###Code
x = np.linspace(0, 6, 500)
fig, ax = plt.subplots()
ax.plot(x, st.lognorm.cdf(x, s), label=f"Lognormal: s={s}")
ax.plot(x, st.gamma.cdf(x, a), label=f"Gamma: a={a}")
ax.set_xlabel("x")
ax.set_ylabel("CDF")
ax.set_ylim(0, 1.2)
ax.set_xlim(0, 6)
ax.axhline(1, color="k", alpha=0.7, linestyle="-.", lw=1)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Other properties of PMFs and PDFsA range of functions are available for PMFs and PDFs in addition to the ones we've seen already. For a pmf or pdf, we can call `median`, `mean`, `var`, `std`, and so on. Let's see an example with two of the most useful: interval and percentile.`interval(alpha, ...)` gives the endpoints of the range around the median that contain alpha percent of the distribution. `ppf(q, ...)` gives the quantiles of a given distribution, defined as $F(x) = P(X\leq x) = q$.
###Code
x = np.linspace(-4, 4, 500)
y = st.norm.pdf(x)
# Get percentiles
quantiles = [0.25, 0.5, 0.75]
probs = [st.norm.ppf(q) for q in quantiles]
# Interval
x1, x2 = st.norm.interval(0.95)
cut_x = x[((x > x1) & (x < x2))]
cut_y = y[((x > x1) & (x < x2))]
# Plot
fig, ax = plt.subplots()
ax.plot(x, y)
for i, prob in enumerate(probs):
ax.plot([prob, prob], [0, st.norm.pdf(prob)], lw=0.8, color="k", alpha=0.4)
ax.annotate(
f"q={quantiles[i]}",
xy=(prob, st.norm.pdf(prob)),
xycoords="data",
xytext=(-10, 30),
textcoords="offset points",
arrowprops=dict(arrowstyle="->", connectionstyle="angle3,angleA=0,angleB=-90"),
# fontsize=12,
)
ax.fill_between(cut_x, 0, cut_y, alpha=0.2, label=r"95% interval")
ax.set_xlabel("x")
ax.set_ylabel("PDF")
ax.set_xlim(-4, 4)
ax.set_ylim(0, 0.55)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Randomness for computersComputers love instruction and hate ambiguity. As such, randomness is quite tricky for them. So tricky, that no computer is able to produce *perfectly* random numbers but instead only has a **pseudo-random number generator**, or PRNG. As far as humans go, these are pretty good and modern ones are so good that using them is unlikely to be an issue unless you really are working at the frontiers of the science of randomness.**numpy** uses a PRNG that's a 64-bit Permuted Congruential Generator, though you can access other generators too. Here's how to call it to generate $x \thicksim \mathcal{U}(0,1)$,
###Code
from numpy.random import default_rng
rng = default_rng()
rng.random(size=2)
###Output
_____no_output_____
###Markdown
In the above, `rng` is an object that you can call many random number generating functions on. Here we just asked for 2 values drawn from between 0 and 1. If you are using **pandas** for your analysis, then it comes with random sampling methods built in under the guise of `df.sample()` for a dataframe `df`. This has keywords for number of samples (`n`) **or** fraction of all rows to sample (`frac`) and whether to use `weights=`. You can also pass a PRNG to the `.sample()` method.Another really useful random generator provides integers and is called `integers`. Let's see this but in the case where we're asking for a more elaborately shaped output array, a 3x3x2 dimensional tensor:
###Code
min_int, max_int = 1, 20
rng.integers(min_int, max_int, size=(3, 3, 2))
###Output
_____no_output_____
###Markdown
One random function that is incredibly useful is `choice`, which returns a random selection from another type of object. Here, we show this by passing a list of letters and asking for two of them to be picked randomly:
###Code
rng.choice(["a", "b", "c", "d", "e", "f"], size=2)
###Output
_____no_output_____
###Markdown
This choice can also be made with a given probability. Let's make a very large number of draws with an exponentially falling probability and see what we get!
###Code
num_draws = 1000
# Create 6 values spread across several orders of magnitude
prob = np.logspace(0, -3, num=6)
# Normalise this to 1
prob = prob / sum(prob)
# Choose the letters
letter_choices = rng.choice(["a", "b", "c", "d", "e", "f"], size=num_draws, p=prob)
###Output
_____no_output_____
###Markdown
To make it easy to see what happened, we'll use the in-built collections library's `Counter` function to go from a long list of all of the letters to a dictionary of letters and counts of how frequently they occurred. We'd like to have the bars in order but `Counter` doesn't do that automatically, so we have to do a few things around the `counts` dictionary to change this.
###Code
from collections import Counter, OrderedDict
counts = OrderedDict(sorted(Counter(letter_choices).items()))
plt.bar(counts.keys(), counts.values());
###Output
_____no_output_____
###Markdown
As expected, 'a' was chosen many more times than 'b', and so on. In fact, if we divided the counts by `num_draws`, we would find that the probability of each letter was converging toward the probabilities we provided in `prob`.Another useful random function to know about is `shuffle`, and you can probably guess what it does! But note that it does the shuffling to the list you put in, rather than returning a new, modified list. Here's an example:
###Code
plain_list = ["This", "list", "is", "well", "ordered."]
rng.shuffle(plain_list)
plain_list
###Output
_____no_output_____
###Markdown
ReproducibilityIf you need to create random numbers reproducibly, then you can do it by setting a seed value like this:
###Code
from numpy.random import Generator, PCG64
seed_for_prng = 78557
prng = Generator(PCG64(seed_for_prng))
prng.integers(0, 10, size=2)
prng = Generator(PCG64(seed_for_prng))
prng.integers(0, 10, size=2)
###Output
_____no_output_____
###Markdown
The seed tells the generator where to start (PCG64 is the default generator), so by passing the same seed in we can make the random numbers begin in the same place. The `prng` above can also be passed to some functions as a keyword argument. Random numbers drawn from distributionsUsing **numpy**, we can draw samples from distributions using the `prng.distribution` syntax. One of the most common distributions you might like to draw from is the uniform, for example$$x \thicksim \mathcal{U}(0, 10)$$with, here, a minimum of 0 and a maximum of 10. Here's the code:
###Code
prng.uniform(low=0, high=10, size=3)
###Output
_____no_output_____
###Markdown
Let's see how to draw from one other important distribution function: the Gaussian, or normal, distribution $x \thicksim \mathcal{N}\left(\mu, \sigma\right)$ and check that it looks right. We'll actually do two different ones: a standard normal, with $\mu=0$ and $\sigma=1$, and a shifted, relaxed one with different parameters.
###Code
def gauss(x):
"""Analytical Gaussian."""
return (1 / np.sqrt(2 * np.pi)) * np.exp(-0.5 * x ** 2)
# Make the random draws
num_draws = 10000
vals = prng.standard_normal(num_draws)
# Get analytical solution
x_axis_vals = np.linspace(-3, 3, 300)
analytic_y = gauss(x_axis_vals)
# Random draws of shifted/flatter dist
mu = 0.5
sigma = 2
vals_shift = prng.normal(loc=mu, scale=sigma, size=num_draws)
fig, ax = plt.subplots()
ax.plot(x_axis_vals, analytic_y, label="Std norm: analytical", lw=3)
ax.hist(vals, bins=50, label="Std norm: generated", density=True, alpha=0.8)
ax.hist(
vals_shift,
bins=50,
label=rf"Norm: $\mu$={mu}, $\sigma$={sigma}",
density=True,
alpha=0.8,
)
ax.legend(frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
The Monte Carlo Method

Monte Carlo is the name of a part of Monaco that harbours a famous casino, yes, but it's also the name given to a bunch of techniques that rely on generating random numbers in order to solve problems. It's a really useful technique that entire textbooks cover and we can't hope to give it the love and attention it requires here, covering as it does Bayesian statistics, random walks, Markov switching models, Markov Chain Monte Carlo, bootstrapping, and optimisation! But what we can do is take a quick look at the very, very core code tools that can support these applications. The bottom line is that between the drawing of random variables from given **scipy** distributions we've already seen, the use of `prng.choice()`, and the use of `prng.uniform`, a lot of Monte Carlo methods are covered.

We already covered drawing random numbers from distributions already included in **scipy**.

`prng.uniform` is helpful in the following case: in the (extremely unlikely) event that there isn't a pre-built distribution available for a case where you know the analytical expression of a PDF and its CDF, a quick way to get random numbers distributed according to that PDF is to plug random numbers into the inverse cumulative distribution function, i.e. you plug random numbers $r$ into $\text{cdf}^{-1}(r)$ in order to generate $x \thicksim \text{pdf}$. The random numbers you plug in must come from a uniform distribution between 0 and 1 (a minimal sketch of this trick appears just below, before the dice example).

`prng.choice()` comes into its own for simulation, one of the many applications of Monte Carlo techniques in economics. (I mean simulation loosely here; it could be an agent-based model, it could be simulating an econometric relationship.) Let's do a very simple and canonical example of a simulation using `.choice()`: rolling a 6-sided die... but we want to make it a *bit* more exciting than that! Let's use two dice, one that's fair (equal probability of getting any value in 1 to 6) and one that's loaded (in this case, we'll make a 6 twice as likely as the other values).

For a naive estimate of the probability of a particular die score based on simulation, it's going to be

$$\hat{p}_\omega = \frac{\text{Counts}_\omega}{\text{Total counts}}$$

with $\omega \in \{1, 2, 3, 4, 5, 6\}$.

To simulate this, we'll use the `choice` function fed with the six values, 1 to 6, on some dice. Then we'll count the occurrences of each, creating a dictionary of keys and values with `Counter`, and then plot those. To work out the (estimate of) probability based on the simulation, we've divided the number of throws per value by the total number of throws. You can see that with so many throws, there's quite a wedge between the chance of obtaining a six in the two cases. Meanwhile, the fair die is converging to the dotted line, which is $1/6$. Note that because the individual probabilities sum to unity, a higher probability of a six on the loaded die means that values 1 to 5 must have a lower probability than with the fair die; and you can see that emerging in the chart too. In doing this for every possible outcome, we're effectively estimating a probability mass function.
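Here's a minimal sketch of that inverse-CDF trick (purely illustrative, and it assumes an exponential distribution, whose inverse CDF is $-\ln(1-r)/\lambda$; in practice you'd just use the pre-built `st.expon`):
###Code
# Inverse transform sampling sketch: push U(0,1) draws through the analytical
# inverse CDF of an exponential distribution with rate lam.
lam = 1.5
r = prng.uniform(low=0, high=1, size=10000)
samples = -np.log(1 - r) / lam
samples.mean()  # should be close to 1/lam, roughly 0.67
###Output
_____no_output_____
###Markdown
Now, back to the dice-rolling simulation described above: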
###Code
throws = 1000
die_vals = np.arange(1, 7)
probabilities = [1 / 7, 1 / 7, 1 / 7, 1 / 7, 1 / 7, 2 / 7]
fair_throws = prng.choice(die_vals, size=throws)
load_throws = prng.choice(die_vals, size=throws, p=probabilities)
def throw_list_to_array(throw_list):
# Count frequencies of what's in throw list but order the dictionary keys
counts_dict = OrderedDict(sorted(Counter(throw_list).items()))
# Turn the key value pairs into a numpy array
array = np.array(
[list(counts_dict.keys()), list(counts_dict.values())], dtype=float
)
# Divide counts per value by num throws
array[1] = array[1] / len(throw_list)
return array
counts_fair = throw_list_to_array(fair_throws)
counts_load = throw_list_to_array(load_throws)
fig, ax = plt.subplots()
ax.scatter(counts_fair[0], counts_fair[1], color="b", label="Fair")
ax.scatter(counts_load[0], counts_load[1], color="r", label="Loaded")
ax.set_xlabel("Die value")
ax.set_ylabel("Probability")
ax.axhline(1 / 6, color="k", alpha=0.3, linestyle="-.", lw=0.5)
ax.legend(frameon=True, loc="upper left")
ax.set_ylim(0.0, 0.4);
###Output
_____no_output_____
###Markdown
Let's estimate the cumulative distribution functions for our dice by applying the `cumsum` function to the estimated probabilities:
###Code
fig, ax = plt.subplots()
ax.plot(
counts_fair[0],
np.cumsum(counts_fair[1]),
color="b",
label="Fair",
marker="o",
ms=10,
)
ax.plot(
counts_load[0],
np.cumsum(counts_load[1]),
color="r",
label="Loaded",
marker="o",
ms=10,
)
ax.set_xlabel("Die value")
ax.set_ylabel("Cumulative distribution function")
ax.axhline(1, color="k", alpha=0.3, linestyle="-.", lw=0.5)
ax.legend(frameon=True, loc="lower right")
ax.set_ylim(0.0, 1.2);
###Output
_____no_output_____
###Markdown
We can see that the cumulative distribution function also tells a story about what's going on; namely, there is a lower gradient up to $i=6$, followed by a higher gradient. The two distributions are visually distinct. Fitting a probability distributionOften we are in a situation where we are working with empirical data and we want to know if a particular distribution function provides a good fit for a variable. **scipy** has a neat 'fit' function that can do this for us, given a guess at a distribution. This fit is computed by maximising a log-likelihood function, with a penalty applied for samples outside of range of the distribution.Let's see this in action with an example using synthetic data created from a noisy normal distribution:
###Code
size = 1000
μ, σ = 2, 1.5
# Generate normally dist data
data = prng.normal(loc=μ, scale=σ, size=size)
# Add noise
data = data + prng.uniform(-0.5, 0.5, size)
# Show first 5 entries
data[:5]
# Plot a histogram of the data and fit a normal distribution
sns.distplot(data, bins=60, kde=False, fit=st.norm)
# Get the fitted parameters as computed by scipy
(est_loc, est_scale) = st.norm.fit(data)
plt.legend(
[
f"Est. normal dist. ($\hat{{\mu}}=${est_loc:.2f} and $\hat{{\sigma}}=${est_scale:.2f} )",
"Histogram",
],
loc="upper left",
frameon=False,
)
fig = plt.figure()
res = st.probplot(data, plot=plt)
plt.show();
###Output
_____no_output_____ |
_notebooks/concat reindex dsi.ipynb | ###Markdown
variables
###Code
### imports needed by the cells below
import pprint
import numpy as np
import pandas as pd
from bson.objectid import ObjectId

pp = pprint.PrettyPrinter(indent=2)

dsi_data_01 = {
"oid_dsi" : ObjectId("5c3758200a828696f74fcfe3"),
"data_raw" : {
"f_col_headers" : [
{
"f_coll_header_val" : "_id",
"f_coll_header_text" : "_id"
},
{
"f_coll_header_val" : "added_at",
"f_coll_header_text" : "added_at"
},
{
"f_coll_header_val" : "adresse du projet",
"f_coll_header_text" : "adresse du projet"
},
{
"f_coll_header_val" : "image(s) du projet",
"f_coll_header_text" : "image(s) du projet"
},
{
"f_coll_header_val" : "link_data",
"f_coll_header_text" : "link_data"
},
{
"f_coll_header_val" : "link_src",
"f_coll_header_text" : "link_src"
},
{
"f_coll_header_val" : "partenaires du projet",
"f_coll_header_text" : "partenaires du projet"
},
{
"f_coll_header_val" : "résumé du projet",
"f_coll_header_text" : "résumé du projet"
},
{
"f_coll_header_val" : "spider_id",
"f_coll_header_text" : "spider_id"
},
{
"f_coll_header_val" : "structure porteuse",
"f_coll_header_text" : "structure porteuse"
},
{
"f_coll_header_val" : "tags",
"f_coll_header_text" : "tags"
},
{
"f_coll_header_val" : "titre du projet",
"f_coll_header_text" : "titre du projet"
},
{
"f_coll_header_val" : "website",
"f_coll_header_text" : "website"
}
],
"f_data" : [
{
"_id" : "5bcdc4dc3ba14c6ce4c3426e",
"added_at" : 1540211930.05566,
"adresse du projet" : [
"2 633 276 €",
": Lorraine",
"FEDER",
"2013",
"842 634 €",
"Lorraine",
"FEDER",
"2013"
],
"image(s) du projet" : [
"http://www.europe-en-france.gouv.fr/var/europe_en_france/storage/images/rendez-vous-compte/projets-exemplaires/le-projet-fibrastral-developpe-une-alternative-a-la-fibre-de-verre/301863-1-fre-FR/Le-projet-Fibrastral-developpe-une-alternative-a-la-fibre-de-verre_projet_pictolisting.jpg",
"http://www.europe-en-france.gouv.fr/var/europe_en_france/storage/images/rendez-vous-compte/projets-exemplaires/le-projet-fibrastral-developpe-une-alternative-a-la-fibre-de-verre/301863-1-fre-FR/Le-projet-Fibrastral-developpe-une-alternative-a-la-fibre-de-verre_projet_logo.jpg"
],
"link_data" : "http://www.europe-en-france.gouv.fr/Rendez-vous-compte/Projets-exemplaires/Le-projet-Fibrastral-developpe-une-alternative-a-la-fibre-de-verre/(offset)/10/(viewtheme)/0/(viewregion)/0/(viewfonds)/0/(annee)/0",
"link_src" : "http://www.europe-en-france.gouv.fr/Rendez-vous-compte/Projets-exemplaires/(offset)/10/(annee)/0/(viewtheme)/0/(viewregion)/0/(viewfonds)/0",
"partenaires du projet" : [
"2 633 276 €",
": Lorraine",
"FEDER",
"2013",
"842 634 €",
"Lorraine",
"FEDER",
"2013"
],
"résumé du projet" : [
"La fibre de verre, considérée comme un matériau polluant, est issue du pétrole et entre dans la composition de certaines matières plastiques. Créer une alternative à cette fibre est une idée porteuse et innovante car elle est à la fois écologique, génératrice d’économie d’énergie et créatrice d’emplois. Pour toutes ces raisons, l’Union européenne a choisi d’appuyer ce projet à hauteur de 842 634 euros, sur un budget total de 2,6 millions d’euros."
],
"spider_id" : "5ac3505d0a82860f68d16288",
"structure porteuse" : [
"Thématique(s) : Attractivité du territoire ; Les grands projets européens 2007-2013 ; Services au public ; Transports ; Urbain",
"Thématique(s) : Attractivité du territoire ; Eco-tourisme",
"Thématique(s) : Les grands projets européens 2007-2013 ; TIC",
"Thématique(s) : Attractivité du territoire ; Recherche et Innovation",
"Attractivité du territoire ; Recherche et Innovation",
"Bénéficiaire :",
"Thématique(s) :",
"Université de Lorraine"
],
"tags" : [
"Thématique(s) : Attractivité du territoire ; Recherche et Innovation",
"Attractivité du territoire ; Recherche et Innovation",
"Bénéficiaire :",
"Thématique(s) :",
"Université de Lorraine"
],
"titre du projet" : [
"Le projet Fibrastral développe une alternative à la fibre de verre",
"Le projet Fibrastral développe une alternative à la fibre de verre"
],
"website" : null
},
{
"_id" : "5bd0895e3ba14c3fada0550e",
"added_at" : 1540392039.04563,
"adresse du projet" : [
"Opération \"Eté Jeunes\" pour les 14-16 ans Gers",
"32360 Jegun",
"Les députés votent la substitution du CNDS par l'Agence nationale du sport et gonflent son budget",
"Lancement de France Num, une plateforme nationale pour accompagner la transformation numérique des TPE",
"Grande Rue - BP1",
"Volet \"finances locales\" : le point après le vote par les députés de la première partie du texte",
"Enfin des stats sur le poids de l'habitat privé dans les quartiers prioritaires !"
],
"image(s) du projet" : null,
"link_data" : "https://www.caissedesdepotsdesterritoires.fr/cs/ContentServer?pagename=Territoires/MCExperience/Experience&cid=1245645199070",
"link_src" : "https://www.caissedesdepotsdesterritoires.fr/cs/ContentServer?pagename=Territoires/Page/Base-experiences",
"partenaires du projet" : null,
"résumé du projet" : [
"La communauté de communes Coeur de Gascogne a mis en place des chantiers d'une semaine pour les jeunes, pendant les vacances. Le succès auprès des 14-16 ans et de la population est au rendez-vous. Ces chantiers nécessitent cependant un suivi et une énergie considérables de la part des élus et encadrants bénévoles. Des problèmes de normes administratives et de sécurité persistent."
],
"spider_id" : "5bc4951d3ba14c5c5620146c",
"structure porteuse" : null,
"tags" : [
"SOCIAL - SANTÉ",
"CATÉGORIE : EXPÉRIENCES",
"Social - Santé"
],
"titre du projet" : [
"Opération \"Eté Jeunes\" pour les 14-16 ans"
],
"website" : null
},
{
"_id" : "5bd08f1b3ba14c3fada05cd8",
"added_at" : 1540387431.93892,
"adresse du projet" : [
"A Eaubonne, les jeunes en service civique aident des familles à réduire leurs factures d'énergie (95) Val-d'Oise",
"Hôtel de Ville, 1 rue d'Enghien",
"Les députés votent la substitution du CNDS par l'Agence nationale du sport et gonflent son budget",
"Lancement de France Num, une plateforme nationale pour accompagner la transformation numérique des TPE",
"95600 Eaubonne",
"Volet \"finances locales\" : le point après le vote par les députés de la première partie du texte",
"Enfin des stats sur le poids de l'habitat privé dans les quartiers prioritaires !"
],
"image(s) du projet" : [
"https://www.caissedesdepotsdesterritoires.fr/cs/BlobServer?blobkey=id&blobnocache=false&blobwhere=1250170930698&blobcol=urldata&blobtable=MungoBlobs"
],
"link_data" : "https://www.caissedesdepotsdesterritoires.fr/cs/ContentServer?pagename=Territoires/Experiences/Experiences&cid=1250279591681",
"link_src" : "https://www.caissedesdepotsdesterritoires.fr/cs/ContentServer?pagename=Territoires/Page/Base-experiences",
"partenaires du projet" : null,
"résumé du projet" : [
"Chaque année depuis 2014, huit jeunes sont recrutés en service civique pendant huit mois sur la ville d’Eaubonne pour rompre l’isolement des personnes âgées et aider des familles locataires dans des logements sociaux à baisser leurs consommations d’eau et d’énergie dans une résidence d’habitat social. Le bilan positif encourage la ville à poursuivre."
],
"spider_id" : "5bc4951d3ba14c5c5620146c",
"structure porteuse" : null,
"tags" : [
"CITOYENNETÉ - ASSOCIATIONS - JEUNESSE",
"CATÉGORIE : EXPÉRIENCES",
"Citoyenneté - Associations - Jeunesse"
],
"titre du projet" : [
"A Eaubonne, les jeunes en service civique aident des familles à réduire leurs factures d'énergie (95)"
],
"website" : [
"http://www.eaubonne.fr"
]
},
]
}
}
dsi_data_02 = {
"oid_dsi" : ObjectId("5c3758690a828696f74fcfe5"),
"data_raw" : {
"f_col_headers" : [
{
"f_coll_header_val" : "COG",
"f_coll_header_text" : "COG"
},
{
"f_coll_header_val" : "ACTUAL",
"f_coll_header_text" : "ACTUAL"
},
{
"f_coll_header_val" : "CAPAY",
"f_coll_header_text" : "CAPAY"
},
{
"f_coll_header_val" : "CRPAY",
"f_coll_header_text" : "CRPAY"
},
{
"f_coll_header_val" : "ANI",
"f_coll_header_text" : "ANI"
},
{
"f_coll_header_val" : "LIBCOG",
"f_coll_header_text" : "LIBCOG"
},
{
"f_coll_header_val" : "norm_name",
"f_coll_header_text" : "norm_name"
},
{
"f_coll_header_val" : "LIBENR",
"f_coll_header_text" : "LIBENR"
},
{
"f_coll_header_val" : "ANCNOM",
"f_coll_header_text" : "ANCNOM"
},
{
"f_coll_header_val" : "CODEISO2",
"f_coll_header_text" : "CODEISO2"
},
{
"f_coll_header_val" : "CODEISO3",
"f_coll_header_text" : "CODEISO3"
},
{
"f_coll_header_val" : "CODENUM3",
"f_coll_header_text" : "CODENUM3"
}
],
"f_data" : [
{
"COG" : "99103",
"ACTUAL" : 1,
"CAPAY" : null,
"CRPAY" : null,
"ANI" : null,
"LIBCOG" : "NORVEGE",
"norm_name" : "norvege",
"LIBENR" : "ROYAUME DE NORVÈGE",
"ANCNOM" : null,
"CODEISO2" : "NO",
"CODEISO3" : "NOR",
"CODENUM3" : 578.0
},
{
"COG" : "99103",
"ACTUAL" : 3,
"CAPAY" : null,
"CRPAY" : null,
"ANI" : null,
"LIBCOG" : "BOUVET (ILE)",
"norm_name" : "bouvet",
"LIBENR" : "BOUVET (ÎLE)",
"ANCNOM" : null,
"CODEISO2" : "BV",
"CODEISO3" : "BVT",
"CODENUM3" : 74.0
},
{
"COG" : "99104",
"ACTUAL" : 1,
"CAPAY" : null,
"CRPAY" : null,
"ANI" : null,
"LIBCOG" : "SUEDE",
"norm_name" : "suede",
"LIBENR" : "ROYAUME DE SUÈDE",
"ANCNOM" : null,
"CODEISO2" : "SE",
"CODEISO3" : "SWE",
"CODENUM3" : 752.0
},
{
"COG" : "99105",
"ACTUAL" : 1,
"CAPAY" : null,
"CRPAY" : null,
"ANI" : null,
"LIBCOG" : "FINLANDE",
"norm_name" : "finlande",
"LIBENR" : "RÉPUBLIQUE DE FINLANDE",
"ANCNOM" : null,
"CODEISO2" : "FI",
"CODEISO3" : "FIN",
"CODENUM3" : 246.0
},
{
"COG" : "99110",
"ACTUAL" : 1,
"CAPAY" : null,
"CRPAY" : null,
"ANI" : null,
"LIBCOG" : "AUTRICHE",
"norm_name" : "autriche",
"LIBENR" : "RÉPUBLIQUE D'AUTRICHE",
"ANCNOM" : null,
"CODEISO2" : "AT",
"CODEISO3" : "AUT",
"CODENUM3" : 40.0
},
{
"COG" : "99112",
"ACTUAL" : 1,
"CAPAY" : null,
"CRPAY" : null,
"ANI" : null,
"LIBCOG" : "HONGRIE",
"norm_name" : "hongrie",
"LIBENR" : "RÉPUBLIQUE DE HONGRIE",
"ANCNOM" : null,
"CODEISO2" : "HU",
"CODEISO3" : "HUN",
"CODENUM3" : 348.0
},
{
"COG" : "99115",
"ACTUAL" : 2,
"CAPAY" : null,
"CRPAY" : null,
"ANI" : null,
"LIBCOG" : "TCHECOSLOVAQUIE",
"norm_name" : "tchecoslovaquie",
"LIBENR" : "TCHÉCOSLOVAQUIE",
"ANCNOM" : null,
"CODEISO2" : null,
"CODEISO3" : null,
"CODENUM3" : null
},
]
}
}
dso_col_headers = {
"f_col_headers" : [
{
"oid_dmf" : ObjectId("5c056a4f0a82861e733374ef"),
"open_level_show" : "open_data",
"name" : "Identification du marché public"
},
{
"oid_dmf" : ObjectId("5ba664030a82860745d51fdd"),
"open_level_show" : "open_data",
"name" : "Adresse_"
},
### -------------------------
{
"oid_dmf" : ObjectId("5c10450f0a8286a6f6522e02"),
"open_level_show" : "open_data",
"name" : "Date de la notification du marché"
},
{
"oid_dmf" : ObjectId("5c10456e0a8286a6f6522e04"),
"open_level_show" : "private"
### missing name
},
{
"oid_dmf" : ObjectId("5bfe87520a82866842b7db96"),
"open_level_show" : "collective",
"name" : "Nom de l'acheteur"
},
{
"oid_dmf" : ObjectId("5c10443c0a8286a6f6522dfe"),
"open_level_show" : "open_data",
"name" : "Code du lieu d'exécution"
},
{
"oid_dmf" : ObjectId("5c1044e90a8286a6f6522e01"),
"open_level_show" : "commons",
"name" : "Durée initiale du marché"
}
],
}
dso_mapping = {
"dsi_to_dmf" : [
{
"dsi_header" : "added_at",
"oid_dsi" : ObjectId("5c3758200a828696f74fcfe3"),
"oid_dmf" : ObjectId("5c3777d10a8286e874b46075") ### not present in f_col_headers
},
{
"dsi_header" : "adresse du projet",
"oid_dsi" : ObjectId("5c3758200a828696f74fcfe3"),
"oid_dmf" : ObjectId("5ba664030a82860745d51fdd") # OK in f_col_headers
},
{
"dsi_header" : "_id",
"oid_dsi" : ObjectId("5c3758200a828696f74fcfe3"),
"oid_dmf" : ObjectId("5c056a4f0a82861e733374ef") # OK in f_col_headers
},
{
"dsi_header" : "link_data1",
"oid_dsi" : ObjectId("5c3758200a828696f74fcfe3"),
"oid_dmf" : ObjectId("5bf4183f0a8286180b53183c") ### not present in f_col_headers
},
{
"dsi_header" : "link_data2",
"oid_dsi" : ObjectId("5c3758200a828696f74fcfe3"),
"oid_dmf" : ObjectId("5bf4183f0a8286180b53183c") ### not present in f_col_headers
},
#### ------------------------------------ ####
{
"dsi_header" : "CAPAY",
"oid_dsi" : ObjectId("5c3758690a828696f74fcfe5"),
"oid_dmf" : ObjectId("5c10443c0a8286a6f6522dfe")
},
{
"dsi_header" : "ACTUAL",
"oid_dsi" : ObjectId("5c3758690a828696f74fcfe5"),
"oid_dmf" : ObjectId("5ba664030a82860745d51fdd")
}
],
"dmf_to_rec" : [],
"dmf_to_open_level" : [
{
"oid_dmf" : ObjectId("5c056a4f0a82861e733374ef"),
"open_level_show" : "open_data"
},
{
"oid_dmf" : ObjectId("5c10450f0a8286a6f6522e02"),
"open_level_show" : "open_data"
},
{
"oid_dmf" : ObjectId("5c10456e0a8286a6f6522e04"),
"open_level_show" : "private"
},
{
"oid_dmf" : ObjectId("5bfe87520a82866842b7db96"),
"open_level_show" : "collective"
},
{
"oid_dmf" : ObjectId("5ba664030a82860745d51fdd"),
"open_level_show" : "open_data"
},
{
"oid_dmf" : ObjectId("5c10443c0a8286a6f6522dfe"),
"open_level_show" : "open_data"
},
{
"oid_dmf" : ObjectId("5c1044e90a8286a6f6522e01"),
"open_level_show" : "commons"
}
]
}
###Output
_____no_output_____
###Markdown
local mappers
###Code
### store f_col_headers apart
headers_dso = dso_col_headers["f_col_headers"]
#pp.pprint(headers_dso)
df_mapper_col_headers = pd.DataFrame(headers_dso)
df_mapper_col_headers = df_mapper_col_headers.set_index("oid_dmf")
df_mapper_col_headers
len(df_mapper_col_headers.index)
df_mapper_col_headers.loc[ObjectId("5c056a4f0a82861e733374ef")]["name"]
# df_mapper_col_headers.to_dict('index')
### store dsi_to_dmf apart
prj_dsi_mapping = dso_mapping["dsi_to_dmf"]
#pp.pprint(prj_dsi_mapping)
df_mapper_dsi_to_dmf = pd.DataFrame(prj_dsi_mapping)
df_mapper_dsi_to_dmf = df_mapper_dsi_to_dmf.set_index(["oid_dsi","oid_dmf"])
df_mapper_dsi_to_dmf
len(df_mapper_dsi_to_dmf.index)
### drop duplicated indices and keep first from multiindex
df_mapper_dsi_to_dmf = df_mapper_dsi_to_dmf[~df_mapper_dsi_to_dmf.index.duplicated(keep="first")]
df_mapper_dsi_to_dmf
len(df_mapper_dsi_to_dmf.index)
# df_mapper_dsi_to_dmf.to_dict('index')
df_mapper_dsi_to_dmf#.reset_index()
### test selecting by oid_dsi
df_oid_mapper = df_mapper_dsi_to_dmf.loc[ObjectId("5c3758200a828696f74fcfe3")]
df_oid_mapper
oid_selection = [ObjectId("5c3777d10a8286e874b46075"), ObjectId("5c056a4f0a82861e733374ef")]
df_oid_selection = df_oid_mapper.loc[df_oid_mapper.index.isin(oid_selection)]
df_oid_selection
list(df_oid_selection["dsi_header"])
def dsi_remap (dsi_data, df_mapper_dsi_to_dmf, df_mapper_col_headers ) :
### add df_mapper_dsi_to_dmf from df_mapper_col_headers
df_oid = dsi_data["oid_dsi"]
df_map_light = df_mapper_dsi_to_dmf.loc[df_oid]
df_map = pd.merge(df_map_light, df_mapper_col_headers, on=['oid_dmf']).reset_index()
df_map_ = df_map.set_index('dsi_header')
### list cols to keep : present in dsi and mapped
# df_cols_to_keep = list(df_map['dsi_header'])
df_cols_to_keep = list(df_map_.index)
### generate df from dsi_data
df_ = pd.DataFrame(dsi_data["data_raw"]["f_data"])
### drop useless columns in df_
df_cols = list(df_.columns)
df_cols_to_drop = [ h for h in df_cols if h not in df_cols_to_keep ]
df_light = df_.drop( df_cols_to_drop, axis=1 )
### rename columns dataframe
remapper_dict = dict(df_map_['name'])
df_light.columns = df_light.columns.to_series().map(remapper_dict)
print()
return df_light
df_01_light = dsi_remap(dsi_data_01, df_mapper_dsi_to_dmf, df_mapper_col_headers)
df_01_light
df_02_light = dsi_remap(dsi_data_02, df_mapper_dsi_to_dmf, df_mapper_col_headers)
df_02_light
df_data_concat = pd.concat([df_01_light, df_02_light], ignore_index=True, sort=False)
df_data_concat = df_data_concat.replace({np.nan:None})
df_data_concat
df_data_concat.to_dict('records')
### add df_mapper_dsi_to_dmf from df_mapper_col_headers
df_01_oid = dsi_data_01["oid_dsi"]
df_map_01 = df_mapper_dsi_to_dmf.loc[df_01_oid]
df_map_01
###
df_map_01_ = pd.merge(df_map_01, df_mapper_col_headers, on=['oid_dmf']).reset_index()
df_map_01_ = df_map_01_.set_index('dsi_header')
df_map_01_
df_01_cols_to_keep = list(df_map_01_.index)
df_01_cols_to_keep
remapper_dict = dict(df_map_01_['name'])
remapper_dict
df_01 = pd.DataFrame(dsi_data_01["data_raw"]["f_data"])
df_01.head(2)
df_01_cols = list(df_01.columns)
df_01_cols_to_drop = [ h for h in df_01_cols if h not in df_01_cols_to_keep ]
df_01_cols_to_drop
df_01_light = df_01.drop( df_01_cols_to_drop, axis=1 )
df_01_light.head(2)
### rename columns
df_01_light.columns = df_01_light.columns.to_series().map(remapper_dict)
df_01_light
df_01_oid = dsi_data_01["oid_dsi"]
df_map_01 = df_mapper_multiindexed.loc[df_01_oid]
df_map_01
df_map_01.to_dict('index')
def dsi_remap (dsi_data, prj_dsi_mapping, df_mapper_multiindexed ) :
### generate df from dsi_data
df_oid = dsi_data["oid_dsi"]
df_ = pd.DataFrame(dsi_data["data_raw"]["f_data"])
print("df_ ...")
print(df_)
### get
df_map_ = df_mapper_multiindexed.loc[df_oid]
### get current dsi's map from prj_dsi_mapping
df_map = [ h for h in prj_dsi_mapping if h["oid_dsi"] == df_oid ]
print()
### drop useless columns
df_cols = list(df_.columns)
df_cols_to_keep = [ h["dsi_header"] for h in df_map if h["dsi_header"] in df_cols ]
df_cols_to_drop = [ h for h in df_cols if h not in df_cols_to_keep ]
df_light = df_.drop( df_cols_to_drop, axis=1 )
print("df_light ...")
print(df_light)
### rename columns dataframe
df_mapper_ = df_map_.reset_index().set_index('dsi_header')
df_mapper_dict_name = df_mapper_["name"].to_dict()
print("df_mapper_dict_name : \n", df_mapper_dict_name)
### remap
df_light.columns = df_light.columns.to_series().map(df_mapper_dict_name)
print("df_light ...")
print(df_light)
return df_light
df_01 = pd.DataFrame(dsi_data_01["data_raw"]["f_data"])
df_01
df_01_light_ = dsi_remap(dsi_data_01, prj_dsi_mapping, df_mapper_reindexed)
df_01_light_
### generate df from dsi_01
df_01_oid = dsi_data_01["oid_dsi"]
df_01_map = [ h for h in prj_dsi_mapping if h["oid_dsi"] == df_01_oid ]
print ("df_01_map :")
pp.pprint ( df_01_map)
df_01 = pd.DataFrame(dsi_data_01["data_raw"]["f_data"])
df_01.head(3)
### drop useless columns
df_01_cols = list(df_01.columns)
df_01_cols_to_keep = [ h["dsi_header"] for h in df_01_map if h["dsi_header"] in df_01_cols ]
print("df_01_cols_to_keep :")
pp.pprint(df_01_cols_to_keep)
df_01_cols_to_drop = [ h for h in df_01_cols if h not in df_01_cols_to_keep ]
print("df_01_cols_to_drop :")
pp.pprint(df_01_cols_to_drop)
df_01_light = df_01.drop( df_01_cols_to_drop, axis=1 )
df_01_light
### generate df from dsi_01
df_02_oid = dsi_data_02["oid_dsi"]
df_02_map = [ header for header in prj_dsi_mapping if header["oid_dsi"] == df_02_oid ]
print ("df_02_map :")
pp.pprint ( df_02_map)
df_02 = pd.DataFrame(dsi_data_02["data_raw"]["f_data"])
print ("df_02 : ")
df_02.head(3)
### drop useless columns
df_02_cols = list(df_02.columns)
df_02_cols_to_keep = [ h["dsi_header"] for h in df_02_map if h["dsi_header"] in df_02_cols ]
df_02_cols_to_drop = [ h for h in df_02_cols if h not in df_02_cols_to_keep ]
df_02_cols_to_drop
df_02_light = df_02.drop( df_02_cols_to_drop, axis=1 )
df_02_light
df_mapper_02 = df_mapper_reindexed[df_mapper_reindexed.oid_dsi == df_02_oid ]
df_mapper_02 = df_mapper_02.reset_index().set_index('dsi_header')
df_mapper_02
df_mapper_dict_name = df_mapper_02["name"].to_dict()
print (df_mapper_dict_name)
df_02_light.columns = df_02_light.columns.to_series().map(df_mapper_dict_name)
df_02_light
result = pd.concat([df_01_light, df_02_light], ignore_index=True)
result
###Output
_____no_output_____ |
FashionMNIST Challenge/k-means.ipynb | ###Markdown
Assignment 4: Design and train a K-Means algorithm to cluster the images.
###Code
import numpy as np
import tensorflow as tf
import random
import datetime
# Load the training set from the TFRecord file
reader = tf.TFRecordReader()
filename_queue = tf.train.string_input_producer(["/home/srhyme/ML project/DS/train.tfrecords"])
_, example = reader.read(filename_queue)
features = tf.parse_single_example(
example,features={
'image_raw': tf.FixedLenFeature([], tf.string),
'pixels': tf.FixedLenFeature([], tf.int64),
'label': tf.FixedLenFeature([], tf.int64),
})
train_images = tf.decode_raw(features['image_raw'], tf.uint8)
train_labels = tf.cast(features['label'], tf.int32)
train_pixels = tf.cast(features['pixels'], tf.int32)
# Build the data array used for clustering
data=[]
with tf.Session() as sess:
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
data_num = sess.run(train_pixels)
for i in range(data_num):
train_image=sess.run(train_images)
data.append(train_image)
sess.close()
data=np.array(data)
# Distance function
def Distance (x,y):
return np.linalg.norm(x-y)/10000000
# Randomly generate the indices of 10 centroids
centroids_index=[]
while len(centroids_index) != 10:
random_centroid=random.randint(0,data_num-1)
if random_centroid not in centroids_index:
centroids_index.append(random_centroid)
# Build the corresponding centroid matrix from those indices
centroids = np.empty((10,784))
for i in range(10):
centroids[i]=data[centroids_index[i]]
# Create a 55000x2 matrix: column 0 stores the index of the cluster each sample belongs to,
# column 1 stores the distance from that sample to its cluster's centroid
cluster = np.empty((data_num,2))
cluster_times=0
begin = datetime.datetime.now()
while True:
change_num=0
# Iterate over every sample and update its distance and cluster assignment
for i in range(data_num):
min_distance = np.float64('inf')
min_index=-1
for j in range(10):
distance=Distance(centroids[j],data[i])
if distance < min_distance:
min_distance = distance
min_index = j
# Correct the sample's cluster assignment and distance
if cluster[i,0] != min_index:
cluster[i] = min_index,min_distance
change_num+=1
# Tolerance is set to 0: only exit the while loop once no assignment changes
if change_num == 0:
break
# Recompute the centroids
for i in range(10):
allidata = data[np.nonzero(cluster[:,0]==i)[0]]
centroids[i] = np.mean(allidata, axis=0)
cluster_times+=1
print('Completed clustering pass %d; %d samples changed this pass' % (cluster_times, change_num))
end = datetime.datetime.now()
print('Clustering finished after %d passes; elapsed time' % (cluster_times), end - begin)
###Output
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.CancelledError'>, Enqueue operation was cancelled
[[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input_producer, input_producer/RandomShuffle)]]
Completed clustering pass 1; 54956 samples changed this pass
Completed clustering pass 2; 9886 samples changed this pass
Completed clustering pass 3; 5397 samples changed this pass
Completed clustering pass 4; 3812 samples changed this pass
Completed clustering pass 5; 2740 samples changed this pass
Completed clustering pass 6; 1559 samples changed this pass
Completed clustering pass 7; 854 samples changed this pass
Completed clustering pass 8; 525 samples changed this pass
Completed clustering pass 9; 346 samples changed this pass
Completed clustering pass 10; 200 samples changed this pass
Completed clustering pass 11; 124 samples changed this pass
Completed clustering pass 12; 70 samples changed this pass
Completed clustering pass 13; 39 samples changed this pass
Completed clustering pass 14; 20 samples changed this pass
Completed clustering pass 15; 10 samples changed this pass
Completed clustering pass 16; 9 samples changed this pass
Completed clustering pass 17; 8 samples changed this pass
Completed clustering pass 18; 7 samples changed this pass
Completed clustering pass 19; 7 samples changed this pass
Completed clustering pass 20; 1 samples changed this pass
Clustering finished after 20 passes; elapsed time 0:01:35.842670
|
notebooks/.ipynb_checkpoints/geomapping-checkpoint.ipynb | ###Markdown
###Code
from ast import literal_eval
###Output
_____no_output_____
###Markdown
###Code
import sys
import os
import os.path as path
import matplotlib.pyplot as mplplt
%matplotlib inline
import pickle as pkl
###Output
_____no_output_____
###Markdown
###Code
import pandas as pd
import numpy as np
import math

# The mapping cells below also rely on Bokeh; import what they use here
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource
from bokeh.tile_providers import get_provider, Vendors
###Output
_____no_output_____
###Markdown
###Code
def merc(coords):
Coordinates = literal_eval(coords)
lat = Coordinates[0]
lon = Coordinates[1]
r_major = 6378137.000
x = r_major * math.radians(lon)
scale = (x/lon)
y = (180.0/math.pi) * ( math.log(math.tan(math.pi/4.0 + lat * (math.pi/180.0)/2.0)) ) * scale
return (x, y)
#supporting function
def make_tuple_str(x, y):
t = (x, y)
return str(t)
# Read with pandas
datafile = pd.read_csv('../datasets/time_series-ncov-Confirmed.csv')
datafile
dataframe_coords_by_date = datafile[['Country/Region', 'Province/State', 'Lat', 'Long', 'Date', 'Value']]
dataframe_coords_by_date
dataframe_coords_1st_week_january = dataframe_coords_by_date[ (dataframe_coords_by_date['Date'] > '2020-01-01') & (dataframe_coords_by_date['Date'] <= '2020-01-07')]
dataframe_coords_1st_week_january
dataframe_coords_2nd_week_january = dataframe_coords_by_date[ (dataframe_coords_by_date['Date'] > '2020-01-08') & (dataframe_coords_by_date['Date'] <= '2020-01-14')]
dataframe_coords_2nd_week_january
dataframe_coords_3rd_week_january = dataframe_coords_by_date[ (dataframe_coords_by_date['Date'] > '2020-01-15') & (dataframe_coords_by_date['Date'] <= '2020-01-21')]
dataframe_coords_3rd_week_january
dataframe_coords_4th_week_january = dataframe_coords_by_date[ (dataframe_coords_by_date['Date'] > '2020-01-22') & (dataframe_coords_by_date['Date'] <= '2020-01-28')]
dataframe_coords_4th_week_january
dataframe_coords_4th_week_january = dataframe_coords_4th_week_january[dataframe_coords_4th_week_january['Value'] > '0']
dataframe_coords_4th_week_january
dataframe_coords_4th_week_january.groupby(['Lat', 'Long'], sort=True)[['Country/Region', 'Province/State', 'Value']].max()
dataframe_coords_4th_week_january_index = dataframe_coords_4th_week_january.groupby(['Lat', 'Long'], sort=False)['Value'].transform(max) == dataframe_coords_4th_week_january['Value']
dataframe_coords_4th_week_january = dataframe_coords_4th_week_january[dataframe_coords_4th_week_january_index]
dataframe_coords_4th_week_january_index = dataframe_coords_4th_week_january.groupby(['Lat', 'Long'], sort=False)['Date'].transform(max) == dataframe_coords_4th_week_january['Date']
dataframe_coords_4th_week_january = dataframe_coords_4th_week_january[dataframe_coords_4th_week_january_index]
dataframe_coords_4th_week_january
def wgs84_to_web_mercator(dataframe, lon="Long", lat="Lat"):
k = 6378137
dataframe["x"] = dataframe[lon] * (k * np.pi/180.0)
dataframe["y"] = np.log(np.tan((90 + dataframe[lat]) * np.pi/360.0)) * k
return dataframe
dataframe_coords_4th_week_january = wgs84_to_web_mercator(dataframe_coords_4th_week_january)
dataframe_coords_4th_week_january_source = ColumnDataSource(data = dataframe_coords_4th_week_january)
dataframe_coords_4th_week_january_source
output_file("covid_19_virus_confirmed_geomapping_4th_week_january.html")
tile_provider = get_provider(Vendors.CARTODBPOSITRON)
# Range bounds supplied in web mercator coordinates
geo_plot = figure(x_range=(-9000000, 9000000), y_range=(-9000000, 9000000),
sizing_mode='stretch_both')
geo_plot.circle(x='Long', y='Lat', source=dataframe_coords_4th_week_january_source, line_color='yellow', fill_color='red')
geo_plot.axis.visible = False
geo_plot.add_tile(tile_provider)
show(geo_plot)
dataframe_coords_5th_week_january = dataframe_coords_by_date[ (dataframe_coords_by_date['Date'] > '2020-01-29') & (dataframe_coords_by_date['Date'] <= '2020-02-04')]
dataframe_coords_5th_week_january
dataframe_coords_1st_week_february = dataframe_coords_by_date[ (dataframe_coords_by_date['Date'] > '2020-02-01') & (dataframe_coords_by_date['Date'] <= '2020-02-07')]
###Output
_____no_output_____
###Markdown
###Code
datafile['coords'] = datafile.apply(lambda x: make_tuple_str(x['Lat'], x['Long']), axis = 1)
datafile['coords_latitude'] = datafile['coords'].apply(lambda x: merc(x)[0])
datafile['coords_longitude'] = datafile['coords'].apply(lambda x: merc(x)[1])
###Output
_____no_output_____
###Markdown
###Code
output_file("covid_19_virus_confirmed_geomapping.html")
tile_provider = get_provider(Vendors.CARTODBPOSITRON)
# Range bounds supplied in web mercator coordinates
geo_plot = figure(x_range=(-9000000, 9000000), y_range=(-9000000, 9000000),
sizing_mode='stretch_both',
x_axis_type="mercator", y_axis_type="mercator")
geo_plot.add_tile(tile_provider)
show(geo_plot)
###Output
_____no_output_____ |
data_exploration/Data Exploration_Category_1.ipynb | ###Markdown
The purpose of this notebook is to perform data exploration of stock price changes.

Categories:
1. Increase in preceding week, increase in first trading day after announcement
2. Increase in preceding week, decrease in first trading day after announcement
3. Decrease in preceding week, increase in first trading day after announcement
4. Decrease in preceding week, decrease in first trading day after announcement

Dependent variable:
- t+2: price change from day t+1 close to day t+2 close

Independent variables:
- t-7: price change in the week preceding the announcement
- t0: price change from the closing price immediately preceding the announcement to the open price immediately after the announcement
- t+1: price change from the open on day t+1 to the close on day t+1

Import libraries
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import train_test_split
import seaborn as sns
import statsmodels.formula.api as smf
from sklearn import metrics
import matplotlib.pyplot as plt
# allow plots to appear directly in the notebook
%matplotlib inline
pd.set_option('display.max_columns', 50)
###Output
_____no_output_____
###Markdown
Import dataset
###Code
fileName = 'data/trade_dates_data.xlsx'
sheetName = 'pct_chng_before_mrkt'
USECOLS = [
'ticker',
'company',
'earnings_flag',
'earnings_ann_date',
'ann_price_open',
'ann_price_close',
'week preceeding date',
'week_price_open',
'week_price_close',
'pct_chng_t-7',
'week_proceed_code',
'day preceeding date',
'day_b4_price_open',
'day_b4_price_close',
'1 trade day post',
'day_after_price_open',
'day_after_price_close',
'close t+1 diff open t+1',
'day_after_code',
'pct_chng_t0',
'pct_chng_t+1',
'2 trade days post',
'2_days_after_price_open',
'2_days_after_price_close',
'pct_chng_t+2',
'category',
'meets_threshold',
]
b4_data = pd.read_excel(fileName,sheet_name = sheetName, usecols=USECOLS, parse_dates=True)
###Output
_____no_output_____
###Markdown
View data
###Code
b4_data.head()
sheetName_2 = 'pct_chng_after_mrkt'
after_data = pd.read_excel(fileName,sheet_name = sheetName_2, usecols=USECOLS, parse_dates=True)
###Output
_____no_output_____
###Markdown
View data
###Code
after_data.head()
###Output
_____no_output_____
###Markdown
Combine datasets
###Code
full_df = b4_data.append(after_data)
###Output
_____no_output_____
###Markdown
View dataset
###Code
full_df.head(3)
###Output
_____no_output_____
###Markdown
Calculation explanations. Week change (pct_chng_t-7) = (announcement day close price - week preceding close price) / (announcement day close price)
###Code
week_prior_pct_change = list((full_df['ann_price_close'] - full_df['week_price_close'] )/full_df['ann_price_close'])
week_prior_pct_change[:5]
###Output
_____no_output_____
###Markdown
Verify the pct_chng_t-7 column matches the expected calculations above
###Code
stock_data = full_df['pct_chng_t-7'].tolist()
for i in range(len(stock_data)):
if stock_data[i] != week_prior_pct_change[i]:
print(f'Stock Data: {stock_data[i]} week data: {week_prior_pct_change[i]}')
###Output
Stock Data: nan week data: nan
Stock Data: nan week data: nan
Stock Data: nan week data: nan
Stock Data: nan week data: nan
Stock Data: nan week data: nan
Stock Data: nan week data: nan
Stock Data: nan week data: nan
###Markdown
Filter data for analysis columns only
###Code
analysis_cols = [
'ticker',
'company',
'earnings_flag',
'pct_chng_t-7',
'pct_chng_t0',
'pct_chng_t+1',
'pct_chng_t+2',
'category',
'meets_threshold',
]
b4_analysis = b4_data[analysis_cols].copy()
after_analysis = after_data[analysis_cols].copy()
###Output
_____no_output_____
###Markdown
View data
###Code
b4_analysis.head()
after_analysis.head()
###Output
_____no_output_____
###Markdown
Append datasets
###Code
full_data = b4_analysis.append(after_analysis)
###Output
_____no_output_____
###Markdown
View data
###Code
full_data.head()
full_data.tail()
###Output
_____no_output_____
###Markdown
Get descriptions and information about dataset
###Code
full_data.info()
full_data.describe()
###Output
_____no_output_____
###Markdown
Category 1 Analysis. Filter data for: meets_threshold is true, and Category 1 (increase in preceding week, increase in first trading day after announcement)
###Code
curr_cat = 1
cat_1_data = full_data[(full_data['category']==curr_cat) & (full_data['meets_threshold']==1)]
###Output
_____no_output_____
###Markdown
View Category 1 data
###Code
cat_1_data.head()
cat_1_data.shape
IMP_COLS = [
'ticker',
'pct_chng_t-7',
'pct_chng_t0',
'pct_chng_t+1',
'pct_chng_t+2',
]
cat_1_df = cat_1_data[IMP_COLS].copy()
cat_1_df.head()
###Output
_____no_output_____
###Markdown
Plot visualizations of the dataset for analysis:
###Code
# visualize the relationship between the features and the response using scatterplots
sns.pairplot(cat_1_df, x_vars=['pct_chng_t-7','pct_chng_t0','pct_chng_t+1'], y_vars = 'pct_chng_t+2',height=7,aspect=0.7)
###Output
_____no_output_____
###Markdown
Explore the correlation coefficients.

Correlation analysis, Category 1:
- Observation: no strong correlation between t-7 and t+2
- Observation: no strong correlation between t0 and t+2 (highest correlation score of the three)
- Observation: no strong correlation between t+1 and t+2
###Code
# Compute the correlation matrix
corr = cat_1_df.corr()
corr
fig, ax = plt.subplots(figsize=(14,9))
sns.heatmap(corr, annot=True, linewidths=.5,square=True, ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
Train Machine Learning Models Drop 'ticker' colmn to perform ML
###Code
ml_data = cat_1_df.drop('ticker',axis=1)
ml_data.head()
###Output
_____no_output_____
###Markdown
Split the data
###Code
X = ml_data.drop('pct_chng_t+2',axis=1)
y = ml_data[['pct_chng_t+2']]
from sklearn.model_selection import train_test_split
# Split X and y into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
###Output
_____no_output_____
###Markdown
1. Linear Regression
###Code
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Init and Fit model
###Code
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
View Model coefficients
###Code
for idx, col_name in enumerate(X_train.columns):
print("The coefficient for {} is {}".format(col_name, regression_model.coef_[0][idx]))
###Output
The coefficient for pct_chng_t-7 is 0.0373419572150715
The coefficient for pct_chng_t0 is -0.18610297949516952
The coefficient for pct_chng_t+1 is 0.04557248164814025
###Markdown
View Model Intercept
###Code
intercept = regression_model.intercept_[0]
print("The intercept for our model is {}".format(intercept))
###Output
The intercept for our model is 0.0009502620742763319
###Markdown
Score model using R squared
###Code
regression_model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Note that it is possible to get a negative R-square for equations that do not contain a constant term. Because R-square is defined as the proportion of variance explained by the fit, if the fit is actually worse than just fitting a horizontal line then R-square is negative. In this case, R-square cannot be interpreted as the square of a correlation. Such situations indicate that a constant term should be added to the model (a small synthetic illustration of a negative R-square is included at the end of this notebook).

Make predictions and get MSE
###Code
from sklearn.metrics import mean_squared_error
import math
y_predict = regression_model.predict(X_test)
regression_model_mse = mean_squared_error(y_predict, y_test)
regression_model_mse
math.sqrt(regression_model_mse)
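###Output
_____no_output_____
###Markdown
As a quick aside on the negative R-square note above: a minimal synthetic illustration (the numbers here are made up) of how R-square goes negative whenever the predictions fit worse than simply predicting the mean of y.
###Code
from sklearn.metrics import r2_score
# Systematically wrong predictions score worse than a mean-only baseline,
# so R-square comes out negative (here it is -3.0).
y_true = [1.0, 2.0, 3.0, 4.0]
bad_preds = [4.0, 3.0, 2.0, 1.0]
r2_score(y_true, bad_preds)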
###Output
_____no_output_____ |
_notebooks/2020-04-18-Zip-and-Unzip-in-Python.ipynb | ###Markdown
"Zip and Unzip in Python"> "A quick demo showing how to zip/unzip files in a Jupyter notebook using Python."- toc: false- branch: master- badges: true- comments: true- author: David Cato- categories: [jupyter, python, quick-demo] Overview:1. select files in directory2. select a sample of files3. write sampled filepaths into csv4. __zip__ sampled files5. __unzip__ sampled files6. read sampled filepaths from csv
###Code
from pathlib import Path
from fastai.vision import get_image_files
import numpy as np
from zipfile import ZipFile
# working directory
path = Path('/home/dc/coronahack/source/nih-chest-xrays')
# source directory containing files to zip
src_dir = path / 'data'
# csv filepath (to be created/overwritten)
csv_dst = path / 'nih-chest-xrays_sample-2000.csv'
# zip filepath (to be created/overwritten)
zip_dst = path / 'nih-chest-xrays_sample-2000.zip'
# unzip directory (to be created/overwritten)
unzip_dst = path / 'sample-2000'
###Output
_____no_output_____
###Markdown
Create Zip. 1. Select files in the specified directory (e.g. all image files in the dir + subdirs)
###Code
files = sorted(get_image_files(src_dir, recurse=True))
len(files), files[:5]
###Output
_____no_output_____
###Markdown
2. Randomly sample `n` files from the list (optional: set a seed)
###Code
n = 2000
seed = np.random.randint(0, 2**32-1)
# seed = 0
np.random.seed(seed)
sample_paths = np.random.choice(files, n, replace=False)
sample_paths
###Output
_____no_output_____
###Markdown
3. Write csv of original file paths into `csv_dst` file
###Code
csv_dst.exists(), csv_dst
np.savetxt(csv_dst, sample_paths.astype(np.str), fmt='%s', delimiter=',')
###Output
_____no_output_____
###Markdown
4. Zip files in list into `zip_dst` file
###Code
zip_dst.exists(), zip_dst
with ZipFile(zip_dst,'w') as zf:
for fn in sample_paths:
zf.write(fn)
###Output
_____no_output_____
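###Markdown
One thing to note about the loop above: `zf.write(fn)` stores each member under the full path it was read from. If you would rather have paths relative to the project folder inside the archive, `ZipFile.write` accepts an `arcname` argument; a small variant sketch (assuming the sampled files all live under `path`, as set up earlier) is:
###Code
# Variant: store archive members relative to `path` instead of their full paths
with ZipFile(zip_dst, 'w') as zf:
    for fn in sample_paths:
        zf.write(fn, arcname=Path(fn).relative_to(path))
###Output
_____no_output_____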
###Markdown
Unzip files. 5. Unzip files into the `unzip_dst` folder
###Code
unzip_dst.mkdir(parents=True, exist_ok=True)
unzip_dst.exists(), unzip_dst
with ZipFile(zip_dst, 'r') as zf:
# zf.printdir() # print zip contents
zf.extractall(unzip_dst)
###Output
_____no_output_____
###Markdown
6. Load csv of original file paths
###Code
csv_dst.exists(), csv_dst
np.loadtxt(csv_dst, dtype=np.str, delimiter=',')
###Output
_____no_output_____ |
docs/notebooks/sequency_quickstart.ipynb | ###Markdown
Sequency Quick Start [Sequency](https://mathworld.wolfram.com/Sequency.html) Analysis
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from hotstepper import Steps,Step
import hotstepper as hs
from hotstepper import Sequency
###Output
_____no_output_____
###Markdown
An exciting new sub-module of HotStepper is called Sequency. While that doesn't sound like much, the module will continue to be developed as refinements to the code and further use cases are shown or explained. In short, sequency analysis is the step function equivalent of Fourier analysis: instead of using sine and cosine functions, the decomposition basis functions are [Walsh Functions](https://en.wikipedia.org/wiki/Walsh_function). As a very quick demonstration of what sequency analysis can do and how it is similar to Fourier analysis, this notebook works through some analysis and investigation use cases to provide some guidance. Also, in the interest of completeness, the Sequency module implements a very basic fast Fourier transform method.
###Code
seq = Sequency()
n,w = seq.walsh_matrix(8)
###Output
_____no_output_____
###Markdown
Walsh Functions As an example of what the binary step versions of the orthgonal Walsh basis functions look like, we plot the first 16 of them below. They in a crude way represent the step function equivalents of Sine and Cosines that can be used to decompose a periodic signal using Fourier analysis. Step functions with repeating patterns can equally be decomposed into Walsh basis functions. Instead of each function having a frequency associated with it, Walsh functions use a Sequency number that represents the number of zero crossings over the range for that function. The Walsh functions below are plotted in order of increasing sequency number.
###Code
fig,ax = plt.subplots(nrows=8,figsize=(16,22))
for i in range(8):
ax[i].step(n,w[i],label = 'Walsh Sequency {}'.format(i));
ax[i].legend()
###Output
_____no_output_____
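###Markdown
As a quick sanity check on that ordering (a sketch which assumes the rows of `w` are the ±1-valued Walsh rows plotted above, in sequency order), we can count the sign changes in each row and confirm they come out as 0 through 7:
###Code
# Count zero crossings (sign changes) per Walsh row; for rows in sequency order
# this should yield [0, 1, 2, 3, 4, 5, 6, 7].
crossings = [int((np.diff(np.sign(row)) != 0).sum()) for row in w]
crossings
###Output
_____no_output_____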
###Markdown
Now, let's say we have a step function and wish to understand the types of step changes that are most common to least common within that function, we can use a number of the existing analysis tools in HotStepper, however a more powerful and general method, especially familiar to anyone who has performed any Fourier analysis to answer similar questions with continuous time series is to use the Sequency sub module of HotStepper and perform a Walsh Sequency Spectrum analysis.
###Code
#Create a sequence of steps that we can analyse
steps_changes = np.random.randint(-10,12,100)
rand_steps = Steps().add_direct(data_start=list(range(1,len(steps_changes))),data_weight = steps_changes)
rand_steps.plot()
###Output
_____no_output_____
###Markdown
Sequency Spectrum We create a dedicated sequency analysis object so, if we wanted to resuse it or create another with different parameters, we can keep everything seperated and organised. As tis sub-module matures, this will be the case, for now there isn't a strong immediate need for this seperation.
###Code
fig, (ax,ax2) = plt.subplots(nrows=2,figsize=(16,10))
# Sequency object to use for analysis
rand_step_seq = Sequency()
n,sp,l = rand_step_seq.sequency_spectrum(rand_steps.step_changes())
ax.set_title('Random Steps')
ax.set_xlabel('Step Keys')
ax.set_ylabel('Value')
rand_steps.plot(ax=ax)
ax2.set_title('Step Change Sequency Spectrum')
ax2.set_xlabel('Sequency Number')
ax2.set_ylabel('Amplitude')
ax2.stem(n,sp)
###Output
_____no_output_____
###Markdown
OK, great, but what does that actually mean? Sequency analysis isn't mainstream, so don't worry if it doesn't click straight away; it seems odd at first. Let's look at, say, the top 5 sequencies present within the analysed steps data.
###Code
top5_indexes = np.argsort(sp)
top5_sequencies = n[top5_indexes][-5:]
top5_weight = sp[top5_indexes][-5:]
n,w = seq.walsh_matrix(l)
fig,ax = plt.subplots(nrows=5,figsize=(16,16))
i = 0
top5_walsh = np.zeros(l)
for seqy in top5_sequencies:
top5_walsh += np.multiply(w[seqy],top5_weight[i])
ax[i].step(n,w[seqy],label = 'Walsh Component {}'.format(seqy))
ax[i].set_xlabel('Sequency Number')
ax[i].set_ylabel('Amplitude')
ax[i].legend();
i +=1
###Output
_____no_output_____
###Markdown
Walsh Denoising. Another typical use case of Fourier transforms is to remove high-frequency noise from a signal by decomposing it into its constituent frequency components and setting the components above a cutoff frequency to zero. With sequency analysis we have similar functionality, except that since we have step data, and we want to retain its nature as steps rather than just smoothing away the details, we can use the denoise method of the Sequency module to remove the higher sequency number components. An explicit example is shown below: we pass in the direct step-change data for denoising, apply a strength parameter to determine how many of the high sequency components are set to zero, and then construct a new Steps object.
###Code
denoise_step_changes_strong = seq.denoise(rand_steps.step_changes(),denoise_strength=0.7,denoise_mode='range')
denoise_step_changes = seq.denoise(rand_steps.step_changes(),denoise_strength=0.2)
rand_steps_denoise_strong = Steps().add_direct(data_start=rand_steps.step_keys(),data_weight = denoise_step_changes_strong)
rand_steps_denoise = Steps().add_direct(data_start=rand_steps.step_keys(),data_weight = denoise_step_changes)
ax = rand_steps.plot(label='Original Data')
rand_steps_denoise.plot(ax=ax,color='g',linewidth=2,label='Light Denoise');
rand_steps_denoise_strong.plot(ax=ax,color='r',linewidth=2,linestyle='-',label='Strong Denoise')
ax.legend();
###Output
_____no_output_____
###Markdown
As another quick example, we can apply the same technique to one of the HotStepper sample datasets. For this tutorial we'll only look at the first two months of data, to better highlight how sequency denoising brings out the trend or typical step behaviour of the data.
###Code
df_vq_samp = pd.read_csv(r'https://raw.githubusercontent.com/TangleSpace/hotstepper-data/master/data/vessel_queue.csv',parse_dates=['enter','leave'],dayfirst=True)
vq_samp = Steps.read_dataframe(df_vq_samp,start='enter',end='leave')
vq_clip = vq_samp.clip(ubound=pd.Timestamp(2020,3,1))
dn = seq.denoise(vq_clip.step_changes(),denoise_mode='range')
vq_clip_dn = Steps().add_direct(data_start=vq_clip.step_keys(convert_keys=True),data_weight=dn)
ax = vq_clip.plot(label='Original Data')
vq_clip_dn.plot(ax=ax,linewidth=3,label='Sequency Denoise')
ax.legend();
###Output
_____no_output_____
###Markdown
As the last item, we can take a look at the sequency spectrum for the vessel queue data. This dataset has a large number of changes and therefore the sequency range goes quite high; however, it does show a number of peaks that are significantly larger than the others, indicating a number of distinct and repeating step change patterns within the data over this period.
###Code
fig, (ax,ax2,ax3) = plt.subplots(nrows=3,figsize=(12,8))
fig.tight_layout(h_pad=4)
# Sequency object to use for analysis
vq_clip_seq = Sequency()
n,sp,l = vq_clip_seq.sequency_spectrum(vq_clip.step_changes())
ax.set_title('Step Change Sequency Spectrum')
ax.set_xlabel('Sequency Number')
ax.set_ylabel('Amplitude')
ax.stem(n,sp)
# number of data points, needed to create the sampling frequency
no_points = vq_clip.step_changes().shape[0]
fr,fsp = vq_clip_seq.frequency_spectrum(vq_clip.step_changes(),2*np.pi*no_points)
ax2.set_title('Step Change Frequency Spectrum')
ax2.set_xlabel('Frequency')
ax2.set_ylabel('Amplitude')
ax2.stem(fr,fsp)
# FFT the steps values instead of the delta changes to see the difference in the spectrum.
frv,fspv = vq_clip_seq.frequency_spectrum(vq_clip.step_values(),2*np.pi*no_points)
ax3.set_title('Steps Value Frequency Spectrum')
ax3.set_xlabel('Frequency')
ax3.set_ylabel('Amplitude')
ax3.stem(frv[1:],fspv[1:]);
###Output
_____no_output_____ |
labs/trees/partitioning-feature-space.ipynb | ###Markdown
Partitioning feature space
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from sklearn import tree
from dtreeviz.trees import *
from dtreeviz.shadow import ShadowDecTree
def show_mse(X,y,max_depth):
t = DecisionTreeRegressor(max_depth=max_depth)
t.fit(X,y)
node2samples = ShadowDecTree.node_samples(t, X)
isleaf = ShadowDecTree.get_node_type(t)
n_node_samples = t.tree_.n_node_samples
mse = mean_squared_error(y, [np.mean(y)]*len(y))
print(f"Root {0:3d} has {n_node_samples[0]:3d} samples with MSE ={mse:6.2f}")
print("-----------------------------------------")
for node in node2samples:
if isleaf[node]:
leafy = y.iloc[node2samples[node]]
my = np.mean(leafy)
mse = mean_squared_error(leafy, [my]*len(leafy))
print(f"Node {node:3d} has {n_node_samples[node]:3d} samples with MSE ={mse:6.2f}")
###Output
_____no_output_____
###Markdown
**Make sure to get the latest dtreeviz:** `pip install -U dtreeviz`

Regression
###Code
df_cars = pd.read_csv("data/cars.csv")
X, y = df_cars[['ENG']], df_cars['MPG']
df_cars.head(3)
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X, y)
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={})
###Output
_____no_output_____
###Markdown
**Q.** What is the MSE between y and predicted $\hat{y} = \overline{y}$?

Hints: You can use function `mean_squared_error(` $y$, $\hat{y}$ `)`; create a vector of length $|y|$ with $\overline{y}$ as elements.

**Solution:** `mean_squared_error(y, [np.mean(y)]*len(y))`, about 60.76.

**Q.** Where would you split this if you could only split once? Set the `split` variable to a reasonable value.
###Code
split = ...
###Output
_____no_output_____
###Markdown
**Solution:** The split location that gets the most pure subregions might be about split = 200 HP because the region to the right has a relatively flat MPG average.

**Alter the rtreeviz_univar() call to show the split with arg show={'splits'}**

**Solution:** `rtreeviz_univar(dt, X, y, feature_names='Horsepower', markersize=5, mean_linewidth=1, target_name='MPG', fontsize=9, show={'splits'})`

**Q.** What are the MSE values for the left, right partitions?

Hints: Get the y values whose `X['ENG']` are less than `split` into `lefty` and those greater than or equal to `split` into `righty`. The split introduces two new children that are leaves until we (possibly) split them; the leaves predict the mean of their samples.
###Code
lefty = ...; mleft = ...
righty = ...; mright = ...
mse_left = ...
mse_right = ...
mse_left, mse_right
###Output
_____no_output_____
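###Markdown
An added aside (a sketch, not part of the original lab): this is essentially the search a regression tree performs at each node, trying every candidate threshold on the feature and keeping the one that minimizes the sample-weighted average MSE of the two children. It reuses `X` and `y` as defined above; the variable names `best`, `mse_l`, `mse_r` are mine.
###Code
# Sketch: brute-force search for the best ENG threshold by weighted child MSE.
best = None
for s in np.unique(X['ENG']):
    left, right = y[X['ENG'] < s], y[X['ENG'] >= s]
    if len(left) == 0 or len(right) == 0:
        continue
    mse_l = mean_squared_error(left, [left.mean()] * len(left))
    mse_r = mean_squared_error(right, [right.mean()] * len(right))
    weighted = (len(left) * mse_l + len(right) * mse_r) / len(y)
    if best is None or weighted < best[1]:
        best = (s, weighted)
print(best)
###Output
_____no_output_____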
###Markdown
Solution Should be (35.68916307096633, 12.770261374699789)lefty = y[X['ENG']<split]righty = y[X['ENG']>=split]mleft = np.mean(lefty)mright = np.mean(righty)mse_left = mean_squared_error(lefty, [mleft]\*len(lefty))mse_right = mean_squared_error(righty, [mright]\*len(righty)) **Q.** Compare the MSE values for overall y and the average of the left, right partition MSEs (which is about 24.2)? SolutionAfter the split the MSE of the children is much lower than before the split, therefore, it is a worthwhile split. **Q.** Set the split value to 100 and recompare MSE values for y, left, and right. SolutionWith split=100, mse_left, mse_right become 33.6 and 41.0. These are still less than the y MSE of 60.7 so worthwhile but not nearly as splitting at 200. Effect of deeper trees Consider the sequence of tree depths 1..6 for horsepower vs MPG.
###Code
X = df_cars[['ENG']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,6, figsize=(14,3), sharey=True)
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=i+1)
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Focusing on the orange horizontal lines, what do you notice as more splits appear? SolutionWith depth 1, model is biased due to coarseness of the approximations (just 2 leaf means). Depth 2 gets much better approximation, so bias is lower. As we add more depth to tree, number of splits increases and these appear to be chasing details of the data, decreasing bias on training set but also hurting generality. **Q.** Consider the MSE for the 4 leaves of a depth 2 tree and 15 leaves of a depth 4 tree. What happens to the average MSE per leaf? What happens to the leaf sizes and how is it related to average MSE?
###Code
show_mse(df_cars[['ENG']], df_cars['MPG'], max_depth=2)
show_mse(df_cars[['ENG']], df_cars['MPG'], max_depth=4)
###Output
Root 0 has 392 samples with MSE = 60.76
-----------------------------------------
Node 27 has 47 samples with MSE = 8.26
Node 26 has 22 samples with MSE = 6.03
Node 30 has 9 samples with MSE = 1.51
Node 29 has 20 samples with MSE = 3.81
Node 11 has 68 samples with MSE = 20.26
Node 19 has 44 samples with MSE = 2.91
Node 8 has 65 samples with MSE = 20.59
Node 20 has 25 samples with MSE = 4.35
Node 7 has 51 samples with MSE = 29.27
Node 5 has 3 samples with MSE = 6.18
Node 14 has 13 samples with MSE = 23.93
Node 4 has 1 samples with MSE = 0.00
Node 22 has 2 samples with MSE = 81.00
Node 12 has 16 samples with MSE = 9.32
Node 23 has 1 samples with MSE = 0.00
Node 15 has 5 samples with MSE = 3.21
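###Markdown
An added sketch (not in the original lab): the sample-weighted average of the leaf MSEs printed above can be computed directly from a fitted tree using sklearn's `apply()`, which maps each row to its leaf. The helper name `avg_leaf_mse` is mine.
###Code
# Sketch: average MSE per record across leaves, for a given depth.
def avg_leaf_mse(X, y, max_depth):
    t = DecisionTreeRegressor(max_depth=max_depth).fit(X, y)
    leaf_ids = t.apply(X)                       # leaf index for every training row
    total = 0.0
    for leaf in np.unique(leaf_ids):
        leafy = y[leaf_ids == leaf]
        total += mean_squared_error(leafy, [leafy.mean()] * len(leafy)) * len(leafy)
    return total / len(y)

print(avg_leaf_mse(df_cars[['ENG']], df_cars['MPG'], max_depth=2),
      avg_leaf_mse(df_cars[['ENG']], df_cars['MPG'], max_depth=4))
###Output
_____no_output_____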
###Markdown
SolutionThe average MSE is much lower as we increase depth because that allows the tree to isolate pure/more-similar regions. This also shrinks leaf size since we are splitting more as the tree deepens. Consider the plot of the CYL feature (num cylinders) vs MPG:
###Code
X = df_cars[['CYL']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,3, figsize=(7,2.5), sharey=True)
depths = [1,2,10]
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=depths[i])
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits','title'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Explain why the graph looks like a bunch of vertical bars. SolutionThe x values are integers and will clump together. Since there are many MPG values at each int, you get vertical clumps of data. **Q.** Why don't we get many more splits for depth 10 vs depth 2? SolutionOnce each unique x value has a "bin", there are no more splits to do. **Q.** Why are the orange predictions bars at the levels they are in the plot? SolutionDecision tree leaves predict the average y for all samples in a leaf. Classification
###Code
wine = load_wine()
df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)
df_wine.head(3)
feature_names = list(wine.feature_names)
class_names = list(wine.target_names)
###Output
_____no_output_____
###Markdown
1 variable
###Code
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Where would you split this (vertically) if you could only split once? SolutionThe split location that gets most pure subregion might be about 1.5 because it nicely carves off the left green samples. **Alter the code to show the split with arg show={'splits'}** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=1)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() **Q.** For max_depth=2, how many splits will we get? Solution3. We get one split for root and then with depth=2, we have 2 children that each get a split. **Q.** Where would you split this graph in that many places? SolutionOnce we carve off the leftmost green, we would want to isolate the blue in between 1.3 and 2.3. The other place to split is not obvious as there is no great choice. (sklearn will add a split point at 1.0) **Alter the code to show max_depth=2** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=2)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() Gini impurity Let's compute the gini impurity for left and right sides for a depth=1 tree that splits flavanoids at 1.3. Here's a function that computes the value:$$I_G = \sum_{i=1}^{k} P_i \sum_{j \neq i}^{k} P_j = \sum_{i=1}^{k} P_i (1 - P_i) = 1 - \sum_{i=1}^{k} P_i^2$$where $k$ is the number of classes and $P_i$ is the probability of seeing class $i$ in our target $y$ (it's the ratio of $\frac{|y[y==i]|}{|y|}$).
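A small worked example (added for illustration): if $y$ holds three classes in proportions $(0.5, 0.25, 0.25)$, then $I_G = 1 - (0.5^2 + 0.25^2 + 0.25^2) = 1 - 0.375 = 0.625$; a pure node gives $I_G = 0$, and a 50/50 binary node gives $I_G = 0.5$.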
###Code
def gini(x):
"See https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity"
_, counts = np.unique(x, return_counts=True)
n = len(x)
return 1 - np.sum( (counts / n)**2 )
###Output
_____no_output_____
###Markdown
**Q.** Using that function, what is the gini impurity for the overall y target? Solutiongini(y) about 0.66 **Get all y values for rows where `df_wine['flavanoids']` is less than 1.3 into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['flavanoids']<1.3]righty = y[df_wine['flavanoids']>=1.3] **Q.** What are the gini values for left and right partitions? Solutiongini(lefty), gini(righty) about 0.27, 0.53 **Q.** What can we conclude about the purity of left and right? Also, compare to gini for all y values. SolutionLeft partition is much more pure than right but right is still more pure than original gini(y). We can conclude that the split is worthwhile as the partition would let us give more accurate predictions. 2 variables
###Code
X = df_wine[['alcohol','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ct = ctreeviz_bivar(dt, X, y,
feature_names = ['alcohol','flavanoid'], class_names=class_names,
                    target_name='Wine',
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax
)
###Output
_____no_output_____
###Markdown
**Q.** Which variable and split point would you choose if you could only split once? SolutionBecause the blue dots are spread vertically, a horizontal split won't be very good. Hence, we should choose variable alcohol. The best split will carve off the blue dots, leaving the yellow and green mixed up. A split at alcohol=12.7 seems pretty good. **Modify the code to view the splits and compare your answer** **Q.** Which variable and split points would you choose next for depth=2? SolutionOnce we carve off most of the blue vertically, we should separate the yellow by choosing flavanoid=1.7 to split horizontally. NOTICE, however, that the 2nd split will not be across the entire graph since we are splitting the region on the right. Splitting on the left can be at flavanoid=1 so we isolate the green from blue on left. **Modify the code to view the splits for depth=2 and compare your answer** Gini Let's examine gini impurity for a different pair of variables.
###Code
X = df_wine[['proline','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ctreeviz_bivar(dt, X, y,
feature_names = ['proline','flavanoid'],
class_names=class_names,
               target_name='Wine',
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Get all y values for rows where the split var is less than the split value into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['proline']<750]righty = y[df_wine['proline']>=750] **Print out the gini for y, lefty, righty** Solutiongini(y), gini(lefty), gini(righty) Training a single tree and print out the training accuracy (num correct / total)
###Code
t = DecisionTreeClassifier()
t.fit(df_wine, y)
accuracy_score(y, t.predict(df_wine))
###Output
_____no_output_____
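###Markdown
An added aside (sketch): an unconstrained decision tree can memorize the training data, so training accuracy is usually close to 1.0 and is a poor guide to quality; a held-out split gives a more honest estimate. This reuses `df_wine` and `y` from above; the names `t2`, `X_tr`, `X_te` are mine.
###Code
# Sketch: compare training accuracy with accuracy on a held-out test set.
X_tr, X_te, y_tr, y_te = train_test_split(df_wine, y, test_size=0.3, random_state=1)
t2 = DecisionTreeClassifier().fit(X_tr, y_tr)
print(accuracy_score(y_tr, t2.predict(X_tr)), accuracy_score(y_te, t2.predict(X_te)))
###Output
_____no_output_____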
###Markdown
Take a look at the feature importance:
###Code
from rfpimp import *
I = importances(t, df_wine, y)
plot_importances(I)
###Output
_____no_output_____
###Markdown
Partitioning feature space
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from sklearn import tree
from dtreeviz.trees import *
from dtreeviz.models.shadow_decision_tree import ShadowDecTree
def show_mse_leaves(X,y,max_depth):
t = DecisionTreeRegressor(max_depth=max_depth)
t.fit(X,y)
shadow = ShadowDecTree.get_shadow_tree(t, X, y, feature_names=['sqfeet'], target_name='rent')
root, leaves, internal = shadow._get_tree_nodes()
# node2samples = shadow._get_tree_nodes()_samples()
# isleaf = shadow.get_node_type(t)
n_node_samples = t.tree_.n_node_samples
    mse = 99.9  # placeholder for the root; mean_squared_error(y, [np.mean(y)]*len(y)) would give the true root MSE
print(f"Root {0:3d} has {n_node_samples[0]:3d} samples with MSE ={mse:6.2f}")
print("-----------------------------------------")
avg_mse_per_record = 0.0
node2samples = shadow.get_node_samples()
for node in leaves:
leafy = y[node2samples[node.id]]
n = len(leafy)
mse = mean_squared_error(leafy, [np.mean(leafy)]*n)
avg_mse_per_record += mse * n
print(f"Node {node.id:3d} has {n_node_samples[node.id]:3d} samples with MSE ={mse:6.2f}")
avg_mse_per_record /= len(y)
print(f"Average MSE per record is {avg_mse_per_record:.1f}")
###Output
_____no_output_____
###Markdown
**Make sure to get latest dtreeviz**`pip install -U dtreeviz` Regression
###Code
df_cars = pd.read_csv("data/cars.csv")
X, y = df_cars[['ENG']], df_cars['MPG']
df_cars.head(3)
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X, y)
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={})
###Output
_____no_output_____
###Markdown
**Q.** What is the MSE between y and predicted $\hat{y} = \overline{y}$?Hints: You can use function `mean_squared_error(` $y$,$\hat{y}$ `)`; create a vector of length $|y|$ with $\overline{y}$ as elements. Solutionmean_squared_error(y, [np.mean(y)]*len(y)) about 60.76 **Q.** Where would you split this if you could only split once? Set the `split` variable to a reasonable value.
###Code
split = ...
###Output
_____no_output_____
###Markdown
SolutionThe split location that gets most pure subregion might be about split = 200 HP because the region to the right has a relatively flat MPG average. **Alter the rtreeviz_univar() call to show the split with arg show={'splits'}** Solutionrtreeviz_univar(dt, X, y, feature_names='Horsepower', markersize=5, mean_linewidth=1, target_name='MPG', fontsize=9, show={'splits'}) **Q.** What are the MSE values for the left, right partitions?Hints: Get the y values whose `X['ENG']` are less than `split` into `lefty` and those greater than or equal to `split` into `righty`. The split introduces two new children that are leaves until we (possibly) split them; the leaves predict the mean of their samples.
###Code
lefty = ...; mleft = ...
righty = ...; mright = ...
mse_left = ...
mse_right = ...
mse_left, mse_right
###Output
_____no_output_____
###Markdown
Solution Should be (35.68916307096633, 12.770261374699789)lefty = y[X['ENG']<split]righty = y[X['ENG']>=split]mleft = np.mean(lefty)mright = np.mean(righty)mse_left = mean_squared_error(lefty, [mleft]\*len(lefty))mse_right = mean_squared_error(righty, [mright]\*len(righty)) **Q.** Compare the MSE values for overall y and the average of the left, right partition MSEs (which is about 24.2)? SolutionAfter the split the MSE of the children is much lower than before the split, therefore, it is a worthwhile split. **Q.** Set the split value to 100 and recompare MSE values for y, left, and right. SolutionWith split=100, mse_left, mse_right become 33.6 and 41.0. These are still less than the y MSE of 60.7 so worthwhile but not nearly as splitting at 200. Effect of deeper trees Consider the sequence of tree depths 1..6 for horsepower vs MPG.
###Code
X = df_cars[['ENG']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,6, figsize=(14,3), sharey=True)
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=i+1)
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
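###Markdown
An added sketch: the same depth sweep scored numerically, comparing training MSE with MSE on a held-out split. Deeper trees keep lowering the training error, while the held-out error typically stops improving, or worsens, once the tree starts chasing noise. Uses the `X`, `y` arrays defined above; the split and loop variable names are mine.
###Code
# Sketch: train vs. held-out MSE for depths 1..6.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
for d in range(1, 7):
    t = DecisionTreeRegressor(max_depth=d).fit(X_tr, y_tr)
    print(d, mean_squared_error(y_tr, t.predict(X_tr)), mean_squared_error(y_te, t.predict(X_te)))
###Output
_____no_output_____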
###Markdown
**Q.** Focusing on the orange horizontal lines, what do you notice as more splits appear? SolutionWith depth 1, model is biased due to coarseness of the approximations (just 2 leaf means). Depth 2 gets much better approximation, so bias is lower. As we add more depth to tree, number of splits increases and these appear to be chasing details of the data, decreasing bias on training set but also hurting generality. **Q.** Consider the MSE for the 4 leaves of a depth 2 tree and 15 leaves of a depth 4 tree. What happens to the average MSE per leaf? What happens to the leaf sizes and how is it related to average MSE?
###Code
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=2)
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=4)
###Output
Root 0 has 392 samples with MSE = 99.90
-----------------------------------------
Node 4 has 1 samples with MSE = 0.00
Node 5 has 3 samples with MSE = 6.18
Node 7 has 51 samples with MSE = 29.27
Node 8 has 65 samples with MSE = 20.59
Node 11 has 68 samples with MSE = 20.26
Node 12 has 16 samples with MSE = 9.32
Node 14 has 13 samples with MSE = 23.93
Node 15 has 5 samples with MSE = 3.21
Node 19 has 44 samples with MSE = 2.91
Node 20 has 25 samples with MSE = 4.35
Node 22 has 2 samples with MSE = 81.00
Node 23 has 1 samples with MSE = 0.00
Node 26 has 22 samples with MSE = 6.03
Node 27 has 47 samples with MSE = 8.26
Node 29 has 20 samples with MSE = 3.81
Node 30 has 9 samples with MSE = 1.51
Average MSE per record is 14.6
###Markdown
SolutionThe average MSE is much lower as we increase depth because that allows the tree to isolate pure/more-similar regions. This also shrinks leaf size since we are splitting more as the tree deepens. Consider the plot of the CYL feature (num cylinders) vs MPG:
###Code
X = df_cars[['CYL']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,3, figsize=(7,2.5), sharey=True)
depths = [1,2,10]
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=depths[i])
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits','title'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
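###Markdown
An added check: CYL takes only a handful of integer values, so a tree built on this single feature can have at most that many leaves no matter how deep it is allowed to grow, which is why the depth-10 panel looks almost identical to depth 2.
###Code
# Sketch: the distinct CYL values bound the number of possible leaves.
print(np.unique(df_cars['CYL']))
###Output
_____no_output_____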
###Markdown
**Q.** Explain why the graph looks like a bunch of vertical bars. SolutionThe x values are integers and will clump together. Since there are many MPG values at each int, you get vertical clumps of data. **Q.** Why don't we get many more splits for depth 10 vs depth 2? SolutionOnce each unique x value has a "bin", there are no more splits to do. **Q.** Why are the orange predictions bars at the levels they are in the plot? SolutionDecision tree leaves predict the average y for all samples in a leaf. Classification
###Code
wine = load_wine()
df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)
df_wine.head(3)
feature_names = list(wine.feature_names)
class_names = list(wine.target_names)
###Output
_____no_output_____
###Markdown
1 variable
###Code
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Where would you split this (vertically) if you could only split once? SolutionThe split location that gets most pure subregion might be about 1.5 because it nicely carves off the left green samples. **Alter the code to show the split with arg show={'splits'}** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=1)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() **Q.** For max_depth=2, how many splits will we get? Solution3. We get one split for root and then with depth=2, we have 2 children that each get a split. **Q.** Where would you split this graph in that many places? SolutionOnce we carve off the leftmost green, we would want to isolate the blue in between 1.3 and 2.3. The other place to split is not obvious as there is no great choice. (sklearn will add a split point at 1.0) **Alter the code to show max_depth=2** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=2)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() Gini impurity Let's compute the gini impurity for left and right sides for a depth=1 tree that splits flavanoids at 1.3. Here's a function that computes the value:$$Gini({\bf p}) = \sum_{i=1}^{k} p_i \left[ \sum_{j \ne i}^k p_j \right] = \sum_{i=1}^{k} p_i (1 - p_i) = 1 - \sum_{i=1}^{k} p_i^2$$where $p_i = \frac{|y[y==i]|}{|y|}$. Since $\sum_{j \ne i}^k p_j$ is the probability of "not $p_i$", we can summarize that as just $1-p_i$. The gini value is then computing $p_i$ times "not $p_i$" for $k$ classes. Value $p_i$ is the probability of seeing class $i$ in a list of target values, $y$.
###Code
def gini(y):
"""
Compute gini impurity from y vector of class values (from k unique values).
    Result is in range 0..(k-1)/k inclusive; binary range is 0..1/2.
    See https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity
"""
_, counts = np.unique(y, return_counts=True)
p = counts / len(y)
return 1 - np.sum( p**2 )
###Output
_____no_output_____
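###Markdown
Before applying it to the wine data, here is a quick sanity check of `gini()` on toy label vectors (an added illustration; the arrays are made up, not lab data).
###Code
# Sketch: gini impurity on a pure node, a 50/50 binary node, and three balanced classes.
print(gini(np.array([0, 0, 0, 0])))   # pure -> 0.0
print(gini(np.array([0, 1, 0, 1])))   # 50/50 binary -> 0.5
print(gini(np.array([0, 1, 2])))      # 3 balanced classes -> 1 - 3*(1/3)**2 = 0.667
###Output
_____no_output_____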
###Markdown
**Q.** Using that function, what is the gini impurity for the overall y target? Solutiongini(y) about 0.66 **Get all y values for rows where `df_wine['flavanoids']` is less than 1.3 into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['flavanoids']<1.3]righty = y[df_wine['flavanoids']>=1.3] **Q.** What are the gini values for left and right partitions? Solutiongini(lefty), gini(righty) about 0.27, 0.53 **Q.** What can we conclude about the purity of left and right? Also, compare to gini for all y values. SolutionLeft partition is much more pure than right but right is still more pure than original gini(y). We can conclude that the split is worthwhile as the partition would let us give more accurate predictions. 2 variables
###Code
X = df_wine[['alcohol','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ct = ctreeviz_bivar(dt, X, y,
feature_names = ['alcohol','flavanoid'], class_names=class_names,
                    target_name='Wine',
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax
)
###Output
_____no_output_____
###Markdown
**Q.** Which variable and split point would you choose if you could only split once? SolutionBecause the blue dots are spread vertically, a horizontal split won't be very good. Hence, we should choose variable alcohol. The best split will carve off the blue dots, leaving the yellow and green mixed up. A split at alcohol=12.7 seems pretty good. **Modify the code to view the splits and compare your answer** **Q.** Which variable and split points would you choose next for depth=2? SolutionOnce we carve off most of the blue vertically, we should separate the yellow by choosing flavanoid=1.7 to split horizontally. NOTICE, however, that the 2nd split will not be across the entire graph since we are splitting the region on the right. Splitting on the left can be at flavanoid=1 so we isolate the green from blue on left. **Modify the code to view the splits for depth=2 and compare your answer** Gini Let's examine gini impurity for a different pair of variables.
###Code
X = df_wine[['proline','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ctreeviz_bivar(dt, X, y,
feature_names = ['proline','flavanoid'],
class_names=class_names,
               target_name='Wine',
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Get all y values for rows where the split var is less than the split value into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['proline']<750]righty = y[df_wine['proline']>=750] **Print out the gini for y, lefty, righty** Solutiongini(y), gini(lefty), gini(righty) Training a single tree and print out the training accuracy (num correct / total)
###Code
t = DecisionTreeClassifier()
t.fit(df_wine, y)
accuracy_score(y, t.predict(df_wine))
###Output
_____no_output_____
###Markdown
Take a look at the feature importance:
###Code
from rfpimp import *
I = importances(t, df_wine, y)
plot_importances(I)
###Output
_____no_output_____
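###Markdown
For comparison (an added aside), sklearn also stores impurity-based importances on the fitted tree itself; they are computed differently from the rfpimp importances plotted above (which are permutation-based), so the rankings need not match exactly.
###Code
# Sketch: impurity-based (gini) feature importances from the fitted tree.
print(pd.Series(t.feature_importances_, index=df_wine.columns).sort_values(ascending=False).head())
###Output
_____no_output_____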
###Markdown
Partitioning feature space **Make sure to get latest dtreeviz**
###Code
! pip install -q -U dtreeviz
! pip install -q graphviz==0.17 # 0.18 deletes the `run` func I need
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from sklearn import tree
from dtreeviz.trees import *
from dtreeviz.models.shadow_decision_tree import ShadowDecTree
def show_mse_leaves(X,y,max_depth):
t = DecisionTreeRegressor(max_depth=max_depth)
t.fit(X,y)
shadow = ShadowDecTree.get_shadow_tree(t, X, y, feature_names=['sqfeet'], target_name='rent')
root, leaves, internal = shadow._get_tree_nodes()
# node2samples = shadow._get_tree_nodes()_samples()
# isleaf = shadow.get_node_type(t)
n_node_samples = t.tree_.n_node_samples
    mse = 99.9  # placeholder for the root; mean_squared_error(y, [np.mean(y)]*len(y)) would give the true root MSE
print(f"Root {0:3d} has {n_node_samples[0]:3d} samples with MSE ={mse:6.2f}")
print("-----------------------------------------")
avg_mse_per_record = 0.0
node2samples = shadow.get_node_samples()
for node in leaves:
leafy = y[node2samples[node.id]]
n = len(leafy)
mse = mean_squared_error(leafy, [np.mean(leafy)]*n)
avg_mse_per_record += mse * n
print(f"Node {node.id:3d} has {n_node_samples[node.id]:3d} samples with MSE ={mse:6.2f}")
avg_mse_per_record /= len(y)
print(f"Average MSE per record is {avg_mse_per_record:.1f}")
###Output
_____no_output_____
###Markdown
Regression
###Code
df_cars = pd.read_csv("data/cars.csv")
X, y = df_cars[['ENG']], df_cars['MPG']
df_cars.head(3)
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X, y)
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={})
###Output
_____no_output_____
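###Markdown
An added aside: the split the depth-1 tree actually chose can be read straight off the fitted estimator; feature index 0 refers to the single column ENG.
###Code
# Sketch: root split feature index and threshold chosen by sklearn.
print(dt.tree_.feature[0], dt.tree_.threshold[0])
###Output
_____no_output_____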
###Markdown
**Q.** What is the MSE between y and predicted $\hat{y} = \overline{y}$?Hints: You can use function `mean_squared_error(` $y$,$\hat{y}$ `)`; create a vector of length $|y|$ with $\overline{y}$ as elements.
###Code
mean_squared_error(y,np.repeat(y.mean(),len(y)))
###Output
_____no_output_____
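###Markdown
An added aside: the mean is the constant prediction that minimizes MSE, which is why tree leaves predict the mean of their samples; any other constant, such as the median, gives an MSE at least as large.
###Code
# Sketch: MSE of predicting the mean everywhere vs. the median everywhere.
print(mean_squared_error(y, np.repeat(y.mean(), len(y))),
      mean_squared_error(y, np.repeat(y.median(), len(y))))
###Output
_____no_output_____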
###Markdown
Solutionmean_squared_error(y, [np.mean(y)]*len(y)) about 60.76 **Q.** Where would you split this if you could only split once? Set the `split` variable to a reasonable value.
###Code
split = ...
###Output
_____no_output_____
###Markdown
SolutionThe split location that gets most pure subregion might be about split = 200 HP because the region to the right has a relatively flat MPG average. **Alter the rtreeviz_univar() call to show the split with arg show={'splits'}** Solutionrtreeviz_univar(dt, X, y, feature_names='Horsepower', markersize=5, mean_linewidth=1, target_name='MPG', fontsize=9, show={'splits'}) **Q.** What are the MSE values for the left, right partitions?Hints: Get the y values whose `X['ENG']` are less than `split` into `lefty` and those greater than or equal to `split` into `righty`. The split introduces two new children that are leaves until we (possibly) split them; the leaves predict the mean of their samples.
###Code
lefty = ...; mleft = ...
righty = ...; mright = ...
mse_left = ...
mse_right = ...
mse_left, mse_right
###Output
_____no_output_____
###Markdown
Solution Should be (35.68916307096633, 12.770261374699789)lefty = y[X['ENG']<split]righty = y[X['ENG']>=split]mleft = np.mean(lefty)mright = np.mean(righty)mse_left = mean_squared_error(lefty, [mleft]\*len(lefty))mse_right = mean_squared_error(righty, [mright]\*len(righty)) **Q.** Compare the MSE values for overall y and the average of the left, right partition MSEs (which is about 24.2)? SolutionAfter the split the MSE of the children is much lower than before the split, therefore, it is a worthwhile split. **Q.** Set the split value to 100 and recompare MSE values for y, left, and right. SolutionWith split=100, mse_left, mse_right become 33.6 and 41.0. These are still less than the y MSE of 60.7 so worthwhile but not nearly as splitting at 200. Effect of deeper trees Consider the sequence of tree depths 1..6 for horsepower vs MPG.
###Code
X = df_cars[['ENG']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,6, figsize=(14,3), sharey=True)
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=i+1)
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Focusing on the orange horizontal lines, what do you notice as more splits appear? SolutionWith depth 1, model is biased due to coarseness of the approximations (just 2 leaf means). Depth 2 gets much better approximation, so bias is lower. As we add more depth to tree, number of splits increases and these appear to be chasing details of the data, decreasing bias on training set but also hurting generality. **Q.** Consider the MSE for the 4 leaves of a depth 2 tree and 15 leaves of a depth 4 tree. What happens to the average MSE per leaf? What happens to the leaf sizes and how is it related to average MSE?
###Code
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=2)
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=4)
###Output
Root 0 has 392 samples with MSE = 99.90
-----------------------------------------
Node 4 has 1 samples with MSE = 0.00
Node 5 has 3 samples with MSE = 6.18
Node 7 has 51 samples with MSE = 29.27
Node 8 has 65 samples with MSE = 20.59
Node 11 has 68 samples with MSE = 20.26
Node 12 has 16 samples with MSE = 9.32
Node 14 has 13 samples with MSE = 23.93
Node 15 has 5 samples with MSE = 3.21
Node 19 has 44 samples with MSE = 2.91
Node 20 has 25 samples with MSE = 4.35
Node 22 has 2 samples with MSE = 81.00
Node 23 has 1 samples with MSE = 0.00
Node 26 has 22 samples with MSE = 6.03
Node 27 has 47 samples with MSE = 8.26
Node 29 has 20 samples with MSE = 3.81
Node 30 has 9 samples with MSE = 1.51
Average MSE per record is 14.6
###Markdown
SolutionThe average MSE is much lower as we increase depth because that allows the tree to isolate pure/more-similar regions. This also shrinks leaf size since we are splitting more as the tree deepens. Consider the plot of the CYL feature (num cylinders) vs MPG:
###Code
X = df_cars[['CYL']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,3, figsize=(7,2.5), sharey=True)
depths = [1,2,10]
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=depths[i])
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits','title'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Explain why the graph looks like a bunch of vertical bars. SolutionThe x values are integers and will clump together. Since there are many MPG values at each int, you get vertical clumps of data. **Q.** Why don't we get many more splits for depth 10 vs depth 2? SolutionOnce each unique x value has a "bin", there are no more splits to do. **Q.** Why are the orange predictions bars at the levels they are in the plot? SolutionDecision tree leaves predict the average y for all samples in a leaf. Classification
###Code
wine = load_wine()
df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)
df_wine.head(3)
feature_names = list(wine.feature_names)
class_names = list(wine.target_names)
###Output
_____no_output_____
###Markdown
1 variable
###Code
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Where would you split this (vertically) if you could only split once? SolutionThe split location that gets most pure subregion might be about 1.5 because it nicely carves off the left green samples. **Alter the code to show the split with arg show={'splits'}** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=1)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() **Q.** For max_depth=2, how many splits will we get? Solution3. We get one split for root and then with depth=2, we have 2 children that each get a split. **Q.** Where would you split this graph in that many places? SolutionOnce we carve off the leftmost green, we would want to isolate the blue in between 1.3 and 2.3. The other place to split is not obvious as there is no great choice. (sklearn will add a split point at 1.0) **Alter the code to show max_depth=2** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=2)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() Gini impurity Let's compute the gini impurity for left and right sides for a depth=1 tree that splits flavanoids at 1.3. Here's a function that computes the value:$$Gini({\bf p}) = \sum_{i=1}^{k} p_i \left[ \sum_{j \ne i}^k p_j \right] = \sum_{i=1}^{k} p_i (1 - p_i) = 1 - \sum_{i=1}^{k} p_i^2$$where $p_i = \frac{|y[y==i]|}{|y|}$. Since $\sum_{j \ne i}^k p_j$ is the probability of "not $p_i$", we can summarize that as just $1-p_i$. The gini value is then computing $p_i$ times "not $p_i$" for $k$ classes. Value $p_i$ is the probability of seeing class $i$ in a list of target values, $y$.
###Code
def gini(y):
"""
Compute gini impurity from y vector of class values (from k unique values).
    Result is in range 0..(k-1)/k inclusive; binary range is 0..1/2.
    See https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity
"""
_, counts = np.unique(y, return_counts=True)
p = counts / len(y)
return 1 - np.sum( p**2 )
###Output
_____no_output_____
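###Markdown
A quick added check that the two algebraic forms in the formula above agree: for the wine class proportions, $\sum_i p_i(1-p_i)$ and $1 - \sum_i p_i^2$ give the same number.
###Code
# Sketch: verify the identity on the wine target's class proportions.
_, counts = np.unique(wine.target, return_counts=True)
p = counts / len(wine.target)
print(np.sum(p * (1 - p)), 1 - np.sum(p**2))
###Output
_____no_output_____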
###Markdown
**Q.** Using that function, what is the gini impurity for the overall y target? Solutiongini(y) about 0.66 **Get all y values for rows where `df_wine['flavanoids']` is less than 1.3 into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['flavanoids']<1.3]righty = y[df_wine['flavanoids']>=1.3] **Q.** What are the gini values for left and right partitions? Solutiongini(lefty), gini(righty) about 0.27, 0.53 **Q.** What can we conclude about the purity of left and right? Also, compare to gini for all y values. SolutionLeft partition is much more pure than right but right is still more pure than original gini(y). We can conclude that the split is worthwhile as the partition would let us give more accurate predictions. 2 variables
###Code
X = df_wine[['alcohol','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ct = ctreeviz_bivar(dt, X, y,
feature_names = ['alcohol','flavanoid'], class_names=class_names,
                    target_name='Wine',
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax
)
###Output
_____no_output_____
###Markdown
**Q.** Which variable and split point would you choose if you could only split once? SolutionBecause the blue dots are spread vertically, a horizontal split won't be very good. Hence, we should choose variable alcohol. The best split will carve off the blue dots, leaving the yellow and green mixed up. A split at alcohol=12.7 seems pretty good. **Modify the code to view the splits and compare your answer** **Q.** Which variable and split points would you choose next for depth=2? SolutionOnce we carve off most of the blue vertically, we should separate the yellow by choosing flavanoid=1.7 to split horizontally. NOTICE, however, that the 2nd split will not be across the entire graph since we are splitting the region on the right. Splitting on the left can be at flavanoid=1 so we isolate the green from blue on left. **Modify the code to view the splits for depth=2 and compare your answer** Gini Let's examine gini impurity for a different pair of variables.
###Code
X = df_wine[['proline','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ctreeviz_bivar(dt, X, y,
feature_names = ['proline','flavanoid'],
class_names=class_names,
               target_name='Wine',
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Get all y values for rows where the split var is less than the split value into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['proline']<750]righty = y[df_wine['proline']>=750] **Print out the gini for y, lefty, righty** Solutiongini(y), gini(lefty), gini(righty) Training a single tree and print out the training accuracy (num correct / total)
###Code
t = DecisionTreeClassifier()
t.fit(df_wine, y)
accuracy_score(y, t.predict(df_wine))
###Output
_____no_output_____
###Markdown
Take a look at the feature importance:
###Code
from rfpimp import *
I = importances(t, df_wine, y)
plot_importances(I)
###Output
_____no_output_____
###Markdown
Partitioning feature space
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from sklearn import tree
from dtreeviz.trees import *
from dtreeviz.models.shadow_decision_tree import ShadowDecTree
def show_mse_leaves(X,y,max_depth):
t = DecisionTreeRegressor(max_depth=max_depth)
t.fit(X,y)
shadow = ShadowDecTree.get_shadow_tree(t, X, y, feature_names=['sqfeet'], target_name='rent')
root, leaves, internal = shadow._get_tree_nodes()
# node2samples = shadow._get_tree_nodes()_samples()
# isleaf = shadow.get_node_type(t)
n_node_samples = t.tree_.n_node_samples
    mse = 99.9  # placeholder for the root; mean_squared_error(y, [np.mean(y)]*len(y)) would give the true root MSE
print(f"Root {0:3d} has {n_node_samples[0]:3d} samples with MSE ={mse:6.2f}")
print("-----------------------------------------")
avg_mse_per_record = 0.0
node2samples = shadow.get_node_samples()
for node in leaves:
leafy = y[node2samples[node.id]]
n = len(leafy)
mse = mean_squared_error(leafy, [np.mean(leafy)]*n)
avg_mse_per_record += mse * n
print(f"Node {node.id:3d} has {n_node_samples[node.id]:3d} samples with MSE ={mse:6.2f}")
avg_mse_per_record /= len(y)
print(f"Average MSE per record is {avg_mse_per_record:.1f}")
###Output
_____no_output_____
###Markdown
**Make sure to get latest dtreeviz**`pip install -U dtreeviz` Regression
###Code
df_cars = pd.read_csv("data/cars.csv")
X, y = df_cars[['ENG']], df_cars['MPG']
df_cars.head(3)
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X, y)
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={})
###Output
_____no_output_____
###Markdown
**Q.** What is the MSE between y and predicted $\hat{y} = \overline{y}$?Hints: You can use function `mean_squared_error(` $y$,$\hat{y}$ `)`; create a vector of length $|y|$ with $\overline{y}$ as elements. Solutionmean_squared_error(y, [np.mean(y)]*len(y)) about 60.76 **Q.** Where would you split this if you could only split once? Set the `split` variable to a reasonable value.
###Code
split = ...
###Output
_____no_output_____
###Markdown
SolutionThe split location that gets most pure subregion might be about split = 200 HP because the region to the right has a relatively flat MPG average. **Alter the rtreeviz_univar() call to show the split with arg show={'splits'}** Solutionrtreeviz_univar(dt, X, y, feature_names='Horsepower', markersize=5, mean_linewidth=1, target_name='MPG', fontsize=9, show={'splits'}) **Q.** What are the MSE values for the left, right partitions?Hints: Get the y values whose `X['ENG']` are less than `split` into `lefty` and those greater than or equal to `split` into `righty`. The split introduces two new children that are leaves until we (possibly) split them; the leaves predict the mean of their samples.
###Code
lefty = ...; mleft = ...
righty = ...; mright = ...
mse_left = ...
mse_right = ...
mse_left, mse_right
###Output
_____no_output_____
###Markdown
Solution Should be (35.68916307096633, 12.770261374699789)lefty = y[X['ENG']<split]righty = y[X['ENG']>=split]mleft = np.mean(lefty)mright = np.mean(righty)mse_left = mean_squared_error(lefty, [mleft]\*len(lefty))mse_right = mean_squared_error(righty, [mright]\*len(righty)) **Q.** Compare the MSE values for overall y and the average of the left, right partition MSEs (which is about 24.2)? SolutionAfter the split the MSE of the children is much lower than before the split, therefore, it is a worthwhile split. **Q.** Set the split value to 100 and recompare MSE values for y, left, and right. SolutionWith split=100, mse_left, mse_right become 33.6 and 41.0. These are still less than the y MSE of 60.7 so worthwhile but not nearly as splitting at 200. Effect of deeper trees Consider the sequence of tree depths 1..6 for horsepower vs MPG.
###Code
X = df_cars[['ENG']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,6, figsize=(14,3), sharey=True)
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=i+1)
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Focusing on the orange horizontal lines, what do you notice as more splits appear? SolutionWith depth 1, model is biased due to coarseness of the approximations (just 2 leaf means). Depth 2 gets much better approximation, so bias is lower. As we add more depth to tree, number of splits increases and these appear to be chasing details of the data, decreasing bias on training set but also hurting generality. **Q.** Consider the MSE for the 4 leaves of a depth 2 tree and 15 leaves of a depth 4 tree. What happens to the average MSE per leaf? What happens to the leaf sizes and how is it related to average MSE?
###Code
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=2)
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=4)
###Output
_____no_output_____
###Markdown
SolutionThe average MSE is much lower as we increase depth because that allows the tree to isolate pure/more-similar regions. This also shrinks leaf size since we are splitting more as the tree deepens. Consider the plot of the CYL feature (num cylinders) vs MPG:
###Code
X = df_cars[['CYL']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,3, figsize=(7,2.5), sharey=True)
depths = [1,2,10]
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=depths[i])
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits','title'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Explain why the graph looks like a bunch of vertical bars. SolutionThe x values are integers and will clump together. Since there are many MPG values at each int, you get vertical clumps of data. **Q.** Why don't we get many more splits for depth 10 vs depth 2? SolutionOnce each unique x value has a "bin", there are no more splits to do. **Q.** Why are the orange predictions bars at the levels they are in the plot? SolutionDecision tree leaves predict the average y for all samples in a leaf. Classification
###Code
wine = load_wine()
df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)
df_wine.head(3)
feature_names = list(wine.feature_names)
class_names = list(wine.target_names)
###Output
_____no_output_____
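###Markdown
An added aside: the class balance of the wine target and the corresponding gini impurity of the unsplit dataset; this is the "about 0.66" figure quoted in the exercises below.
###Code
# Sketch: class proportions and overall gini impurity of the unsplit target.
p = np.bincount(wine.target) / len(wine.target)
print(p, 1 - np.sum(p**2))
###Output
_____no_output_____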
###Markdown
1 variable
###Code
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Where would you split this (vertically) if you could only split once? SolutionThe split location that gets most pure subregion might be about 1.5 because it nicely carves off the left green samples. **Alter the code to show the split with arg show={'splits'}** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=1)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() **Q.** For max_depth=2, how many splits will we get? Solution3. We get one split for root and then with depth=2, we have 2 children that each get a split. **Q.** Where would you split this graph in that many places? SolutionOnce we carve off the leftmost green, we would want to isolate the blue in between 1.3 and 2.3. The other place to split is not obvious as there is no great choice. (sklearn will add a split point at 1.0) **Alter the code to show max_depth=2** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=2)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() Gini impurity Let's compute the gini impurity for left and right sides for a depth=1 tree that splits flavanoids at 1.3. Here's a function that computes the value:$$Gini({\bf p}) = \sum_{i=1}^{k} p_i \left[ \sum_{j \ne i}^k p_j \right] = \sum_{i=1}^{k} p_i (1 - p_i) = 1 - \sum_{i=1}^{k} p_i^2$$where $p_i = \frac{|y[y==i]|}{|y|}$. Since $\sum_{j \ne i}^k p_j$ is the probability of "not $p_i$", we can summarize that as just $1-p_i$. The gini value is then computing $p_i$ times "not $p_i$" for $k$ classes. Value $p_i$ is the probability of seeing class $i$ in a list of target values, $y$.
###Code
def gini(y):
"""
Compute gini impurity from y vector of class values (from k unique values).
    Result is in range 0..(k-1)/k inclusive; binary range is 0..1/2.
    See https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity
"""
_, counts = np.unique(y, return_counts=True)
p = counts / len(y)
return 1 - np.sum( p**2 )
###Output
_____no_output_____
###Markdown
**Q.** Using that function, what is the gini impurity for the overall y target? Solutiongini(y) about 0.66 **Get all y values for rows where `df_wine['flavanoids']` is less than 1.3 into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['flavanoids']<1.3]righty = y[df_wine['flavanoids']>=1.3] **Q.** What are the gini values for left and right partitions? Solutiongini(lefty), gini(righty) about 0.27, 0.53 **Q.** What can we conclude about the purity of left and right? Also, compare to gini for all y values. SolutionLeft partition is much more pure than right but right is still more pure than original gini(y). We can conclude that the split is worthwhile as the partition would let us give more accurate predictions. 2 variables
###Code
X = df_wine[['alcohol','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ct = ctreeviz_bivar(dt, X, y,
feature_names = ['alcohol','flavanoid'], class_names=class_names,
                    target_name='Wine',
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax
)
###Output
_____no_output_____
###Markdown
**Q.** Which variable and split point would you choose if you could only split once? SolutionBecause the blue dots are spread vertically, a horizontal split won't be very good. Hence, we should choose variable alcohol. The best split will carve off the blue dots, leaving the yellow and green mixed up. A split at alcohol=12.7 seems pretty good. **Modify the code to view the splits and compare your answer** **Q.** Which variable and split points would you choose next for depth=2? SolutionOnce we carve off most of the blue vertically, we should separate the yellow by choosing flavanoid=1.7 to split horizontally. NOTICE, however, that the 2nd split will not be across the entire graph since we are splitting the region on the right. Splitting on the left can be at flavanoid=1 so we isolate the green from blue on left. **Modify the code to view the splits for depth=2 and compare your answer** Gini Let's examine gini impurity for a different pair of variables.
###Code
X = df_wine[['proline','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ctreeviz_bivar(dt, X, y,
feature_names = ['proline','flavanoid'],
class_names=class_names,
               target_name='Wine',
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Get all y values for rows where the split var is less than the split value into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
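###Markdown
An added sketch of essentially the search a classification tree performs on this feature: try candidate proline thresholds and keep the one that minimizes the sample-weighted gini of the two partitions. It reuses `df_wine`, `y`, and the `gini()` function defined above; the names `best` and `score` are mine.
###Code
# Sketch: best proline threshold by weighted gini of the two children.
best = None
for s in np.unique(df_wine['proline']):
    l, r = y[df_wine['proline'] < s], y[df_wine['proline'] >= s]
    if len(l) == 0 or len(r) == 0:
        continue
    score = (len(l) * gini(l) + len(r) * gini(r)) / len(y)
    if best is None or score < best[1]:
        best = (s, score)
print(best)
###Output
_____no_output_____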
###Markdown
Solutionlefty = y[df_wine['proline']<750]righty = y[df_wine['proline']>=750] **Print out the gini for y, lefty, righty** Solutiongini(y), gini(lefty), gini(righty) Training a single tree and print out the training accuracy (num correct / total)
###Code
t = DecisionTreeClassifier()
t.fit(df_wine, y)
accuracy_score(y, t.predict(df_wine))
###Output
_____no_output_____
###Markdown
Take a look at the feature importance:
###Code
from rfpimp import *
I = importances(t, df_wine, y)
plot_importances(I)
###Output
_____no_output_____
###Markdown
Partitioning feature space
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from sklearn import tree
from dtreeviz.trees import *
from dtreeviz.models.shadow_decision_tree import ShadowDecTree
def show_mse_leaves(X,y,max_depth):
t = DecisionTreeRegressor(max_depth=max_depth)
t.fit(X,y)
shadow = ShadowDecTree.get_shadow_tree(t, X, y, feature_names=['sqfeet'], target_name='rent')
root, leaves, internal = shadow._get_tree_nodes()
# node2samples = shadow._get_tree_nodes()_samples()
# isleaf = shadow.get_node_type(t)
n_node_samples = t.tree_.n_node_samples
    mse = 99.9  # placeholder for the root; mean_squared_error(y, [np.mean(y)]*len(y)) would give the true root MSE
print(f"Root {0:3d} has {n_node_samples[0]:3d} samples with MSE ={mse:6.2f}")
print("-----------------------------------------")
avg_mse_per_record = 0.0
node2samples = shadow.get_node_samples()
for node in leaves:
leafy = y[node2samples[node.id]]
n = len(leafy)
mse = mean_squared_error(leafy, [np.mean(leafy)]*n)
avg_mse_per_record += mse * n
print(f"Node {node.id:3d} has {n_node_samples[node.id]:3d} samples with MSE ={mse:6.2f}")
avg_mse_per_record /= len(y)
print(f"Average MSE per record is {avg_mse_per_record:.1f}")
###Output
_____no_output_____
###Markdown
**Make sure to get latest dtreeviz**`pip install -U dtreeviz` Regression
###Code
df_cars = pd.read_csv("data/cars.csv")
X, y = df_cars[['ENG']], df_cars['MPG']
df_cars.head(3)
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X, y)
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={})
###Output
_____no_output_____
###Markdown
**Q.** What is the MSE between y and predicted $\hat{y} = \overline{y}$?Hints: You can use function `mean_squared_error(` $y$,$\hat{y}$ `)`; create a vector of length $|y|$ with $\overline{y}$ as elements. Solutionmean_squared_error(y, [np.mean(y)]*len(y)) about 60.76 **Q.** Where would you split this if you could only split once? Set the `split` variable to a reasonable value.
###Code
split = ...
###Output
_____no_output_____
###Markdown
SolutionThe split location that gets most pure subregion might be about split = 200 HP because the region to the right has a relatively flat MPG average. **Alter the rtreeviz_univar() call to show the split with arg show={'splits'}** Solutionrtreeviz_univar(dt, X, y, feature_names='Horsepower', markersize=5, mean_linewidth=1, target_name='MPG', fontsize=9, show={'splits'}) **Q.** What are the MSE values for the left, right partitions?Hints: Get the y values whose `X['ENG']` are less than `split` into `lefty` and those greater than or equal to `split` into `righty`. The split introduces two new children that are leaves until we (possibly) split them; the leaves predict the mean of their samples.
###Code
lefty = ...; mleft = ...
righty = ...; mright = ...
mse_left = ...
mse_right = ...
mse_left, mse_right
###Output
_____no_output_____
###Markdown
Solution Should be (35.68916307096633, 12.770261374699789)lefty = y[X['ENG']<split]righty = y[X['ENG']>=split]mleft = np.mean(lefty)mright = np.mean(righty)mse_left = mean_squared_error(lefty, [mleft]\*len(lefty))mse_right = mean_squared_error(righty, [mright]\*len(righty)) **Q.** Compare the MSE values for overall y and the average of the left, right partition MSEs (which is about 24.2)? SolutionAfter the split the MSE of the children is much lower than before the split, therefore, it is a worthwhile split. **Q.** Set the split value to 100 and recompare MSE values for y, left, and right. SolutionWith split=100, mse_left, mse_right become 33.6 and 41.0. These are still less than the y MSE of 60.7 so worthwhile but not nearly as splitting at 200. Effect of deeper trees Consider the sequence of tree depths 1..6 for horsepower vs MPG.
###Code
X = df_cars[['ENG']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,6, figsize=(14,3), sharey=True)
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=i+1)
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Focusing on the orange horizontal lines, what do you notice as more splits appear? SolutionWith depth 1, model is biased due to coarseness of the approximations (just 2 leaf means). Depth 2 gets much better approximation, so bias is lower. As we add more depth to tree, number of splits increases and these appear to be chasing details of the data, decreasing bias on training set but also hurting generality. **Q.** Consider the MSE for the 4 leaves of a depth 2 tree and 15 leaves of a depth 4 tree. What happens to the average MSE per leaf? What happens to the leaf sizes and how is it related to average MSE?
###Code
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=2)
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=4)
###Output
_____no_output_____
###Markdown
SolutionThe average MSE is much lower as we increase depth because that allows the tree to isolate pure/more-similar regions. This also shrinks leaf size since we are splitting more as the tree deepens. Consider the plot of the CYL feature (num cylinders) vs MPG:
###Code
X = df_cars[['CYL']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,3, figsize=(7,2.5), sharey=True)
depths = [1,2,10]
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=depths[i])
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Cylinders',  # the x-variable in this plot is CYL, not horsepower
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits','title'},
ax=ax)
    ax.set_title(f"Depth {depths[i]}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Explain why the graph looks like a bunch of vertical bars. SolutionThe x values are integers and will clump together. Since there are many MPG values at each int, you get vertical clumps of data. **Q.** Why don't we get many more splits for depth 10 vs depth 2? SolutionOnce each unique x value has a "bin", there are no more splits to do. **Q.** Why are the orange predictions bars at the levels they are in the plot? SolutionDecision tree leaves predict the average y for all samples in a leaf. Classification
###Code
wine = load_wine()
df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)
df_wine.head(3)
feature_names = list(wine.feature_names)
class_names = list(wine.target_names)
###Output
_____no_output_____
###Markdown
1 variable
###Code
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Where would you split this (vertically) if you could only split once? SolutionThe split location that gets most pure subregion might be about 1.5 because it nicely carves off the left green samples. **Alter the code to show the split with arg show={'splits'}** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=1)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() **Q.** For max_depth=2, how many splits will we get? Solution3. We get one split for root and then with depth=2, we have 2 children that each get a split. **Q.** Where would you split this graph in that many places? SolutionOnce we carve off the leftmost green, we would want to isolate the blue in between 1.3 and 2.3. The other place to split is not obvious as there is no great choice. (sklearn will add a split point at 1.0) **Alter the code to show max_depth=2** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=2)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() Gini impurity Let's compute the gini impurity for left and right sides for a depth=1 tree that splits flavanoids at 1.3. Here's a function that computes the value:$$I_G = \sum_{i=1}^k \sum_{j \neq i}^k P_j = 1 - \sum_{i=1}^{k} P_i^2$$where $k$ is the number of classes and $P_i$ is the probability of seeing class $i$ in our target $y$ (it's the ratio of $\frac{|y[y==i]|}{|y|}$).
###Code
def gini(x):
"See https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity"
_, counts = np.unique(x, return_counts=True)
n = len(x)
return 1 - np.sum( (counts / n)**2 )
###Output
_____no_output_____
###Markdown
**Q.** Using that function, what is the gini impurity for the overall y target? Solutiongini(y) about 0.66 **Get all y values for rows where `df_wine['flavanoids']` is `<` 1.3 into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
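A filled-in version of the cell above (a sketch; it assumes the `gini` helper, `df_wine`, `y`, and the 1.3 flavanoids threshold discussed in the text):

```python
# Split the wine targets at flavanoids = 1.3 and compare impurity on each side.
lefty = y[df_wine['flavanoids'] < 1.3]
righty = y[df_wine['flavanoids'] >= 1.3]
print(gini(y), gini(lefty), gini(righty))  # solution text: about 0.66, 0.27, 0.53
```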
###Markdown
Solutionlefty = y[df_wine['flavanoids']<1.3]righty = y[df_wine['flavanoids']>=1.3] **Q.** What are the gini values for left and right partitions? Solutiongini(lefty), gini(righty) about 0.27, 0.53 **Q.** What can we conclude about the purity of left and right? Also, compare to gini for all y values. SolutionLeft partition is much more pure than right but right is still more pure than original gini(y). We can conclude that the split is worthwhile as the partition would let us give more accurate predictions. 2 variables
###Code
X = df_wine[['alcohol','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ct = ctreeviz_bivar(dt, X, y,
feature_names = ['alcohol','flavanoid'], class_names=class_names,
target_name='wine',
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax
)
###Output
_____no_output_____
###Markdown
**Q.** Which variable and split point would you choose if you could only split once? SolutionBecause the blue dots are spread vertically, a horizontal split won't be very good. Hence, we should choose variable alcohol. The best split will carve off the blue dots, leaving the yellow and green mixed up. A split at alcohol=12.7 seems pretty good. **Modify the code to view the splits and compare your answer** **Q.** Which variable and split points would you choose next for depth=2? SolutionOnce we carve off most of the blue vertically, we should separate the yellow by choosing flavanoid=1.7 to split horizontally. NOTICE, however, that the 2nd split will not be across the entire graph since we are splitting the region on the right. Splitting on the left can be at flavanoid=1 so we isolate the green from the blue on the left. **Modify the code to view the splits for depth=2 and compare your answer** GiniLet's examine gini impurity for a different pair of variables.
###Code
X = df_wine[['proline','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ctreeviz_bivar(dt, X, y,
feature_names = ['proline','flavanoid'],
class_names=class_names,
target_name='wine',
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Get all y values for rows where the split var is less than the split value into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['proline']<750]righty = y[df_wine['proline']>=750] **Print out the gini for y, lefty, righty** Solutiongini(y), gini(lefty), gini(righty) Training a single tree and print out the training accuracy (num correct / total)
###Code
t = DecisionTreeClassifier()
t.fit(df_wine, y)
accuracy_score(y, t.predict(df_wine))
###Output
_____no_output_____
###Markdown
Take a look at the feature importance:
###Code
from rfpimp import *
I = importances(t, df_wine, y)
plot_importances(I)
###Output
_____no_output_____
###Markdown
Partitioning feature space **Make sure to get latest dtreeviz**
###Code
! pip install -q -U dtreeviz
! pip install -q graphviz==0.17 # 0.18 deletes the `run` func I need
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from sklearn import tree
from dtreeviz.trees import *
from dtreeviz.models.shadow_decision_tree import ShadowDecTree
def show_mse_leaves(X,y,max_depth):
t = DecisionTreeRegressor(max_depth=max_depth)
t.fit(X,y)
shadow = ShadowDecTree.get_shadow_tree(t, X, y, feature_names=['sqfeet'], target_name='rent')
root, leaves, internal = shadow._get_tree_nodes()
# node2samples = shadow._get_tree_nodes()_samples()
# isleaf = shadow.get_node_type(t)
n_node_samples = t.tree_.n_node_samples
    mse = 99.9  # placeholder for the root MSE; the real value would be mean_squared_error(y, [np.mean(y)]*len(y))
print(f"Root {0:3d} has {n_node_samples[0]:3d} samples with MSE ={mse:6.2f}")
print("-----------------------------------------")
avg_mse_per_record = 0.0
node2samples = shadow.get_node_samples()
for node in leaves:
leafy = y[node2samples[node.id]]
n = len(leafy)
mse = mean_squared_error(leafy, [np.mean(leafy)]*n)
avg_mse_per_record += mse * n
print(f"Node {node.id:3d} has {n_node_samples[node.id]:3d} samples with MSE ={mse:6.2f}")
avg_mse_per_record /= len(y)
print(f"Average MSE per record is {avg_mse_per_record:.1f}")
###Output
_____no_output_____
###Markdown
Regression
###Code
df_cars = pd.read_csv("data/cars.csv")
X, y = df_cars[['ENG']], df_cars['MPG']
df_cars.head(3)
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X, y)
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={})
###Output
_____no_output_____
###Markdown
**Q.** What is the MSE between y and predicted $\hat{y} = \overline{y}$?Hints: You can use function `mean_squared_error(` $y$,$\hat{y}$ `)`; create a vector of length $|y|$ with $\overline{y}$ as elements. Solutionmean_squared_error(y, [np.mean(y)]*len(y)) about 60.76 **Q.** Where would you split this if you could only split once? Set the `split` variable to a reasonable value.
###Code
split = ...
###Output
_____no_output_____
###Markdown
SolutionThe split location that gets most pure subregion might be about split = 200 HP because the region to the right has a relatively flat MPG average. **Alter the rtreeviz_univar() call to show the split with arg show={'splits'}** Solutionrtreeviz_univar(dt, X, y, feature_names='Horsepower', markersize=5, mean_linewidth=1, target_name='MPG', fontsize=9, show={'splits'}) **Q.** What are the MSE values for the left, right partitions?Hints: Get the y values whose `X['ENG']` are less than `split` into `lefty` and those greater than or equal to `split` into `righty`. The split introduces two new children that are leaves until we (possibly) split them; the leaves predict the mean of their samples.
###Code
lefty = ...; mleft = ...
righty = ...; mright = ...
mse_left = ...
mse_right = ...
mse_left, mse_right
###Output
_____no_output_____
###Markdown
Solution Should be (35.68916307096633, 12.770261374699789)lefty = y[X['ENG']<split]righty = y[X['ENG']>=split]mleft = np.mean(lefty)mright = np.mean(righty)mse_left = mean_squared_error(lefty, [mleft]\*len(lefty))mse_right = mean_squared_error(righty, [mright]\*len(righty)) **Q.** Compare the MSE values for overall y and the average of the left, right partition MSEs (which is about 24.2)? SolutionAfter the split the MSE of the children is much lower than before the split, therefore, it is a worthwhile split. **Q.** Set the split value to 100 and recompare MSE values for y, left, and right. SolutionWith split=100, mse_left, mse_right become 33.6 and 41.0. These are still less than the y MSE of 60.7 so worthwhile but not nearly as splitting at 200. Effect of deeper trees Consider the sequence of tree depths 1..6 for horsepower vs MPG.
###Code
X = df_cars[['ENG']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,6, figsize=(14,3), sharey=True)
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=i+1)
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Focusing on the orange horizontal lines, what do you notice as more splits appear? SolutionWith depth 1, model is biased due to coarseness of the approximations (just 2 leaf means). Depth 2 gets much better approximation, so bias is lower. As we add more depth to tree, number of splits increases and these appear to be chasing details of the data, decreasing bias on training set but also hurting generality. **Q.** Consider the MSE for the 4 leaves of a depth 2 tree and 15 leaves of a depth 4 tree. What happens to the average MSE per leaf? What happens to the leaf sizes and how is it related to average MSE?
###Code
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=2)
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=4)
###Output
Root 0 has 392 samples with MSE = 99.90
-----------------------------------------
Node 4 has 1 samples with MSE = 0.00
Node 5 has 3 samples with MSE = 6.18
Node 7 has 51 samples with MSE = 29.27
Node 8 has 65 samples with MSE = 20.59
Node 11 has 68 samples with MSE = 20.26
Node 12 has 16 samples with MSE = 9.32
Node 14 has 13 samples with MSE = 23.93
Node 15 has 5 samples with MSE = 3.21
Node 19 has 44 samples with MSE = 2.91
Node 20 has 25 samples with MSE = 4.35
Node 22 has 2 samples with MSE = 81.00
Node 23 has 1 samples with MSE = 0.00
Node 26 has 22 samples with MSE = 6.03
Node 27 has 47 samples with MSE = 8.26
Node 29 has 20 samples with MSE = 3.81
Node 30 has 9 samples with MSE = 1.51
Average MSE per record is 14.6
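To make the "Average MSE per record" line concrete, here is a tiny check of the size-weighted average using two of the printed leaves (numbers copied from the output above; `show_mse_leaves` does the same thing over all leaves):

```python
# Size-weighted average of leaf MSEs for just two of the printed leaves.
sizes = [51, 65]         # samples in nodes 7 and 8
mses = [29.27, 20.59]    # their leaf MSEs
print(sum(n * m for n, m in zip(sizes, mses)) / sum(sizes))  # contribution of these two leaves only
```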
###Markdown
SolutionThe average MSE is much lower as we increase depth because that allows the tree to isolate pure/more-similar regions. This also shrinks leaf size since we are splitting more as the tree deepens. Consider the plot of the CYL feature (num cylinders) vs MPG:
###Code
X = df_cars[['CYL']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,3, figsize=(7,2.5), sharey=True)
depths = [1,2,10]
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=depths[i])
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Cylinders',  # the x-variable in this plot is CYL, not horsepower
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits','title'},
ax=ax)
    ax.set_title(f"Depth {depths[i]}", fontsize=9)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Explain why the graph looks like a bunch of vertical bars. SolutionThe x values are integers and will clump together. Since there are many MPG values at each int, you get vertical clumps of data. **Q.** Why don't we get many more splits for depth 10 vs depth 2? SolutionOnce each unique x value has a "bin", there are no more splits to do. **Q.** Why are the orange predictions bars at the levels they are in the plot? SolutionDecision tree leaves predict the average y for all samples in a leaf. Classification
###Code
wine = load_wine()
df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)
df_wine.head(3)
feature_names = list(wine.feature_names)
class_names = list(wine.target_names)
###Output
_____no_output_____
###Markdown
1 variable
###Code
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Q.** Where would you split this (vertically) if you could only split once? SolutionThe split location that gets most pure subregion might be about 1.5 because it nicely carves off the left green samples. **Alter the code to show the split with arg show={'splits'}** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=1)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() **Q.** For max_depth=2, how many splits will we get? Solution3. We get one split for root and then with depth=2, we have 2 children that each get a split. **Q.** Where would you split this graph in that many places? SolutionOnce we carve off the leftmost green, we would want to isolate the blue in between 1.3 and 2.3. The other place to split is not obvious as there is no great choice. (sklearn will add a split point at 1.0) **Alter the code to show max_depth=2** SolutionX = df_wine[['flavanoids']].valuesy = wine.targetdt = DecisionTreeClassifier(max_depth=2)dt.fit(X, y)fig, ax = plt.subplots(1,1, figsize=(4,1.8))ct = ctreeviz_univar(dt, X, y, feature_names = 'flavanoids', class_names=class_names, target_name='Wine', nbins=40, gtype='strip', fontsize=9, show={'splits'}, colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1}, ax=ax)plt.show() Gini impurity Let's compute the gini impurity for left and right sides for a depth=1 tree that splits flavanoids at 1.3. Here's a function that computes the value:$$Gini({\bf p}) = \sum_{i=1}^{k} p_i \left[ \sum_{j \ne i}^k p_j \right] = \sum_{i=1}^{k} p_i (1 - p_i) = 1 - \sum_{i=1}^{k} p_i^2$$where $p_i = \frac{|y[y==i]|}{|y|}$. Since $\sum_{j \ne i}^k p_j$ is the probability of "not $p_i$", we can summarize that as just $1-p_i$. The gini value is then computing $p_i$ times "not $p_i$" for $k$ classes. Value $p_i$ is the probability of seeing class $i$ in a list of target values, $y$.
###Code
def gini(y):
"""
Compute gini impurity from y vector of class values (from k unique values).
    Result is in range 0..(k-1)/k inclusive; binary range is 0..1/2.
See https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity"
"""
_, counts = np.unique(y, return_counts=True)
p = counts / len(y)
return 1 - np.sum( p**2 )
###Output
_____no_output_____
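A quick sanity check of the function on hypothetical toy labels (not the wine data), matching the stated 0..(k-1)/k range:

```python
import numpy as np

print(gini(np.array([0, 0, 0, 0])))         # pure node -> 0.0
print(gini(np.array([0, 1, 0, 1])))         # 50/50 binary -> 0.5 (the binary maximum)
print(gini(np.array([0, 1, 2, 0, 1, 2])))   # three balanced classes -> 1 - 3*(1/3)**2 = 2/3
```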
###Markdown
**Q.** Using that function, what is the gini impurity for the overall y target? Solutiongini(y) about 0.66 **Get all y values for rows where `df_wine['flavanoids']` is `<` 1.3 into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['flavanoids']<1.3]righty = y[df_wine['flavanoids']>=1.3] **Q.** What are the gini values for left and right partitions? Solutiongini(lefty), gini(righty) about 0.27, 0.53 **Q.** What can we conclude about the purity of left and right? Also, compare to gini for all y values. SolutionLeft partition is much more pure than right but right is still more pure than original gini(y). We can conclude that the split is worthwhile as the partition would let us give more accurate predictions. 2 variables
###Code
X = df_wine[['alcohol','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ct = ctreeviz_bivar(dt, X, y,
feature_names = ['alcohol','flavanoid'], class_names=class_names,
target_name='wine',
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax
)
###Output
_____no_output_____
###Markdown
**Q.** Which variable and split point would you choose if you could only split once? SolutionBecause the blue dots are spread vertically, a horizontal split won't be very good. Hence, we should choose variable alcohol. The best split will carve off the blue dots, leaving the yellow and green mixed up. A split at alcohol=12.7 seems pretty good. **Modify the code to view the splits and compare your answer** **Q.** Which variable and split points would you choose next for depth=2? SolutionOnce we carve off most of the blue vertically, we should separate the yellow by choosing flavanoid=1.7 to split horizontally. NOTICE, however, that the 2nd split will not be across the entire graph since we are splitting the region on the right. Splitting on the left can be at flavanoid=1 so we isolate the green from the blue on the left. **Modify the code to view the splits for depth=2 and compare your answer** GiniLet's examine gini impurity for a different pair of variables.
###Code
X = df_wine[['proline','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ctreeviz_bivar(dt, X, y,
feature_names = ['proline','flavanoid'],
class_names=class_names,
target_name='wine',
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
**Get all y values for rows where the split var is less than the split value into variable `lefty` and those `>=` into `righty`**
###Code
lefty = ...
righty = ...
###Output
_____no_output_____
###Markdown
Solutionlefty = y[df_wine['proline']<750]righty = y[df_wine['proline']>=750] **Print out the gini for y, lefty, righty** Solutiongini(y), gini(lefty), gini(righty) Training a single tree and print out the training accuracy (num correct / total)
###Code
t = DecisionTreeClassifier()
t.fit(df_wine, y)
accuracy_score(y, t.predict(df_wine))
###Output
_____no_output_____
###Markdown
Take a look at the feature importance:
###Code
from rfpimp import *
I = importances(t, df_wine, y)
plot_importances(I)
###Output
_____no_output_____ |
pocovidnet/notebooks/crossval_evaluate.ipynb | ###Markdown
Functions for prediction of video label Evaluation script for cross validation
###Code
# Imports used by this cell (not shown in this excerpt); the Evaluator import path
# is an assumption based on the pocovidnet package layout.
import os
import cv2
import numpy as np
from imutils import paths
from sklearn.metrics import classification_report
from pocovidnet.evaluate_covid19 import Evaluator
# average_certainty and majority_vote are defined further down in this notebook.
saved_logits, saved_gt, saved_files = [], [], []
for i in range(5):
print("------------- SPLIT ", i, "-------------------")
# define data input path
path = "../../data/pocus/cross_validation/split"+str(i)
train_labels, test_labels, test_files = [], [], []
train_data, test_data = [], []
# loop over the image paths (train and test)
for imagePath in paths.list_images(path):
# extract the class label from the filename
label = imagePath.split(os.path.sep)[-2]
# load the image, swap color channels, and resize it to be a fixed
# 224x224 pixels while ignoring aspect ratio
image = cv2.imread(imagePath)
# image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# image = cv2.resize(image, (224, 224))
# update the data and labels lists, respectively
test_labels.append(label)
test_data.append(image)
test_files.append(imagePath.split(os.path.sep)[-1])
# build ground truth data
classes = ["covid", "pneumonia", "regular"]
gt_class_idx = np.array([classes.index(lab) for lab in test_labels])
# load model
model = Evaluator(ensemble=False, split=i)
print(model.models)
# MAIN STEP: feed through model and compute logits
logits = np.array([model(img) for img in test_data])
# remember for evaluation:
saved_logits.append(logits)
saved_gt.append(gt_class_idx)
saved_files.append(test_files)
# output the information
predIdxs = np.argmax(logits, axis=1)
print(
classification_report(
gt_class_idx, predIdxs, target_names=classes
)
)
vid_preds_certainty = average_certainty(logits, gt_class_idx, np.array(test_files))
vid_preds_majority = majority_vote(predIdxs, gt_class_idx, np.array(test_files))
print("video accuracies:", vid_preds_certainty, vid_preds_majority)
###Output
------------- SPLIT 0 -------------------
Model restored. Class mappings are ['covid', 'pneunomia', 'regular']
[<tensorflow.python.keras.engine.training.Model object at 0x16dcd2978>]
precision recall f1-score support
covid 0.99 0.93 0.96 120
pneumonia 0.94 1.00 0.97 58
regular 0.89 0.97 0.93 40
accuracy 0.95 218
macro avg 0.94 0.97 0.95 218
weighted avg 0.96 0.95 0.95 218
video accuracies: [['Cov-Butterfly-COVID Lung 2', 0, 0], ['Cov-Butterfly-Confluent B lines_Example 2', 0, 0], ['Cov-clarius', 0, 0], ['Cov-clarius3', 0, 0], ['Cov-grep-7453', 0, 0], ['Pneu-Atlas-pneumonia-AirBronch', 1, 1], ['Pneu-Atlas-pneumonia2', 1, 1], ['Pneu-grep-bacterial-hepatization-clinical', 1, 1], ['Reg-Atlas-alines', 2, 2], ['Reg-Atlas-lungcurtain', 2, 2]] [['Cov-Butterfly-COVID Lung 2', 0, 0], ['Cov-Butterfly-Confluent B lines_Example 2', 0, 0], ['Cov-clarius', 0, 0], ['Cov-clarius3', 0, 0], ['Cov-grep-7453', 0, 0], ['Pneu-Atlas-pneumonia-AirBronch', 1, 1], ['Pneu-Atlas-pneumonia2', 1, 1], ['Pneu-grep-bacterial-hepatization-clinical', 1, 1], ['Reg-Atlas-alines', 2, 2], ['Reg-Atlas-lungcurtain', 2, 2]]
------------- SPLIT 1 -------------------
Model restored. Class mappings are ['covid', 'pneunomia', 'regular']
[<tensorflow.python.keras.engine.training.Model object at 0x13f08d6d8>]
precision recall f1-score support
covid 0.85 0.98 0.91 113
pneumonia 0.96 0.92 0.94 60
regular 1.00 0.52 0.68 31
accuracy 0.89 204
macro avg 0.94 0.81 0.84 204
weighted avg 0.91 0.89 0.88 204
video accuracies: [['Cov-Atlas+(45)', 0, 0], ['Cov-Atlas-Day+3', 0, 0], ['Cov-Butterfly-Consolidation with Air Bronc', 0, 0], ['Cov-Butterfly-Irregular Pleura with Confluent B-lines', 0, 0], ['Cov-Butterfly-Irregular Pleura with Multip', 0, 0], ['Cov-Butterfly-Irregular Pleural Line', 0, 0], ['Cov-grep-7510', 0, 0], ['Cov-grep-7511', 0, 0], ['Pneu-grep-pneumonia2_1', 1, 1], ['Pneu-grep-shredsign-consolidation', 1, 1], ['Reg-Butterfly-2', 0, 2], ['Reg-NormalLungs', 2, 2]] [['Cov-Atlas+(45)', 0, 0], ['Cov-Atlas-Day+3', 0, 0], ['Cov-Butterfly-Consolidation with Air Bronc', 0, 0], ['Cov-Butterfly-Irregular Pleura with Confluent B-lines', 0, 0], ['Cov-Butterfly-Irregular Pleura with Multip', 0, 0], ['Cov-Butterfly-Irregular Pleural Line', 0, 0], ['Cov-grep-7510', 0, 0], ['Cov-grep-7511', 0, 0], ['Pneu-grep-pneumonia2_1', 1, 1], ['Pneu-grep-shredsign-consolidation', 1, 1], ['Reg-Butterfly-2', 0, 2], ['Reg-NormalLungs', 2, 2]]
------------- SPLIT 2 -------------------
Model restored. Class mappings are ['covid', 'pneunomia', 'regular']
[<tensorflow.python.keras.engine.training.Model object at 0x18d7975f8>]
precision recall f1-score support
covid 0.95 0.94 0.94 183
pneumonia 0.92 1.00 0.96 71
regular 0.71 0.55 0.62 22
accuracy 0.92 276
macro avg 0.86 0.83 0.84 276
weighted avg 0.92 0.92 0.92 276
video accuracies: [['Cov-Atlas-+(43)', 0, 0], ['Cov-Atlas-Day+1', 0, 0], ['Cov-Atlas-suspectedCovid', 0, 0], ['Cov-Butterfly-COVID Lung 1', 0, 0], ['Cov-Butterfly-COVID Skip Lesion', 0, 0], ['Cov-Butterfly-Coalescing B lines', 0, 0], ['Cov-Butterfly-Consolidation', 0, 0], ['Cov-Butterfly-Consolidation_Example 3', 0, 0], ['Cov-Butterfly-Irregular Pleura with Trace Effusion', 0, 0], ['Cov-Butterfly-Irregular Pleural Line_Example 2', 0, 0], ['Cov-Butterfly-Subpleural Basal Consolidation_Example 2', 0, 0], ['Cov-grep-7505', 0, 0], ['Cov-grep-7525', 0, 0], ['Cov-grep-7543', 0, 0], ['Pneu-Atlas-pneumonia', 1, 1], ['Pneu-grep-pneumonia1', 1, 1], ['Pneu-grep-pneumonia4', 1, 1], ['Pneu-grep-pulmonary-pneumonia', 1, 1], ['Reg-Grep-Alines', 2, 2], ['Reg-Grep-Normal', 0, 2]] [['Cov-Atlas-+(43)', 0, 0], ['Cov-Atlas-Day+1', 0, 0], ['Cov-Atlas-suspectedCovid', 0, 0], ['Cov-Butterfly-COVID Lung 1', 0, 0], ['Cov-Butterfly-COVID Skip Lesion', 0, 0], ['Cov-Butterfly-Coalescing B lines', 0, 0], ['Cov-Butterfly-Consolidation', 0, 0], ['Cov-Butterfly-Consolidation_Example 3', 0, 0], ['Cov-Butterfly-Irregular Pleura with Trace Effusion', 0, 0], ['Cov-Butterfly-Irregular Pleural Line_Example 2', 0, 0], ['Cov-Butterfly-Subpleural Basal Consolidation_Example 2', 0, 0], ['Cov-grep-7505', 0, 0], ['Cov-grep-7525', 0, 0], ['Cov-grep-7543', 0, 0], ['Pneu-Atlas-pneumonia', 1, 1], ['Pneu-grep-pneumonia1', 1, 1], ['Pneu-grep-pneumonia4', 1, 1], ['Pneu-grep-pulmonary-pneumonia', 1, 1], ['Reg-Grep-Alines', 2, 2], ['Reg-Grep-Normal', 0, 2]]
------------- SPLIT 3 -------------------
Model restored. Class mappings are ['covid', 'pneunomia', 'regular']
[<tensorflow.python.keras.engine.training.Model object at 0x147f47f98>]
precision recall f1-score support
covid 0.77 0.98 0.86 128
pneumonia 0.95 0.86 0.90 42
regular 0.33 0.05 0.09 37
accuracy 0.79 207
macro avg 0.68 0.63 0.62 207
weighted avg 0.73 0.79 0.73 207
video accuracies: [['Cov-Butterfly-Consolidation_Example 2', 0, 0], ['Cov-Butterfly-Consolidation_Example 5', 0, 0], ['Cov-Butterfly-Irregular Pleura and Coalescent B-lines', 0, 0], ['Cov-Butterfly-Patchy B lines with Sparing', 0, 0], ['Cov-Butterfly-Subpleural Basal Consolidation', 0, 0], ['Cov-grep-7507', 0, 0], ['Cov-grepmed-blines-pocus-', 0, 0], ['Pneu-grep-pneumonia3', 1, 1], ['Reg-Atlas', 0, 2], ['Reg-Youtube', 0, 2], ['pneu-everyday', 1, 1], ['pneu-gred-7', 1, 1]] [['Cov-Butterfly-Consolidation_Example 2', 0, 0], ['Cov-Butterfly-Consolidation_Example 5', 0, 0], ['Cov-Butterfly-Irregular Pleura and Coalescent B-lines', 0, 0], ['Cov-Butterfly-Patchy B lines with Sparing', 0, 0], ['Cov-Butterfly-Subpleural Basal Consolidation', 0, 0], ['Cov-grep-7507', 0, 0], ['Cov-grepmed-blines-pocus-', 0, 0], ['Pneu-grep-pneumonia3', 1, 1], ['Reg-Atlas', 0, 2], ['Reg-Youtube', 0, 2], ['pneu-everyday', 1, 1], ['pneu-gred-7', 1, 1]]
------------- SPLIT 4 -------------------
Model restored. Class mappings are ['covid', 'pneunomia', 'regular']
[<tensorflow.python.keras.engine.training.Model object at 0x19f397438>]
precision recall f1-score support
covid 0.85 0.99 0.92 110
pneumonia 1.00 0.89 0.94 46
regular 0.97 0.67 0.79 42
accuracy 0.90 198
macro avg 0.94 0.85 0.88 198
weighted avg 0.91 0.90 0.90 198
video accuracies: [['Cov-Atlas+(44)', 0, 0], ['Cov-Atlas-Day+2', 0, 0], ['Cov-Atlas-Day+4', 0, 0], ['Cov-grepmed2', 0, 0], ['Cov-grepmed3', 0, 0], ['Reg-Butterfly', 2, 2], ['Reg-bcpocus', 0, 2], ['Reg-nephropocus', 2, 2], ['pneu-gred-6', 1, 1], ['pneu-radiopaeda', 1, 1]] [['Cov-Atlas+(44)', 0, 0], ['Cov-Atlas-Day+2', 0, 0], ['Cov-Atlas-Day+4', 0, 0], ['Cov-grepmed2', 0, 0], ['Cov-grepmed3', 0, 0], ['Reg-Butterfly', 2, 2], ['Reg-bcpocus', 0, 2], ['Reg-nephropocus', 2, 2], ['pneu-gred-6', 1, 1], ['pneu-radiopaeda', 1, 1]]
###Markdown
Save outputs
###Code
import pickle
with open("cross_validation_results__myone.dat", "wb") as outfile:
pickle.dump((saved_logits, saved_gt, saved_files), outfile)
###Output
_____no_output_____
###Markdown
Load outputs
###Code
import pickle
with open("cross_validation_results_new.dat", "rb") as outfile:
(saved_logits, saved_gt, saved_files) = pickle.load(outfile)
###Output
_____no_output_____
###Markdown
Compute scores of our model Sum up confusion matrices
###Code
all_cms = np.zeros((5,3,3))
for s in range(5):
# print(saved_files[s])
gt_s = saved_gt[s]
pred_idx_s = np.argmax(np.array(saved_logits[s]), axis=1)
assert len(gt_s)==len(pred_idx_s)
cm = np.array(confusion_matrix(gt_s, pred_idx_s))
all_cms[s] = cm
###Output
_____no_output_____
###Markdown
Compute the reports and accuracies
###Code
classes = ["covid", "pneumonia", "regular"]
all_reports = []
accs = []
bal_accs = []
for s in range(5):
gt_s = saved_gt[s]
pred_idx_s = np.argmax(np.array(saved_logits[s]), axis=1)
report = classification_report(
gt_s, pred_idx_s, target_names=classes, output_dict=True
)
df = pd.DataFrame(report).transpose()
#print(report["accuracy"])
# print(np.array(df)[:3,:])
accs.append(report["accuracy"])
bal_accs.append(balanced_accuracy_score(gt_s, pred_idx_s))
# df = np.array(report)
all_reports.append(np.array(df)[:3])
###Output
_____no_output_____
###Markdown
Output accuracy
###Code
print("The accuracy and balanced accuracy of our model are:")
print(np.around(accs,2),np.around(bal_accs,2))
print("MEAN ACC:", round(np.mean(accs), 2), "MEAN BAL ACC:", round(np.mean(bal_accs),2))
###Output
The accuracy and balanced accuracy of our model are:
[0.95 0.89 0.92 0.79 0.9 ] [0.97 0.81 0.83 0.63 0.85]
MEAN ACC: 0.89 MEAN BAL ACC: 0.82
###Markdown
Make table of results distinguished by classes Helper functions
###Code
def comp_nr_videos(saved_files):
file_list = []
for sav in saved_files:
file_list.extend(sav)
assert len(np.unique(file_list)) == len(file_list)
cutted_files = [f.split(".")[0] for f in file_list]
print("number of videos", len(np.unique(cutted_files)))
vid_file_labels = [v[:3].lower() for v in np.unique(cutted_files)]
print(len(vid_file_labels))
print(np.unique(vid_file_labels, return_counts=True))
lab, counts = np.unique(vid_file_labels, return_counts=True)
return counts.tolist()
def compute_specificity(all_cms):
"""
Function to compute the specificity from confusion matrices
all_cms: array of size 5 x 3 x 3 --> confusion matrix for each fold
"""
specificities_fold = []
for k in range(len(all_cms)):
arr = all_cms[k]
overall = np.sum(arr)
specificity = []
for i in range(len(arr)):
tn_fp = overall - np.sum(arr[i])
# print(bottom_six)
fp = 0
for j in range(len(arr)):
if i!=j:
fp += arr[j, i]
spec = (tn_fp-fp)/tn_fp
# print("tn", tn_fp-fp, "tn and fp:", tn_fp)
# print(spec)
specificity.append(spec)
specificities_fold.append(specificity)
out_spec = np.mean(np.asarray(specificities_fold), axis=0)
return np.around(out_spec, 2)
df_arr = np.around(np.mean(all_reports, axis=0), 2)
df_classes = pd.DataFrame(df_arr, columns=["Precision", "Recall", "F1-score", "Support"], index=["covid", "pneumonia", "regular"])
###Output
_____no_output_____
###Markdown
Add specificity, number of frames, etc.
###Code
np.sum(np.sum(all_cms, axis=0), axis=1)
df_classes["Specificity"] = np.around(compute_specificity(all_cms),2)
df_classes["Frames"] = np.sum(np.sum(all_cms, axis=0), axis=1).astype(int).tolist()
df_classes["Videos/Images"] = comp_nr_videos(saved_files)
df_classes = df_classes.drop(columns=["Support"])
df_classes
# negative predictive value --> counterpart of precision
# specificity --> counterpart of recall
###Output
_____no_output_____
###Markdown
Comparison to Covid-Net. Manually copied data from a txt file. F-Measure = (2 * Precision * Recall) / (Precision + Recall)
###Code
cm0 = np.array([[1, 5, 34],[0, 56., 2], [0,0,120]])
cm1 = np.array([[0., 0., 31.], [0., 44., 16.], [0., 7., 106.]])
cm2 = np.array([[0,0,22], [0,71,0], [4,0,179]])
cm3 = np.array([[0., 0., 37.], [1, 39,2], [0,0,128]])
cm4 = np.array([[0., 0., 37.], [0,35,7], [0,1, 127]])
# sensitivities
sens_reg = np.mean([ 0.025, 0, 0, 0,0])
sens_pneu = np.mean([0.966, 0.733, 1, 0.929, 0.833])
sens_covid = np.mean([1.0, 0.938, 0.978, 1, 0.992])
# precisions
prec_reg = np.mean([1.0, 0, 0, 0, 0])
prec_pneu = np.mean([0.918, 0.863, 1, 1.0, 0.972])
prec_covid = np.mean([0.769, 0.693, 0.891, 0.766, 0.743])
accs_covidnet = [0.8119266, 0.73529, 0.905797, 0.80676, 0.78260]
all_cms_cov_model = np.array([cm0, cm1, cm2, cm3, cm4])
print(all_cms_cov_model.shape)
def f_measure(prec, rec):
return (2*prec*rec)/(prec+rec)
###Output
_____no_output_____
###Markdown
Output accuracy and balanced accuracy
###Code
added_cms_cov_net = np.sum(all_cms_cov_model, axis=0)
bal_acc_covidnet = np.diag(added_cms_cov_net)/np.sum(added_cms_cov_net, axis=1)
print("The accuracy and balanced accuracy of our model are:")
print(np.around(accs_covidnet,2),np.around(bal_acc_covidnet,2))
print("MEAN ACC:", round(np.mean(accs_covidnet), 2), "MEAN BAL ACC:", round(np.mean(bal_acc_covidnet),2))
###Output
The accuracy and balanced accuracy of our model are:
[0.81 0.74 0.91 0.81 0.78] [0.01 0.9 0.98]
MEAN ACC: 0.81 MEAN BAL ACC: 0.63
###Markdown
Make similar table for covid-net
###Code
sens_reg
df_classes["Class"] = df_classes.index
df_classes.index = ["our model", "our model","our model"]
df_cov = df_classes.copy()
df_cov.index = ["covid-net", "covid-net", "covid-net"]
df_cov["Precision"] = np.around([prec_covid, prec_pneu, prec_reg], 2).tolist()
df_cov["Recall"] = np.around([sens_covid, sens_pneu, sens_reg], 2).tolist()
sens = np.array(compute_specificity(all_cms_cov_model))[[2,1,0]]
df_cov["Specificity"] = sens.tolist()
df_cov["F1-score"] = np.around([f_measure(p, r) for (p,r) in zip(df_cov["Precision"], df_cov["Recall"])], 2)
df_cov
###Output
_____no_output_____
###Markdown
Merge both tables and output final table as latex
###Code
results_together = pd.concat([df_classes, df_cov])
results_together["Sensitivity"] = results_together["Recall"]
results_together = results_together[["Class", "Sensitivity", "Specificity", "Precision", "F1-score", "Frames", "Videos/Images"]]
print(results_together.to_latex())
results_together
###Output
_____no_output_____
###Markdown
Compute video accuracy
###Code
def majority_vote(preds, gt, vid_filenames):
"""
Arguments:
preds: predicted classes (1-d list of class_names or integers)
gt: list of same size with ground truth labels
vid_filenames: list of filenames
"""
preds = np.asarray(preds)
gt = np.asarray(gt)
vids = np.asarray([vid.split(".")[0] for vid in vid_filenames])
vid_preds_out = []
for v in np.unique(vids):
preds_video = preds[vids==v]
gt_check = np.unique(gt[vids==v])
assert len(gt_check)==1, "gt must have the same label for the whole video"
labs, pred_counts = np.unique(preds_video, return_counts=True)
# take label that is predicted most often
vid_pred = labs[np.argmax(pred_counts)]
# print("preds for video:", preds_video)
print(v[:3], vid_pred, gt_check[0])
vid_preds_out.append([v, vid_pred, gt_check[0]])
# print("video accuracy (majority):", accuracy_score([p[1] for p in vid_preds_out], [p[2] for p in vid_preds_out]))
return vid_preds_out
def average_certainty(preds_logits, gt, vid_filenames):
"""
Arguments:
        preds_logits: per-frame class logits/probabilities (2-d array: frames x classes)
gt: list of same size with ground truth labels
vid_filenames: list of filenames
"""
preds_logits = np.asarray(preds_logits)
gt = np.asarray(gt)
vid_preds_out = []
vids = np.array([vid.split(".")[0] for vid in vid_filenames])
for v in np.unique(vids):
preds_video_logits = preds_logits[vids==v]
preds_video = np.sum(preds_video_logits, axis=0)
# print("preds for video:", preds_video)
gt_check = np.unique(gt[vids==v])
assert len(gt_check)==1, "gt must have the same label for the whole video"
# take label that is predicted most often
vid_pred = np.argmax(preds_video)
# print(v, vid_pred, gt_check[0])
vid_preds_out.append([v, vid_pred, gt_check[0]])
# print("video accuracy (certainty):", accuracy_score([p[1] for p in vid_preds_out], [p[2] for p in vid_preds_out]))
return vid_preds_out
def preds_to_score(vid_preds_out):
return accuracy_score([p[2] for p in vid_preds_out], [p[1] for p in vid_preds_out])
def preds_to_balanced(vid_preds_out):
# print([p[1] for p in vid_preds_out], [p[2] for p in vid_preds_out])
return balanced_accuracy_score([p[2] for p in vid_preds_out], [p[1] for p in vid_preds_out])
scores_certainty, score_cert_bal = [], []
scores_majority, score_maj_bal = [], []
for i in range(len(saved_files)):
print("-----------", i, "---------")
vid_preds_certainty = average_certainty(saved_logits[i], saved_gt[i], saved_files[i])
vid_preds_majority = majority_vote(np.argmax(saved_logits[i], axis=1), saved_gt[i], saved_files[i])
scores_certainty.append(preds_to_score(vid_preds_certainty))
scores_majority.append(preds_to_score(vid_preds_majority))
score_maj_bal.append(preds_to_balanced(vid_preds_majority))
score_cert_bal.append(preds_to_balanced(vid_preds_certainty))
scores_certainty, scores_majority
score_maj_bal, score_cert_bal
print("RESULTS VIDEO ACCURACY:")
print("Accuracies: ", scores_certainty, "MEAN:", round(np.mean(scores_certainty), 3))
print("Balanced accs:", score_cert_bal, "MEAN:", round(np.mean(score_cert_bal),3))
###Output
RESULTS VIDEO ACCURACY:
Accuracies: [1.0, 0.9166666666666666, 0.95, 0.8333333333333334, 0.9] MEAN: 0.92
Balanced accs: [1.0, 0.8333333333333334, 0.8333333333333334, 0.6666666666666666, 0.8888888888888888] MEAN: 0.844
###Markdown
Confusion matrix plots Load the results
###Code
with open("eval.dat", "rb") as outfile:
(saved_logits, saved_gt, saved_files) = pickle.load(outfile)
###Output
_____no_output_____
###Markdown
Sum up confusion matrices
###Code
all_cms = np.zeros((5,3,3))
for s in range(5):
# print(saved_files[s])
gt_s = saved_gt[s]
pred_idx_s = np.argmax(np.array(saved_logits[s]), axis=1)
assert len(gt_s)==len(pred_idx_s)
cm = np.array(confusion_matrix(gt_s, pred_idx_s))
all_cms[s] = cm
###Output
_____no_output_____
###Markdown
Function to make labels with std from the data
###Code
def data_to_label(data, text):
    """Build 'mean ± std' annotation strings for a 3x3 matrix (data = means, text = stds)."""
    return np.asarray(
        ["{0:.2f}\n".format(m) + u"\u00B1" + "{0:.2f}".format(s)
         for m, s in zip(data.flatten(), text.flatten())]
    ).reshape(3, 3)
###Output
_____no_output_____
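A quick illustration of the helper with toy numbers (not actual results): each cell of the returned 3x3 array is a "mean ± std" string ready to be passed as heatmap annotations.

```python
import numpy as np

print(data_to_label(np.full((3, 3), 0.5), np.full((3, 3), 0.05)))  # every cell becomes "0.50\n±0.05"
```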
###Markdown
Make figure
###Code
plt.figure(figsize = (25,6))
fig = plt.subplot(1,3,1)
ax = fig.axes
data_abs = np.sum(all_cms, axis=0)
df_cm = pd.DataFrame(data_abs, index = [i for i in ["COVID-19", "Pneumonia", "Normal"]],
columns = [i for i in ["COVID-19", "Pneumonia", "Normal"]])
sn.set(font_scale=1.5)
# plt.xticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Normal"), rotation=0, fontsize="17", va="center")
plt.yticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Normal"), rotation=0, fontsize="17", va="center")
sn.heatmap(df_cm, annot=True, fmt="g", cmap="YlGnBu")
ax.xaxis.tick_top()
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
plt.xlabel('\nPredictions', size=25)
plt.ylabel('Ground truth', size=25)
plt.title("Absolute values\n", size=30,fontweight="bold")
# PRECISION SUBPLOT
fig = plt.subplot(1,3,2)
ax = fig.axes
data_prec = all_cms.copy()
for i in range(5):
data_prec[i] = data_prec[i]/np.sum(data_prec[i], axis=0)
prec_stds = np.std(data_prec, axis = 0)
data_prec = np.mean(data_prec, axis=0)
labels_prec = data_to_label(data_prec, prec_stds)
df_cm = pd.DataFrame(data_prec, index = [i for i in ["COVID-19", "Pneumonia", "Normal"]],
columns = [i for i in ["COVID-19", "Pneumonia", "Normal"]])
sn.set(font_scale=1.5)
ax.xaxis.tick_top()
plt.ylabel("ground truth")
plt.xlabel("predictions")
plt.title("Precision")
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
plt.yticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Normal"), rotation=0, fontsize="17", va="center")
sn.heatmap(df_cm, annot=labels_prec, fmt='', cmap="YlGnBu")
plt.xlabel('\nPredictions', size=25)
plt.ylabel('Ground truth', size=25)
plt.title("Precision\n", size=30,fontweight="bold")
# SENSITIVITY SUBPLOT
fig = plt.subplot(1,3,3)
ax = fig.axes
data_sens = all_cms.copy()
for i in range(5):
sums_axis = np.sum(data_sens[i], axis=1)
data_sens[i] = np.array([data_sens[i,j,:]/sums_axis[j] for j in range(3)])
sens_stds = np.std(data_sens, axis = 0)
data_sens = np.mean(data_sens, axis=0)
labels_sens = data_to_label(data_sens, sens_stds)
df_cm = pd.DataFrame(data_sens, index = [i for i in ["COVID-19", "Pneumonia", "Normal"]],
columns = [i for i in ["COVID-19", "Pneumonia", "Normal"]])
# sn.set(font_scale=1.5)
plt.yticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Normal"), rotation=0, fontsize="17", va="center")
#plt.xticks(np.arange(3)+0.5,("COVID-19", "Pneunomia", "Normal"), rotation=0, fontsize="17", va="center")
ax.xaxis.tick_top()
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
sn.heatmap(df_cm, annot=labels_sens, fmt='', cmap="YlGnBu")
plt.xlabel('\nPredictions', size=25)
plt.ylabel('Ground truth', size=25)
plt.title("Sensitivity (Recall)\n", size=30,fontweight="bold")
plt.savefig("confusion_matrix.pdf",bbox_inches='tight') #, bottom=0.2)
###Output
_____no_output_____
###Markdown
ROC AUC
###Code
from sklearn.metrics import roc_curve, roc_auc_score, precision_score, recall_score
###Output
_____no_output_____
###Markdown
Compute scores and curve
###Code
data, scores, roc_auc_std = [], [], []
max_points = []
for i in range(3):
precs = [[] for _ in range(5)]
recs = [[] for _ in range(5)]
julie_points = [[] for _ in range(5)]
roc_auc = []
for j in range(5):
# roc auc score
preds = saved_logits[j][:, i]
gt = (saved_gt[j] == i).astype(int)
roc_auc.append(roc_auc_score(gt, preds))
# compute roc curve
for k in np.linspace(0,1.1,100):
preds_thresholded = (preds>k).astype(int)
tp = np.sum(preds_thresholded[gt==1])
p = np.sum(gt)
n = len(gt)-p
fp = np.sum(preds_thresholded[gt==0])
inverted = np.absolute(preds_thresholded - 1)
tn = np.sum(inverted[gt==0])
fn = np.sum(inverted[gt==1])
fpr = fp/n
tpr = tp/p
precs[j].append(fpr)
recs[j].append(tpr)
julie_points[j].append((tp+tn)/(tp+tn+fp+fn)) # (TP+TN)/(TP+TN+FN+FP)
# precs[j].append(precision_score(gt, preds_thresholded))
# recs[j].append(recall_score(gt, preds_thresholded))
# append scores
scores.append(round(np.mean(roc_auc),2))
roc_auc_std.append(round(np.std(roc_auc),2))
# take mean and std of fpr and tpr
stds = np.std(np.asarray(recs), axis=0)
precs = np.mean(np.asarray(precs), axis=0)
recs = np.mean(np.asarray(recs), axis=0)
# point of maximum accuracy
julie_points = np.mean(np.asarray(julie_points), axis=0)
max_points.append(np.argmax(julie_points))
data.append((precs, recs, stds))
plt.rcParams['legend.title_fontsize'] = 15
from matplotlib import rc
# keep matplotlib's default text rendering (LaTeX rendering disabled)
rc('text', usetex=False)
cols = ["red", "orange", "green"]
classes = ["COVID-19", "Pneumonia", "Regular"]
# roc_auc_scores = np.mean(np.asarray(scores), axis=0)
plt.figure(figsize=(7,5))
plt.plot([0, 1], [0, 1], color='grey', lw=1.5, linestyle='--')
for i in range(3):
p, r, s = data[i]
# sns.lineplot(x=p, y=r)
# plt.plot(p, r,label=classes[i])
# plt.plot(p,r-s)
lab = classes[i]+" (%.2f"%scores[i]+"$\pm$"+str(roc_auc_std[i])+")"
plt.plot(p, r, 'k-', c=cols[i], label=lab, lw=3)
# print(len(r), max_points[i])
plt.scatter(p[max_points[i]], r[max_points[i]], s=150, marker="o", c=cols[i])
plt.fill_between(p, r-s, r+s, alpha=0.1, facecolor=cols[i])
plt.ylim(0,1.03)
plt.xlim(-0.02,1)
plt.ylabel("$\\bf{Sensitivity}$", fontsize=15)
plt.xlabel("$\\bf{False\ positive\ rate}$", fontsize=15)
plt.legend(fontsize=15, title=" $\\bf{Class}\ \\bf(ROC-AUC)}$") # "\n $\\bf{(o:\ maximal\ accuracy)}$")
plt.title("$\\bf{ROC\ curves}$", fontsize=15)
plt.savefig("roc_curves.pdf", bbox_inches='tight', pad_inches=0, transparent=False)
plt.show()
###Output
_____no_output_____
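For comparison, a per-class curve can also be obtained with `sklearn.metrics.roc_curve`. The sketch below pools the frames of all folds (note this differs from the per-fold averaging done above) and assumes `saved_logits`/`saved_gt` as loaded earlier:

```python
from sklearn.metrics import roc_curve
import numpy as np

all_logits = np.concatenate([np.asarray(l) for l in saved_logits])  # (n_frames, 3)
all_gt = np.concatenate([np.asarray(g) for g in saved_gt])          # (n_frames,)
for i, name in enumerate(["COVID-19", "Pneumonia", "Regular"]):
    fpr, tpr, _ = roc_curve((all_gt == i).astype(int), all_logits[:, i])
    print(name, "pooled ROC-AUC ~", round(np.trapz(tpr, fpr), 3))
```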
###Markdown
Compute roc-auc score
###Code
for i in range(3):
roc_auc = []
for j in range(5):
# roc auc score
preds = saved_logits[j][:, i]
gt = (saved_gt[j] == i).astype(int)
# print(preds, gt)
roc_auc.append(roc_auc_score(gt, preds))
print(roc_auc)
gt = (saved_gt[3] == 2)
preds = saved_logits[3][:, 2]
plt.plot(gt)
plt.plot(preds)
roc_auc_score(gt, preds)
###Output
_____no_output_____
###Markdown
Evaluate a single checkpoint
###Code
# Evaluate a checkpoint
from pocovidnet.model import get_model
import cv2
import os
p = '../'
fold = 3
epoch = '07'
weight_path = "/Users/ninawiedemann/Desktop/Projects/covid19_pocus_ultrasound.nosync/pocovidnet/trained_models/lower_lr/fold_3_epoch_05"
# weight_path = os.path.join(p, f'fold_{fold}_epoch_{epoch}') #, 'variables', 'variables')
model = get_model()
model.load_weights(weight_path)
def preprocess(image):
"""Apply image preprocessing pipeline
Arguments:
image {np.array} -- Arbitrary shape, quadratic preferred
Returns:
np.array -- Shape 224,224. Normalized to [0, 1].
"""
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (224, 224))
image = np.expand_dims(np.array(image), 0) / 255.0
return image
path = "../../data/pocus/cross_validation/split"+str(fold)
train_labels, test_labels, test_files = [], [], []
train_data, test_data = [], []
# loop over the image paths (train and test)
for imagePath in paths.list_images(path):
# extract the class label from the filename
label = imagePath.split(os.path.sep)[-2]
# load the image, swap color channels, and resize it to be a fixed
# 224x224 pixels while ignoring aspect ratio
image = cv2.imread(imagePath)
# image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# image = cv2.resize(image, (224, 224))
# update the data and labels lists, respectively
test_labels.append(label)
test_data.append(image)
test_files.append(imagePath.split(os.path.sep)[-1])
# build ground truth data
classes = ["covid", "pneumonia", "regular"]
gt_class_idx = np.array([classes.index(lab) for lab in test_labels])
# MAIN STEP: feed through model and compute logits
logits = np.array([model(preprocess(img).astype("float32")) for img in test_data])
# output the information
predIdxs = np.squeeze(np.argmax(logits, axis=-1))
print(
classification_report(
gt_class_idx, predIdxs, target_names=classes
)
)
test_files
saved_files[3]
###Output
_____no_output_____
###Markdown
replace results
###Code
saved_logits[3] = np.squeeze(logits)
saved_gt[3] = gt_class_idx
# RESULTS MODEL IN TRAINED_MODELS / LOWER LR
# EPOCH 2:
# covid at 0.0
# EPOCH 5:
# precision recall f1-score support
# covid 0.73 0.62 0.67 128
# pneumonia 0.64 0.90 0.75 42
# regular 0.23 0.24 0.23 37
# accuracy 0.61 207
# macro avg 0.53 0.59 0.55 207
# weighted avg 0.62 0.61 0.61 207
# EPOCH 7:
# precision recall f1-score support
# covid 0.73 0.80 0.76 128
# pneumonia 0.75 0.90 0.82 42
# regular 0.14 0.05 0.08 37
# accuracy 0.69 207
# macro avg 0.54 0.59 0.55 207
# weighted avg 0.63 0.69 0.65 207
# EPOCH 10 was worse
###Output
_____no_output_____
###Markdown
Old Covid-Net results
###Code
cm0 = np.array([[24., 12., 12.], [ 0., 28., 0.], [29., 4., 30.]])
cm1 = np.array([[ 0., 1., 48.],[ 0., 22., 0.],[ 0., 2., 109.]])
cm2 = np.array([[17., 5., 13.],[ 2., 24., 0.],[ 0., 0, 94.]])
cm3 = np.array([[30., 0., 0.],[ 0., 25., 0.],[ 3., 0, 85.]])
cm4 = np.array([[19., 0., 8.],[ 6., 25., 0.], [ 0., 0., 80.]])
# sensitivities
sens_reg = np.mean([0.5, 0, 0.486, 1.0, 0.704])
sens_pneu = np.mean([1.0, 1.0, 0.923, 1.0, 0.806])
sens_covid = np.mean([0.476, 0.982, 1.0, 0.966, 1.0])
# precisions
prec_reg = np.mean([0.453, 0, 0.895, 0.909, 0.76])
prec_pneu = np.mean([0.636, 0.88, 0.828, 1.0, 1.0])
prec_covid = np.mean([0.714, 0.694, 0.879, 1.0, 0.909])
accs_covidnet = [0.58992805, 0.719, 0.871, 0.979, 0.89855]
all_cms_cov_model = np.array([cm0, cm1, cm2, cm3, cm4])
print(all_cms_cov_model.shape)
###Output
_____no_output_____
###Markdown
Evaluation script for cross validation
###Code
saved_logits, saved_gt, saved_files = [], [], []
for i in range(5):
print("------------- SPLIT ", i, "-------------------")
# define data input path
path = "../../data/cross_validation/split"+str(i)
train_labels, test_labels, test_files = [], [], []
train_data, test_data = [], []
# loop over the image paths (train and test)
for imagePath in paths.list_images(path):
# extract the class label from the filename
label = imagePath.split(os.path.sep)[-2]
# load the image, swap color channels, and resize it to be a fixed
# 224x224 pixels while ignoring aspect ratio
image = cv2.imread(imagePath)
# image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# image = cv2.resize(image, (224, 224))
# update the data and labels lists, respectively
test_labels.append(label)
test_data.append(image)
test_files.append(imagePath.split(os.path.sep)[-1])
# build ground truth data
gt_class_idx = np.array([CLASSES.index(lab) for lab in test_labels])
model = None
# load model
model = Evaluator(weights_dir="NasNet_F", ensemble=False, split=i, num_classes=len(CLASSES), model_id="nasnet")
print("testing on n_files:", len(test_data))
# MAIN STEP: feed through model and compute logits
logits = np.array([model(img) for img in test_data])
# remember for evaluation:
saved_logits.append(logits)
saved_gt.append(gt_class_idx)
saved_files.append(test_files)
# output the information
predIdxs = np.argmax(logits, axis=1)
print(
classification_report(
gt_class_idx, predIdxs, target_names=CLASSES
)
)
vid_preds_certainty = average_certainty(logits, gt_class_idx, np.array(test_files))
vid_preds_majority = majority_vote(predIdxs, gt_class_idx, np.array(test_files))
print("video accuracies:", vid_preds_certainty, vid_preds_majority)
import pickle
with open("NASF_fold4.dat", "wb") as outfile:
pickle.dump((logits, gt_class_idx, test_files), outfile)
###Output
_____no_output_____
###Markdown
Save outputs
###Code
import pickle
with open("model_comparison/results_segment.dat", "wb") as outfile:
pickle.dump((saved_logits, saved_gt, saved_files), outfile)
###Output
_____no_output_____
###Markdown
collect single folds
###Code
saved_logits, saved_gt, saved_files = [], [], []
for i in range(5):
with open("NASF_fold"+str(i)+".dat", "rb") as outfile:
(logits, gt, files) = pickle.load(outfile)
saved_logits.append(logits)
saved_gt.append(gt)
saved_files.append(files)
###Output
_____no_output_____
###Markdown
Transform the results from the model that includes the uninformative class into the general three-class setting (drop uninformative samples and logits)
###Code
new_logits, new_gt, new_files = [], [], []
counter = 0
for i in range(5):
gt_inds = np.where(np.array(saved_gt[i])<3)[0]
counter += len(gt_inds)
new_logits.append(np.array(saved_logits[i])[gt_inds, :3])
new_gt.append(np.array(saved_gt[i])[gt_inds])
new_files.append(np.array(saved_files[i])[gt_inds])
import pickle
with open("../encoding_3.dat", "wb") as outfile:
pickle.dump((new_logits, new_gt, new_files), outfile)
###Output
_____no_output_____
###Markdown
Load outputs (takes the dat files that was saved from the evaluation above)
###Code
import pickle #
with open("../encoding_3.dat", "rb") as outfile:
(saved_logits, saved_gt, saved_files) = pickle.load(outfile)
CLASSES = ["covid", "pneumonia", "regular"] # , "uninformative"]
###Output
_____no_output_____
###Markdown
Compute scores of our model Compute the reports and accuracies
###Code
all_reports = []
accs = []
bal_accs = []
# vid_accs, _, vid_accs_bal, _ = video_accuracy(saved_logits, saved_gt, saved_files)
for s in range(5):
gt_s = saved_gt[s]
pred_idx_s = np.argmax(np.array(saved_logits[s]), axis=1)
report = classification_report(
gt_s, pred_idx_s, target_names=CLASSES, output_dict=True
)
mcc_scores = mcc_multiclass(gt_s, pred_idx_s)
spec_scores = specificity(gt_s, pred_idx_s)
for i, cl in enumerate(CLASSES):
report[cl]["mcc"] = mcc_scores[i]
report[cl]["specificity"] = spec_scores[i]
df = pd.DataFrame(report).transpose()
df = df.drop(columns="support")
df["accuracy"] = [report["accuracy"] for _ in range(len(df))]
bal = balanced_accuracy_score(gt_s, pred_idx_s)
df["balanced"] = [bal for _ in range(len(df))]
# df["video"] = vid_accs[s]
# df["video_balanced"] = vid_accs_bal[s]
# print(df[:len(CLASSES)])
# print(np.array(df)[:3,:])
accs.append(report["accuracy"])
bal_accs.append(balanced_accuracy_score(gt_s, pred_idx_s))
# df = np.array(report)
all_reports.append(np.array(df)[:len(CLASSES)])
df_arr = np.around(np.mean(all_reports, axis=0), 2)
df_classes = pd.DataFrame(df_arr, columns=["Precision", "Recall", "F1-score", "MCC", "Specificity", "Accuracy", "Balanced"], index=CLASSES)
df_classes
df_std = np.around(np.std(all_reports, axis=0), 2)
df_std = pd.DataFrame(df_std, columns=["Precision", "Recall", "F1-score", "MCC", "Specificity", "Accuracy", "Balanced"], index=CLASSES)
df_std
df_classes = df_classes[["Accuracy", "Balanced", "Precision", "Recall","Specificity", "F1-score", "MCC"]]
df_std = df_std[["Accuracy", "Balanced", "Precision", "Recall","Specificity", "F1-score", "MCC"]]
df_classes.to_csv("model_comparison/encoding_3_mean.csv")
df_std.to_csv("model_comparison/encoding_3_std.csv")
###Output
_____no_output_____
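`mcc_multiclass` and `specificity` are used in the cell above but not defined in this excerpt. The following is only a minimal one-vs-rest sketch of what such helpers could compute (an assumption, hence the `_sketch` names; the actual pocovidnet helpers may differ). It reuses the `CLASSES` list defined earlier:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def mcc_multiclass_sketch(gt, pred):
    """Per-class Matthews correlation, treating each class one-vs-rest."""
    gt, pred = np.asarray(gt), np.asarray(pred)
    return [matthews_corrcoef(gt == c, pred == c) for c in range(len(CLASSES))]

def specificity_sketch(gt, pred):
    """Per-class specificity = TN / (TN + FP), one-vs-rest."""
    gt, pred = np.asarray(gt), np.asarray(pred)
    out = []
    for c in range(len(CLASSES)):
        tn = np.sum((gt != c) & (pred != c))
        fp = np.sum((gt != c) & (pred == c))
        out.append(tn / (tn + fp))
    return out
```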
###Markdown
Output accuracy
###Code
print("The accuracy and balanced accuracy of our model are:")
print(np.around(accs,2),np.around(bal_accs,2))
print("MEAN ACC:", round(np.mean(accs), 2), "MEAN BAL ACC:", round(np.mean(bal_accs),2))
###Output
The accuracy and balanced accuracy of our model are:
[0.82 0.92 0.93 0.98 0.81] [0.8 0.9 0.91 0.96 0.67]
MEAN ACC: 0.89 MEAN BAL ACC: 0.85
###Markdown
Make table of results distinguished by classes Helper functions
###Code
def comp_nr_videos(saved_files):
file_list = []
for sav in saved_files:
file_list.extend(sav)
assert len(np.unique(file_list)) == len(file_list)
cutted_files = [f.split(".")[0] for f in file_list]
print("number of videos", len(np.unique(cutted_files)))
vid_file_labels = [v[:3].lower() for v in np.unique(cutted_files)]
print(len(vid_file_labels))
print(np.unique(vid_file_labels, return_counts=True))
lab, counts = np.unique(vid_file_labels, return_counts=True)
return counts.tolist()
def compute_specificity(all_cms):
"""
Function to compute the specificity from confusion matrices
all_cms: array of size 5 x 3 x 3 --> confusion matrix for each fold
"""
specificities_fold = []
for k in range(len(all_cms)):
arr = all_cms[k]
overall = np.sum(arr)
specificity = []
for i in range(len(arr)):
tn_fp = overall - np.sum(arr[i])
# print(bottom_six)
fp = 0
for j in range(len(arr)):
if i!=j:
fp += arr[j, i]
spec = (tn_fp-fp)/tn_fp
# print("tn", tn_fp-fp, "tn and fp:", tn_fp)
# print(spec)
specificity.append(spec)
specificities_fold.append(specificity)
out_spec = np.mean(np.asarray(specificities_fold), axis=0)
return np.around(out_spec, 2)
###Output
_____no_output_____
###Markdown
Sum up confusion matrices
###Code
all_cms = np.zeros((5,3,3))
for s in range(5):
# print(saved_files[s])
gt_s = saved_gt[s]
pred_idx_s = np.argmax(np.array(saved_logits[s]), axis=1)
assert len(gt_s)==len(pred_idx_s)
cm = np.array(confusion_matrix(gt_s, pred_idx_s))
all_cms[s] = cm
###Output
_____no_output_____
###Markdown
Add specificity, number of frames, etc.
###Code
np.sum(np.sum(all_cms, axis=0), axis=1)
df_classes["Specificity"] = np.around(compute_specificity(all_cms),2)
df_classes["Frames"] = np.sum(np.sum(all_cms, axis=0), axis=1).astype(int).tolist()
# df_classes["Videos/Images"] = comp_nr_videos(saved_files)
# df_classes = df_classes.drop(columns=["Support"])
df_classes.to_csv("average_scores.csv")
# OLD MODEL:
df_classes
###Output
_____no_output_____
###Markdown
Comparison to Covid-Net Manually copied data from txt file. F-Measure = (2 * Precision * Recall) / (Precision + Recall)
###Code
cm0 = np.array([[1, 5, 34],[0, 56., 2], [0,0,120]])
cm1 = np.array([[0., 0., 31.], [0., 44., 16.], [0., 7., 106.]])
cm2 = np.array([[0,0,22], [0,71,0], [4,0,179]])
cm3 = np.array([[0., 0., 37.], [1, 39,2], [0,0,128]])
cm4 = np.array([[0., 0., 37.], [0,35,7], [0,1, 127]])
# sensitivities
sens_reg = np.mean([ 0.025, 0, 0, 0,0])
sens_pneu = np.mean([0.966, 0.733, 1, 0.929, 0.833])
sens_covid = np.mean([1.0, 0.938, 0.978, 1, 0.992])
# precisions
prec_reg = np.mean([1.0, 0, 0, 0, 0])
prec_pneu = np.mean([0.918, 0.863, 1, 1.0, 0.972])
prec_covid = np.mean([0.769, 0.693, 0.891, 0.766, 0.743])
accs_covidnet = [0.8119266, 0.73529, 0.905797, 0.80676, 0.78260]
all_cms_cov_model = np.array([cm0, cm1, cm2, cm3, cm4])
print(all_cms_cov_model.shape)
def f_measure(prec, rec):
return (2*prec*rec)/(prec+rec)
###Output
_____no_output_____
###Markdown
Output accuracy and balanced accuracy
###Code
added_cms_cov_net = np.sum(all_cms_cov_model, axis=0)
bal_acc_covidnet = np.diag(added_cms_cov_net)/np.sum(added_cms_cov_net, axis=1)
print("The accuracy and balanced accuracy of our model are:")
print(np.around(accs_covidnet,2),np.around(bal_acc_covidnet,2))
print("MEAN ACC:", round(np.mean(accs_covidnet), 2), "MEAN BAL ACC:", round(np.mean(bal_acc_covidnet),2))
###Output
The accuracy and balanced accuracy of our model are:
[0.81 0.74 0.91 0.81 0.78] [0.01 0.9 0.98]
MEAN ACC: 0.81 MEAN BAL ACC: 0.63
###Markdown
Make similar table for covid-net
###Code
sens_reg
df_classes["Class"] = df_classes.index
df_classes.index = ["our model", "our model","our model"]
df_cov = df_classes.copy()
df_cov.index = ["covid-net", "covid-net", "covid-net"]
df_cov["Precision"] = np.around([prec_covid, prec_pneu, prec_reg], 3).tolist()
df_cov["Recall"] = np.around([sens_covid, sens_pneu, sens_reg], 3).tolist()
sens = np.array(compute_specificity(all_cms_cov_model))[[2,1,0]]
df_cov["Specificity"] = sens.tolist()
df_cov["F1-score"] = np.around([f_measure(p, r) for (p,r) in zip([prec_covid, prec_pneu, prec_reg], [sens_covid, sens_pneu, sens_reg])], 2)
df_cov
###Output
_____no_output_____
###Markdown
Merge both tables and output final table as latex
###Code
results_together = pd.concat([df_classes, df_cov])
results_together["Sensitivity"] = results_together["Recall"]
results_together = results_together[["Class", "Sensitivity", "Specificity", "Precision", "F1-score", "Frames", "Videos/Images"]]
print(results_together.to_latex())
results_together
###Output
_____no_output_____
###Markdown
Compute video accuracy
###Code
def video_accuracy(saved_logits, saved_gt, saved_files):
def preds_to_score(vid_preds_out):
return accuracy_score([p[2] for p in vid_preds_out], [p[1] for p in vid_preds_out])
def preds_to_balanced(vid_preds_out):
# print([p[1] for p in vid_preds_out], [p[2] for p in vid_preds_out])
return balanced_accuracy_score([p[2] for p in vid_preds_out], [p[1] for p in vid_preds_out])
scores_certainty, score_cert_bal = [], []
scores_majority, score_maj_bal = [], []
for i in range(len(saved_files)):
# print("-----------", i, "---------")
filenames = np.array(saved_files[i])
only_videos = np.where(np.array([len(name.split("."))==3 for name in filenames]))[0]
# print(len(only_videos), len(filenames))
logits_in = np.array(saved_logits[i])[only_videos]
files_in = filenames[only_videos]
gt_in = np.array(saved_gt[i])[only_videos]
vid_preds_certainty = average_certainty(logits_in, gt_in, files_in)
vid_preds_majority = majority_vote(np.argmax(logits_in, axis=1), gt_in, files_in)
scores_certainty.append(preds_to_score(vid_preds_certainty))
scores_majority.append(preds_to_score(vid_preds_majority))
score_maj_bal.append(preds_to_balanced(vid_preds_majority))
score_cert_bal.append(preds_to_balanced(vid_preds_certainty))
# print("certainty:", scores_certainty)
# print("majority:", scores_majority)
return scores_certainty, scores_majority, score_maj_bal, score_cert_bal
scores_certainty, scores_majority, score_maj_bal, score_cert_bal = video_accuracy(saved_logits, saved_gt, saved_files)
scores_certainty, scores_majority
score_maj_bal, score_cert_bal
print("RESULTS VIDEO ACCURACY:")
print("Accuracies: ", scores_certainty, "MEAN:", round(np.mean(scores_certainty), 3))
print("Balanced accs:", score_cert_bal, "MEAN:", round(np.mean(score_cert_bal),3))
print("RESULTS VIDEO ACCURACY:")
print("Accuracies: ", scores_certainty, "MEAN:", round(np.mean(scores_certainty), 3))
print("Balanced accs:", score_cert_bal, "MEAN:", round(np.mean(score_cert_bal),3))
print("number of images in each split")
for file_list in saved_files:
cutted_files = [files.split(".")[0] for files in file_list]
# print(np.unique(cutted_files))
image_number = [len(files.split("."))!=3 for files in file_list]
print(np.sum(image_number))
###Output
number of images in each split
0
5
0
15
8
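###Markdown
Note: `average_certainty` and `majority_vote` used inside `video_accuracy` are defined earlier in the notebook. A minimal sketch of the frame-to-video aggregation they perform, returning (video, prediction, ground truth) tuples as expected by `preds_to_score`; these `_sketch` versions are hypothetical stand-ins, not the original code:
###Code
def majority_vote_sketch(frame_preds, gts, files):
    # Hypothetical stand-in: majority vote over the frame-level class predictions of each video.
    videos = np.unique([f.split(".")[0] for f in files])
    out = []
    for vid in videos:
        idx = [i for i, f in enumerate(files) if f.split(".")[0] == vid]
        counts = np.bincount(np.asarray(frame_preds)[idx])
        out.append((vid, int(np.argmax(counts)), int(np.asarray(gts)[idx][0])))
    return out

def average_certainty_sketch(logits, gts, files):
    # Hypothetical stand-in: average the frame-level logits per video, then take the argmax.
    videos = np.unique([f.split(".")[0] for f in files])
    out = []
    for vid in videos:
        idx = [i for i, f in enumerate(files) if f.split(".")[0] == vid]
        mean_logits = np.mean(np.asarray(logits)[idx], axis=0)
        out.append((vid, int(np.argmax(mean_logits)), int(np.asarray(gts)[idx][0])))
    return out
###Output
_____no_output_____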
###Markdown
Confusion matrix plots Load the results
###Code
with open("cross_val_cam_3.dat", "rb") as outfile:
(saved_logits, saved_gt, saved_files) = pickle.load(outfile)
###Output
_____no_output_____
###Markdown
Sum up confusion matrices
###Code
all_cms = np.zeros((5,3,3))
for s in range(5):
# print(saved_files[s])
gt_s = saved_gt[s]
pred_idx_s = np.argmax(np.array(saved_logits[s]), axis=1)
assert len(gt_s)==len(pred_idx_s)
cm = np.array(confusion_matrix(gt_s, pred_idx_s))
all_cms[s] = cm
###Output
_____no_output_____
###Markdown
Function to make labels with std from the data
###Code
def data_to_label(data, text):
    # Build "mean±std" annotation labels for a 3x3 confusion matrix heatmap.
    labels = ["{0:.2f}\n".format(d) + u"\u00B1" + "{0:.2f}".format(s)
              for d, s in zip(data.flatten(), text.flatten())]
    return np.asarray(labels).reshape(3, 3)
###Output
_____no_output_____
###Markdown
Make figure
###Code
plt.figure(figsize = (25,6))
fig = plt.subplot(1,3,1)
ax = fig.axes
data_abs = np.sum(all_cms, axis=0)
df_cm = pd.DataFrame(data_abs, index = [i for i in ["COVID-19", "Pneumonia", "Healthy"]],
columns = [i for i in ["COVID-19", "Pneumonia", "Healthy"]])
sn.set(font_scale=1.5)
# plt.xticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Normal"), rotation=0, fontsize="17", va="center")
plt.yticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Healthy"), rotation=0, fontsize="17", va="center")
sn.heatmap(df_cm, annot=True, fmt="g", cmap="YlGnBu")
ax.xaxis.tick_top()
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
plt.xlabel('\nPredictions', size=25)
plt.ylabel('Ground truth', size=25)
plt.title("Absolute values\n", size=30,fontweight="bold")
# PRECISION SUBPLOT
fig = plt.subplot(1,3,2)
ax = fig.axes
data_prec = all_cms.copy()
for i in range(5):
data_prec[i] = data_prec[i]/np.sum(data_prec[i], axis=0)
prec_stds = np.std(data_prec, axis = 0)
data_prec = np.mean(data_prec, axis=0)
labels_prec = data_to_label(data_prec, prec_stds)
df_cm = pd.DataFrame(data_prec, index = [i for i in ["COVID-19", "Pneumonia", "Healthy"]],
columns = [i for i in ["COVID-19", "Pneumonia", "Healthy"]])
sn.set(font_scale=1.5)
ax.xaxis.tick_top()
plt.ylabel("ground truth")
plt.xlabel("predictions")
plt.title("Precision")
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
plt.yticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Healthy"), rotation=0, fontsize="17", va="center")
sn.heatmap(df_cm, annot=labels_prec, fmt='', cmap="YlGnBu")
plt.xlabel('\nPredictions', size=25)
plt.ylabel('Ground truth', size=25)
plt.title("Precision\n", size=30,fontweight="bold")
plt.savefig("confusion_matrix_newdata.pdf",bbox_inches='tight') #, bottom=0.2)
# SENSITIVITY SUBPLOT
fig = plt.subplot(1,3,3)
ax = fig.axes
data_sens = all_cms.copy()
for i in range(5):
sums_axis = np.sum(data_sens[i], axis=1)
data_sens[i] = np.array([data_sens[i,j,:]/sums_axis[j] for j in range(3)])
sens_stds = np.std(data_sens, axis = 0)
data_sens = np.mean(data_sens, axis=0)
labels_sens = data_to_label(data_sens, sens_stds)
df_cm = pd.DataFrame(data_sens, index = [i for i in ["COVID-19", "Pneumonia", "Healthy"]],
columns = [i for i in ["COVID-19", "Pneumonia", "Healthy"]])
# sn.set(font_scale=1.5)
plt.yticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Healthy"), rotation=0, fontsize="17", va="center")
#plt.xticks(np.arange(3)+0.5,("COVID-19", "Pneunomia", "Normal"), rotation=0, fontsize="17", va="center")
ax.xaxis.tick_top()
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False)
sn.heatmap(df_cm, annot=labels_sens, fmt='', cmap="YlGnBu")
plt.xlabel('\nPredictions', size=25)
plt.ylabel('Ground truth', size=25)
plt.title("Sensitivity (Recall)\n", size=30,fontweight="bold")
plt.savefig("confusion_matrix_all.pdf",bbox_inches='tight') #, bottom=0.2)
###Output
_____no_output_____
###Markdown
ROC AUC
###Code
from sklearn.metrics import roc_curve, roc_auc_score, precision_score, recall_score
###Output
_____no_output_____
###Markdown
Compute scores and curve
###Code
base_eval_points = np.linspace(0,1,200,endpoint=True)
def roc_auc(saved_logits, saved_gt):
data, scores, roc_auc_std = [], [], []
max_points = []
for i in range(3):
out_roc = np.zeros((5, len(base_eval_points)))
out_prec = np.zeros((5, len(base_eval_points)))
roc_auc = []
max_acc = []
# Iterate over folds
for k in range(5):
# get binary predictions for this class
gt = (saved_gt[k] == i).astype(int)
# pred = saved_logits[k][:, i]
if np.any(saved_logits[k]<0):
pred = np.exp(np.array(saved_logits[k]))[:, i]
else:
pred = np.array(saved_logits[k])[:, i]
roc_auc.append(roc_auc_score(gt, pred))
precs, recs, fprs, julie_points = [], [], [], []
for j, thresh in enumerate(np.linspace(0,1.1,100, endpoint=True)):
preds_thresholded = (pred>thresh).astype(int)
tp = np.sum(preds_thresholded[gt==1])
p = np.sum(gt)
n = len(gt)-p
fp = np.sum(preds_thresholded[gt==0])
inverted = np.absolute(preds_thresholded - 1)
tn = np.sum(inverted[gt==0])
fn = np.sum(inverted[gt==1])
fpr = fp/float(n)
tpr = tp/float(p)
if tp+fp ==0:
precs.append(1)
else:
precs.append(tp/(tp+fp))
recs.append(tpr)
fprs.append(fpr)
julie_points.append((tp+tn)/(tp+tn+fp+fn))
# clean
recs = np.asarray(recs)
precs = np.asarray(precs)
fprs = np.asarray(fprs)
sorted_inds = np.argsort(recs)
# prepare for precision-recall curve
precs_sorted = precs[sorted_inds]
recs_sorted = recs[sorted_inds]
precs_cleaned = precs_sorted[recs_sorted>0]
recs_cleaned = recs_sorted[recs_sorted>0]
precs_inter = np.interp(base_eval_points, recs_cleaned, precs_cleaned)
# prepare for roc-auc curve
sorted_inds = np.argsort(fprs)
recs_fpr_sorted = recs[sorted_inds]
fprs_sorted = fprs[sorted_inds]
roc_inter = np.interp(base_eval_points, fprs_sorted, recs_fpr_sorted)
# append current fold
out_prec[k] = precs_inter
out_roc[k] = roc_inter
# compute recall of max acc:
max_acc.append(recs[np.argmax(julie_points)])
# out_curve = np.mean(np.asarray(out_curve), axis=0)
prec_mean = np.mean(out_prec, axis=0)
prec_std = np.std(out_prec, axis=0)
roc_mean = np.mean(out_roc, axis=0)
roc_std = np.std(out_roc, axis=0)
# append scores
scores.append(round(np.mean(roc_auc),2))
roc_auc_std.append(round(np.std(roc_auc),2))
# point of maximum accuracy
max_points.append(np.mean(max_acc))
data.append((roc_mean, roc_std, prec_mean, prec_std))
return data, max_points, scores, roc_auc_std
def closest(in_list, point):
return np.argmin(np.absolute(np.asarray(in_list)-point))
from matplotlib import rc
plt.rcParams['legend.title_fontsize'] = 20
# plt.rcParams['axes.facecolor'] = 'white'
# activate latex text rendering
rc('text', usetex=False)
###Output
_____no_output_____
###Markdown
Load data
###Code
with open("cross_val_cam_3.dat", "rb") as outfile:
(saved_logits, saved_gt, saved_files) = pickle.load(outfile)
data, max_points, scores, roc_auc_std = roc_auc(saved_logits, saved_gt)
cols = ["red", "orange", "green"]
classes = ["COVID-19", "Pneumonia", "Healthy"]
###Output
_____no_output_____
###Markdown
ROC class comparison
###Code
plt.figure(figsize=(6,5))
plt.plot([0, 1], [0, 1], color='grey', lw=1.5, linestyle='--')
for i in range(3):
roc_mean, roc_std, _, _ = data[i]
lab = classes[i]+" (%.2f"%scores[i]+"$\pm$"+str(roc_auc_std[i])+")"
plt.plot(base_eval_points, roc_mean, 'k-', c=cols[i], label=lab, lw=3)
# print(len(r), max_points[i])
# print(base_eval_points[closest(roc_mean, max_points[i])], max_points[i])
plt.scatter(base_eval_points[closest(roc_mean, max_points[i])], max_points[i], s=150, marker="o", c=cols[i])
plt.fill_between(base_eval_points, roc_mean-roc_std, roc_mean+roc_std, alpha=0.1, facecolor=cols[i])
plt.ylim(0,1.03)
plt.xlim(-0.02,1)
plt.ylabel("$\\bf{Sensitivity}$", fontsize=20)
plt.xlabel("$\\bf{False\ positive\ rate}$", fontsize=20)
plt.legend(fontsize=18, title=" $\\bf{Class}\ \\bf(ROC-AUC)}$") # "\n $\\bf{(o:\ maximal\ accuracy)}$")
# plt.title("$\\bf{ROC\ curves}$", fontsize=15)
plt.savefig("new_plots/roc_curves_cam.pdf", bbox_inches='tight', pad_inches=0, transparent=True)
plt.show()
plt.figure(figsize=(6,5))
plt.plot([1, 0], [0, 1], color='grey', lw=1.5, linestyle='--')
for i in range(3):
_, _, prec_mean, prec_std = data[i]
# prec_cleaned = prec[rec>0]
# rec_cleaned = rec[rec>0]
# s2_cleaned = s2[rec>0]
lab = classes[i] # +" (%.2f"%scores[i]+"$\pm$"+str(roc_auc_std[i])+")"
plt.plot(base_eval_points, prec_mean, 'k-', c=cols[i], label=lab, lw=3)
plt.fill_between(base_eval_points, prec_mean-prec_std, prec_mean+prec_std, alpha=0.1, facecolor=cols[i])
plt.ylim(0,1.03)
plt.xlim(-0.02,1)
plt.ylabel("$\\bf{Precision}$", fontsize=20)
plt.xlabel("$\\bf{Recall}$", fontsize=20)
plt.legend(fontsize=18, title=" $\\bf{Class}$") # "\n $\\bf{(o:\ maximal\ accuracy)}$")
# plt.title("$\\bf{ROC\ curves}$", fontsize=15)
plt.savefig("new_plots/prec_rec_curves_cam.pdf", bbox_inches='tight', pad_inches=0, transparent=True)
plt.show()
from matplotlib import rc
plt.rcParams['legend.title_fontsize'] = 15
###Output
_____no_output_____
###Markdown
ROC-curve across models
###Code
CLASS = 1
name_dict = {"cross_val_gradcam_3":"VGG","cross_val_cam_3":"VGG-CAM", "NAS_B_3":"NASNetMobile","encoding_3":"Segment-Enc", "results_segment_3":"VGG-Segment"}
cols = ["red", "orange", "green", "blue", "purple"]
classes = ["COVID-19", "Pneumonia", "Healthy"]
# roc_auc_scores = np.mean(np.asarray(scores), axis=0)
fig = plt.figure(figsize=(6,5))
# plt.subplot(1,3,1)
plt.plot([0, 1], [0, 1], color='grey', lw=1.5, linestyle='--')
for i, model_data in enumerate(["cross_val_gradcam_3.dat", "NAS_B_3.dat", "cross_val_cam_3.dat", "encoding_3.dat", "results_segment_3.dat"]):
with open(model_data, "rb") as outfile:
(saved_logits, saved_gt, saved_files) = pickle.load(outfile)
data, max_points, scores, roc_auc_std = roc_auc(saved_logits, saved_gt)
roc_mean, roc_std, _, _ = data[CLASS]
lab = name_dict[model_data.split(".")[0]]+" (%.2f"%scores[CLASS]+"$\pm$"+str(roc_auc_std[CLASS])+")"
plt.plot(base_eval_points, roc_mean, 'k-', c=cols[i], label=lab, lw=3)
plt.scatter(base_eval_points[closest(roc_mean, max_points[CLASS])], max_points[CLASS], s=150, marker="o", c=cols[i])
plt.fill_between(base_eval_points, roc_mean-roc_std, roc_mean+roc_std, alpha=0.1, facecolor=cols[i])
# plt.ylim(0,1.03)
#
# # roc auc plotting
# fp, prec, rec, s, s2 = data[CLASS]
# lab = name_dict[model_data.split(".")[0]]+" (%.2f"%scores[CLASS]+"$\pm$"+str(roc_auc_std[CLASS])+")"
# plt.plot(fp, rec, 'k-', c=cols[i], label=lab, lw=3)
# # print(len(r), max_points[i])
# plt.scatter(fp[max_points[CLASS]], rec[max_points[CLASS]], s=150, marker="o", c=cols[i])
# plt.fill_between(fp, rec-s, rec+s, alpha=0.1, facecolor=cols[i])
plt.ylim(0,1.01)
plt.xlim(-0.02,1)
plt.ylabel("$\\bf{Sensitivity}$", fontsize=15)
plt.xlabel("$\\bf{False\ positive\ rate}$", fontsize=15)
plt.legend(fontsize=15, title=" $\\bf{Model}\ \\bf(ROC-AUC)}$") # "\n $\\bf{(o:\ maximal\ accuracy)}$")
# plt.title("ROC-curve (COVID-19)", fontsize=20)
plt.savefig("new_plots/roc_curve"+str(CLASS)+".pdf", bbox_inches='tight', pad_inches=0, transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Precision-recall-curve across models
###Code
CLASS = 0
fig = plt.figure(figsize=(6,5))
for i, model_data in enumerate(["cross_val_gradcam_3.dat", "NAS_B_3.dat", "cross_val_cam_3.dat", "encoding_3.dat", "results_segment_3.dat"]):
with open(model_data, "rb") as outfile:
(saved_logits, saved_gt, saved_files) = pickle.load(outfile)
data, max_points, scores, roc_auc_std = roc_auc(saved_logits, saved_gt)
_, _, prec_mean, prec_std = data[CLASS]
lab = name_dict[model_data.split(".")[0]]
plt.plot(base_eval_points, prec_mean, 'k-', c=cols[i], label=lab, lw=3)
plt.fill_between(base_eval_points, prec_mean-prec_std, prec_mean+prec_std, alpha=0.1, facecolor=cols[i])
# data, max_points, scores, roc_auc_std = roc_auc(saved_logits, saved_gt)
# # roc auc plotting
# fp, prec, rec, s, s2 = data[CLASS]
# prec_clean = np.asarray(prec)
# rec_clean = np.asarray(rec)
# prec_clean = prec_clean[rec_clean>0]
# s2_clean = np.asarray(s2)[rec_clean>0]
# rec_clean = rec_clean[rec_clean>0]
# lab = name_dict[model_data.split(".")[0]] # +" (%.2f"%scores[0]+"$\pm$"+str(roc_auc_std[0])+")"
# plt.plot(rec_clean, prec_clean, 'k-', c=cols[i], label=lab, lw=3)
# # plt.plot(rec_cheat, prec_cheat, 'k-', c=cols[i], label=lab, lw=3)
# # print(len(r), max_points[i])
# # plt.scatter(prec[max_points[0]], rec[max_points[0]], s=150, marker="o", c=cols[i])
# plt.fill_between(rec, prec-s2, prec+s2, alpha=0.1, facecolor=cols[i])
plt.ylim(0,1.01)
plt.xlim(-0.02,1.02)
plt.ylabel("$\\bf{Precision}$", fontsize=15)
plt.xlabel("$\\bf{Recall}$", fontsize=15)
plt.legend(fontsize=15, title=" $\\bf{Model}}$") # "\n $\\bf{(o:\ maximal\ accuracy)}$")
# plt.title("Precision-Recall-curve (Healthy)", fontsize=20)
plt.savefig("new_plots/prec_rec_"+str(CLASS)+".pdf", bbox_inches='tight', pad_inches=0, transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
fig = plt.figure(figsize=(6,5))
ax = fig.axes
# ABSOLUTE
# data_confusion = np.sum(all_cms, axis=0)
# PRECISION
# data_confusion = all_cms.copy()
# for i in range(5):
# data_confusion[i] = data_confusion[i]/np.sum(data_confusion[i], axis=0)
# prec_stds = np.std(data_confusion, axis = 0)
# data_confusion = np.mean(data_confusion, axis=0)
# labels = data_to_label(data_confusion, prec_stds)
# SENSITIVITY
data_confusion = all_cms.copy()
for i in range(5):
sums_axis = np.sum(data_confusion[i], axis=1)
data_confusion[i] = np.array([data_confusion[i,j,:]/sums_axis[j] for j in range(3)])
sens_stds = np.std(data_confusion, axis = 0)
data_confusion = np.mean(data_confusion, axis=0)
labels = data_to_label(data_confusion, sens_stds)
# ACTUAL PLOT
df_cm = pd.DataFrame(data_confusion, index = [i for i in ["COVID-19", "Pneumonia", "Healthy"]],
columns = [i for i in ["COVID-19", "Pneumonia", "Healthy"]])
sn.set(font_scale=1.8)
plt.xticks(np.arange(3)+0.5,("COVID-19", "Pneumonia", "Normal"), fontsize="18", va="center")
plt.yticks(np.arange(3)+0.5,("C", "P", "H"), rotation=0, fontsize="18", va="center")
# sn.heatmap(df_cm, annot=True, fmt="g", cmap="YlGnBu")
sn.heatmap(df_cm, annot=labels, fmt='', cmap="YlGnBu")
# ax.xaxis.tick_bottom()
plt.tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=True)
plt.xlabel("$\\bf{Predictions}$", fontsize=20)
plt.ylabel("$\\bf{Ground\ truth}$", fontsize=20)
# plt.title("Confusion matrix (VGG2)", fontsize=20) # "Absolute values\n", size=30,fontweight="bold")
plt.savefig("new_plots/conf_matrix_cam_sens.pdf", bbox_inches='tight', pad_inches=0, transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Compute roc-auc score
###Code
from sklearn.metrics import precision_score, recall_score, precision_recall_curve
for i in range(3):
roc_auc = []
for j in range(5):
# roc auc score
preds = saved_logits[j][:, i]
gt = (saved_gt[j] == i).astype(int)
# print(preds, gt)
roc_auc.append(roc_auc_score(gt, preds))
print(roc_auc)
###Output
[0.892032967032967, 0.973656549730653, 0.9834574417947026, 0.9990873599351012, 0.7984615384615384]
[0.9892034892034892, 0.9997367728349567, 0.9484573502722323, 0.9970982142857143, 0.8903361344537815]
[0.8470566660553823, 0.9701415701415701, 0.9848677248677248, 0.9994854076306696, 0.9768707482993197]
###Markdown
Save predictions in csv (from logits)
###Code
import pickle
with open("cross_val_gradcam_4.dat", "rb") as outfile:
(saved_logits, saved_gt, saved_files) = pickle.load(outfile)
CLASSES = ["covid", "pneumonia", "regular", "uninformative"]
dfs = []
for i in range(5):
df = pd.DataFrame()
df["fold"] = [i for _ in range(len(saved_gt[i]))]
df["filename"] = saved_files[i]
df["ground_truth"] = saved_gt[i]
df["prediction"] = np.argmax(saved_logits[i], axis=1)
df["probability"] = np.max(saved_logits[i], axis=1)
dfs.append(df)
together = pd.concat(dfs)
print(together.head())
print("number of files", len(together))
print("Accuracy for all predictions", np.sum(together["ground_truth"].values == together["prediction"].values)/len(together))
relevant_classes = together[together["ground_truth"]<3]
print(len(relevant_classes))
print("Accuracy for covid pneu relular predictions", np.sum(relevant_classes["ground_truth"].values == relevant_classes["prediction"].values)/len(relevant_classes))
# SAVE
together.to_csv("predictions_vgg_4.csv", index=False)
relevant_classes.to_csv("predictions_vgg_3.csv")
###Output
fold filename ground_truth prediction \
0 0 Pneu-Atlas-pneumonia.gif_frame18.jpg 1 1
1 0 Pneu-Atlas-pneumonia.gif_frame24.jpg 1 1
2 0 Pneu-Atlas-pneumonia.gif_frame30.jpg 1 1
3 0 pneu-everyday.gif_frame45.jpg 1 1
4 0 pneu-everyday.gif_frame51.jpg 1 1
probability
0 -0.000012
1 -0.000022
2 -0.000303
3 -0.052166
4 -0.401968
number of files 1765
Accuracy for all predictions 0.9172804532577904
1365
Accuracy for covid pneu relular predictions 0.8967032967032967
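###Markdown
Note: the saved "probability" column holds the maximum raw model output, which here is a log-softmax value (hence the negative numbers above). A minimal sketch of converting it into an actual probability, assuming the outputs are log-softmax as the `np.exp` guard in the ROC code above assumes:
###Code
# Hypothetical post-processing: exponentiate log-softmax outputs to get probabilities in [0, 1].
together["probability_exp"] = np.exp(together["probability"])
print(together[["probability", "probability_exp"]].head())
###Output
_____no_output_____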
###Markdown
Save predictions in csv files:
###Code
dfs = []
path_to_csv = "/Users/ninawiedemann/Desktop/Projects/covid19_pocus_ultrasound.nosync/pocovidnet/models/"
for filein in os.listdir(path_to_csv):
if filein[-3:]=="csv":
dfs.append(pd.read_csv(path_to_csv+filein))
one_df = pd.concat(dfs)
vid_name, frame_num, labels = [],[], []
label_dict = {"pne":1, "Pne":1, "Cov":0, "Reg":2}
for fn in one_df["Unnamed: 0"]:
parts = fn.split(".")
vid_name.append(parts[0])
labels.append(label_dict[parts[0][:3]])
if len(parts)==2:
frame_num.append(None)
elif len(parts)==3:
frame_num.append(int(parts[1][9:]))
classes = ["covid (0)", "pneumonia (1)", "healthy (2)"]
trans_df = pd.DataFrame()
trans_df["video"] = vid_name
trans_df["frame"] = frame_num
trans_df["label (0:cov, 1:pneu, 2:reg)"] = labels # [classes[l] for l in labels]
# add predictions
preds = np.array(one_df[["0","1","2"]])
sorted_preds = np.argsort(preds, axis=1)
trans_df["prediction (0:cov, 1:pneu, 2:reg)"] = sorted_preds[:,2] # [classes[l] for l in sorted_preds[:,2]]
trans_df["second_pred"] = sorted_preds[:,1]
trans_df["prob"] = np.max(preds, axis=1)
trans_df = trans_df.sort_values(by=["video", "frame"])
grouped = trans_df.groupby('video').agg({"prob":"mean", "label (0:cov, 1:pneu, 2:reg)":"first"})
grouped["preds"] = list(trans_df.groupby('video')["prediction (0:cov, 1:pneu, 2:reg)"].apply(list))
def most_frequent(List):
return max(set(List), key = List.count)
grouped["majority_vote"] = [most_frequent(val) for val in grouped["preds"].values]
gt_vid, preds_vid = (grouped["label (0:cov, 1:pneu, 2:reg)"].values, grouped["majority_vote"].values)
gt, preds = (trans_df["label (0:cov, 1:pneu, 2:reg)"].values, trans_df["prediction (0:cov, 1:pneu, 2:reg)"].values)
print("frame accuracy:", np.sum(gt==preds)/len(gt), "video accuracy", np.sum(gt_vid==preds_vid)/len(gt_vid))
grouped.to_csv("predictions.csv")
trans_df.to_csv("framewise_predictions.csv")
###Output
_____no_output_____
###Markdown
Old Covid-Net results
###Code
cm0 = np.array([[24., 12., 12.], [ 0., 28., 0.], [29., 4., 30.]])
cm1 = np.array([[ 0., 1., 48.],[ 0., 22., 0.],[ 0., 2., 109.]])
cm2 = np.array([[17., 5., 13.],[ 2., 24., 0.],[ 0., 0, 94.]])
cm3 = np.array([[30., 0., 0.],[ 0., 25., 0.],[ 3., 0, 85.]])
cm4 = np.array([[19., 0., 8.],[ 6., 25., 0.], [ 0., 0., 80.]])
# sensitivities
sens_reg = np.mean([0.5, 0, 0.486, 1.0, 0.704])
sens_pneu = np.mean([1.0, 1.0, 0.923, 1.0, 0.806])
sens_covid = np.mean([0.476, 0.982, 1.0, 0.966, 1.0])
# precisions
prec_reg = np.mean([0.453, 0, 0.895, 0.909, 0.76])
prec_pneu = np.mean([0.636, 0.88, 0.828, 1.0, 1.0])
prec_covid = np.mean([0.714, 0.694, 0.879, 1.0, 0.909])
accs_covidnet = [0.58992805, 0.719, 0.871, 0.979, 0.89855]
all_cms_cov_model = np.array([cm0, cm1, cm2, cm3, cm4])
print(all_cms_cov_model.shape)
###Output
_____no_output_____
###Markdown
Convert to latex tables
###Code
base_dir = "model_comparison"
class_map2 = {0:"COVID-19", 1:"Pneumonia", 2: "Healthy",3:"Uninformative"}
for model in ["encoding_4"]: # , "cam_4", "NAS_B_4"]: # ["vid_cam_3", "genesis_3"]: #
mean_table = pd.read_csv(os.path.join(base_dir, model+"_mean.csv"))
std_table = pd.read_csv(os.path.join(base_dir, model+"_std.csv"))
print("----------", model)
print(std_table)
for i, row in mean_table.iterrows():
std_row = std_table.loc[i] # std_table[std_table["Unnamed: 0"]=="covid"]
# if i==1:
# "& $", row["Accuracy"],"\\pm",std_row["Accuracy"],"$ &",
if i ==0:
print(row["Accuracy"], std_row["Accuracy"], row["Balanced"], std_row["Balanced"])
print("&", class_map2[i],
"& $", row["Recall"], "\\pm {\scriptstyle",std_row["Recall"],
"}$ & $", row["Precision"], "\\pm {\scriptstyle",std_row["Precision"],
"}$ & $", row["F1-score"], "\\pm {\scriptstyle",std_row["F1-score"],
"}$ & $", row["Specificity"], "\\pm {\scriptstyle",std_row["Specificity"],
"}$ & $",row["MCC"], "\\pm {\scriptstyle",std_row["MCC"], "} $ \\\\")
# WO standard deviation
# print("& row["Accuracy"],"&", class_map2[i],"&", row["Recall"],
# "&", row["Precision"], "&", row["F1-score"], "&", row["Specificity"], "&", row["MCC"], "\\\\")
base_dir = "model_comparison"
class_map2 = {0:"COVID-19", 1:"Pneumonia", 2: "Healthy"}
for model in ["frame_based_video_evaluation", "vid_based_video_evaluation"]:
mean_table = pd.read_csv(os.path.join(base_dir, model+".csv"))
print("----------", model)
for i, row in mean_table.iterrows():
std_row = std_table.loc[i] # std_table[std_table["Unnamed: 0"]=="covid"]
# if i==1:
# "& $", row["Accuracy"],"\\pm",std_row["Accuracy"],"$ &",
print(row["Accuracy"], row["Balanced"])
# WO standard deviation
print("&", class_map2[i],"&", row["recall"],
"&", row["precision"], "&", row["f1-score"], "&", row["Specificity"], "&", row["MCC"], "\\\\")
###Output
---------- frame_based_video_evaluation
0.95 0.93
& COVID-19 & 0.97 & 0.92 & 0.95 & 0.92 & 0.9 \\
0.95 0.93
& Pneumonia & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\
0.95 0.93
& Healthy & 0.82 & 0.93 & 0.87 & 0.98 & 0.84 \\
---------- vid_based_video_evaluation
0.87 0.88
& COVID-19 & 0.81 & 0.91 & 0.85 & 0.92 & 0.74 \\
0.87 0.88
& Pneumonia & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\
0.87 0.88
& Healthy & 0.82 & 0.67 & 0.74 & 0.88 & 0.66 \\
|
paper-figures/appendix_figure_8_relaysgd_vs_d2_on_star.ipynb | ###Markdown
Reach a fixed plateau
###Code
num_workers = 32
d = 10
# noise = 0.5
noise = 0
mu = 0.5
zetas = [0, .01, .1]
max_steps = [100, 500, 1000]
eval_intervals = [2, 11, 100]
plateau = 1e-6
seed = 0
warehouse = Warehouse()
algorithm_name = "Gossip"
algorithm = gossip
topology_name = "star"
topology = StarTopology(num_workers)
warehouse.clear("error", {"algorithm": algorithm_name})
for zeta2, eval_interval, num_steps in zip(zetas, eval_intervals, max_steps):
print(f"Tuning for {zeta2}")
task = RandomQuadraticsTask(num_workers, d=d, heterogeneity=zeta2, sgd_noise=noise, mu=mu, seed=seed)
steps, learning_rate = tuning.tune_plateau(start_lr=10, desired_plateau=plateau, task=task, algorithm=algorithm, topology=topology, max_steps=70000, num_test_points=1000)
if learning_rate is None:
continue
tags = {"algorithm": algorithm_name, "learning_rate": learning_rate, "zeta2": zeta2}
warehouse.clear("error", tags)
errors = {}
for iterate in algorithm(task, topology, learning_rate, num_steps):
if iterate.step % eval_interval == 0:
error = task.error(iterate.state).item() #/ initial_error
errors[iterate.step] = error
# if error - errors.get(iterate.step - eval_interval * 10, 1000000) > -1e-6:
# print("no improvement after", iterate.step)
# break
if error < 1e-12:
break
warehouse.log_metric(
"error", {"value": error, "step": iterate.step}, tags
)
print(error)
algorithm_name = "D2"
algorithm = d2
topology_name = "star"
topology = StarTopology(num_workers)
warehouse.clear("error", {"algorithm": algorithm_name})
for zeta2, eval_interval, num_steps in zip(zetas, eval_intervals, max_steps):
eval_interval = 1
task = RandomQuadraticsTask(num_workers, d=d, heterogeneity=zeta2, sgd_noise=noise, mu=mu, seed=seed)
tags = {"algorithm": algorithm_name, "learning_rate": learning_rate, "zeta2": zeta2}
steps, learning_rate = tuning.tune_fastest(start_lr=10, target_quality=plateau, task=task, algorithm=algorithm, topology=topology, max_steps=10000, num_test_points=1000)
print(steps, learning_rate)
warehouse.clear("error", tags)
errors = {}
for iterate in algorithm(task, topology, learning_rate, num_steps):
if iterate.step % eval_interval == 0:
error = task.error(iterate.state).item() #/ initial_error
errors[iterate.step] = error
# if error - errors.get(iterate.step - eval_interval * 10, 1000000) > -1e-6:
# print("no improvement after", iterate.step)
# break
if error < 1e-12:
break
warehouse.log_metric(
"error", {"value": error, "step": iterate.step}, tags
)
print(error)
algorithm_name = "Gradient tracking"
algorithm = gradient_tracking
topology_name = "star"
topology = StarTopology(num_workers)
warehouse.clear("error", {"algorithm": algorithm_name})
for zeta2, eval_interval, num_steps in zip(zetas, eval_intervals, max_steps):
eval_interval = 1
task = RandomQuadraticsTask(num_workers, d=d, heterogeneity=zeta2, sgd_noise=noise, mu=mu, seed=seed)
tags = {"algorithm": algorithm_name, "learning_rate": learning_rate, "zeta2": zeta2}
steps, learning_rate = tuning.tune_fastest(start_lr=10, target_quality=plateau, task=task, algorithm=algorithm, topology=topology, max_steps=10000, num_test_points=1000)
print(steps, learning_rate)
warehouse.clear("error", tags)
errors = {}
for iterate in algorithm(task, topology, learning_rate, num_steps):
if iterate.step % eval_interval == 0:
error = task.error(iterate.state).item() #/ initial_error
errors[iterate.step] = error
# if error - errors.get(iterate.step - eval_interval * 10, 1000000) > -1e-6:
# print("no improvement after", iterate.step)
# break
if error < 1e-12:
break
warehouse.log_metric(
"error", {"value": error, "step": iterate.step}, tags
)
print(error)
algorithm_name = "RelaySum/Model"
algorithm = relaysum_model
topology_name = "chain"
topology = StarTopology(num_workers)
warehouse.clear("error", {"algorithm": algorithm_name})
for zeta2, eval_interval, num_steps in zip(zetas, eval_intervals, max_steps):
eval_interval = 1
task = RandomQuadraticsTask(num_workers, d=d, heterogeneity=zeta2, sgd_noise=noise, mu=mu, seed=seed)
steps, learning_rate = tuning.tune_fastest(start_lr=10, target_quality=plateau, task=task, algorithm=algorithm, topology=topology, max_steps=10000, num_test_points=1000)
print(steps, learning_rate)
tags = {"algorithm": algorithm_name, "learning_rate": learning_rate, "zeta2": zeta2}
warehouse.clear("error", tags)
errors = {}
for iterate in algorithm(task, topology, learning_rate, num_steps):
if iterate.step % eval_interval == 0:
error = task.error(iterate.state).item() #/ initial_error
errors[iterate.step] = error
# if error - errors.get(iterate.step - eval_interval * 10, 1000000) > -1e-6:
# print("no improvement after", iterate.step)
# break
if error < 1e-12:
break
warehouse.log_metric(
"error", {"value": error, "step": iterate.step}, tags
)
print(error)
import seaborn as sns
sns.set_style("whitegrid")
from matplotlib import pyplot as plt
import matplotlib
matplotlib.rcParams['text.usetex'] = True
plt.rcParams.update({
"text.usetex": True,
"font.family": "serif",
"font.serif": ["Times"],
'text.latex.preamble' : r'\usepackage{amsmath}\usepackage{amssymb}\usepackage{newtxmath}'
})
df = warehouse.query("error")
df["zeta2name"] = df["zeta2"].replace({0: "0 (equal optimum)", .01: "0.01 (heterogeneous)", .1: "0.1 (very heterogeneous)"})
df["algoname"] = df.algorithm.replace({"D2": r"$\text{D}^2$", "RelaySum/Model": "RelaySGD"})
g = sns.FacetGrid(hue="algoname", col="zeta2name", data = df, height=2.5, sharex=False);
g.map(plt.plot, "step", "value")
for ax in g.axes[0]:
ax.axhline(1e-6, ls='--', c='gray', alpha=0.3)
g.set_titles(r"$\zeta^2 =$ {col_name}")
g.set(yscale="log");
# g.set_ylabels(r"Mean sq. distance to optimum")
g.set_ylabels(r"Suboptimality $f(\bar{\mathbf{x}}) - f(\mathbf{x}^\star)$")
g.set(xlabel="Steps")
g.tight_layout()
g.set(ylim=[1e-9, 1])
g.add_legend(title="");
g.savefig("effect_of_heterogeneity_fixed_saturation_star.pdf", bbox_inches="tight")
###Output
_____no_output_____
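###Markdown
The four runs above repeat the same tune-then-train loop with a different algorithm and tuning function each time. A hypothetical helper that factors out the shared pattern (not part of the original notebook, shown only to clarify the structure of the experiment):
###Code
def run_experiment(algorithm_name, algorithm, topology, tune_fn, **tune_kwargs):
    # Hypothetical refactoring of the shared tune-then-train loop used above.
    warehouse.clear("error", {"algorithm": algorithm_name})
    for zeta2, eval_interval, num_steps in zip(zetas, eval_intervals, max_steps):
        task = RandomQuadraticsTask(num_workers, d=d, heterogeneity=zeta2,
                                    sgd_noise=noise, mu=mu, seed=seed)
        steps, learning_rate = tune_fn(task=task, algorithm=algorithm,
                                       topology=topology, **tune_kwargs)
        if learning_rate is None:
            continue
        tags = {"algorithm": algorithm_name, "learning_rate": learning_rate, "zeta2": zeta2}
        warehouse.clear("error", tags)
        for iterate in algorithm(task, topology, learning_rate, num_steps):
            if iterate.step % eval_interval == 0:
                error = task.error(iterate.state).item()
                if error < 1e-12:
                    break
                warehouse.log_metric("error", {"value": error, "step": iterate.step}, tags)

# Example call mirroring the D2 run above:
# run_experiment("D2", d2, StarTopology(num_workers), tuning.tune_fastest,
#                start_lr=10, target_quality=plateau, max_steps=10000, num_test_points=1000)
###Output
_____no_output_____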
###Markdown
Trash bin
###Code
algorithm_name = "RelaySum/Grad"
algorithm = relaysum_grad
topology_name = "chain"
topology = StarTopology(num_workers)
seed = 3
warehouse.clear("error", {"algorithm": algorithm_name})
for zeta2, eval_interval, num_steps in zip(zetas, eval_intervals, max_steps):
task = RandomQuadraticsTask(num_workers, d=d, heterogeneity=zeta2, sgd_noise=noise, mu=mu, seed=seed)
steps, learning_rate = tuning.tune_plateau(start_lr=10, desired_plateau=plateau, task=task, algorithm=algorithm, topology=topology, max_steps=10000, num_test_points=1000)
print(steps, learning_rate)
tags = {"algorithm": algorithm_name, "learning_rate": learning_rate, "zeta2": zeta2}
warehouse.clear("error", tags)
errors = {}
for iterate in algorithm(task, topology, learning_rate, num_steps):
if iterate.step % eval_interval == 0:
error = task.error(iterate.state).item() #/ initial_error
errors[iterate.step] = error
# if error - errors.get(iterate.step - eval_interval * 10, 1000000) > -1e-6:
# print("no improvement after", iterate.step)
# break
if error < 1e-12:
break
warehouse.log_metric(
"error", {"value": error, "step": iterate.step}, tags
)
print(error)
###Output
17 0.60546875
4.4001555867632207e-14
17 0.5859375
7.90776377712632e-09
17 0.5859375
7.908125415623246e-08
|
P1 - Surya Vamsi .ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** Import Packages
###Code
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read in an Image
###Code
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
###Output
This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
###Markdown
Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson!
###Code
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def find_lanes(lines, img): # Obtain those lane lines which contain the points of lines > than a given slope threshold
lanes = [[],[]]
for line in lines:
x1, y1, x2, y2 = line[0]
if x1 == x2: # skip vertical segments to avoid division by zero
    continue
m = (y2 - y1)/(x2 - x1) # slope
b = y1 - m*x1 # y-intercept (not used below)
if m > 0.3:
lanes[0].append([x1, y1, x2, y2]) # positive sloped lines
elif m < -0.3:
lanes[1].append([x1, y1, x2, y2]) # negative sloped lines
return lanes
def draw_solid_line(img, lane, color, thickness): # This is for drawing a solid, single line on the image's lane markings.
x1, y1, x2, y2 = [int(x) for x in lane]
lane_array = np.array([(x1, y1), (x2, y2)], dtype=np.int32)
[vx, vy, x, y] = cv2.fitLine(lane_array,cv2.DIST_L2,0,0.01,0.01)
slope = vy / vx
intercept = y - (slope * x)
y1 = int(img.shape[0])
y2 = int(img.shape[0] * 0.6)
x1 = int((y1 - intercept) / slope)
x2 = int((y2 - intercept) / slope) # Using slope and intercept, we obtain the line's end points for plotting.
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def draw_lines(img, lines, color=[255, 0, 0], thickness=4): # modified draw_lines function.
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for lane_lines in find_lanes(lines, img):
mean_lane = np.mean(lane_lines, axis=0)
draw_solid_line(img, mean_lane, color, thickness) # call the draw_solid_line function.
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
###Output
_____no_output_____
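###Markdown
Of the OpenCV functions listed above, `cv2.inRange()` for color selection is the only one the helper functions do not use. A minimal sketch of selecting white and yellow lane pixels with it; the threshold values are illustrative assumptions, not tuned parameters:
###Code
# Hedged example of color selection with cv2.inRange (thresholds are assumptions, not tuned).
color_img = cv2.imread('test_images/solidWhiteRight.jpg')             # BGR image
hsv_img = cv2.cvtColor(color_img, cv2.COLOR_BGR2HSV)                  # HSV simplifies color thresholds

white_mask = cv2.inRange(color_img, np.array([200, 200, 200]), np.array([255, 255, 255]))
yellow_mask = cv2.inRange(hsv_img, np.array([15, 80, 120]), np.array([35, 255, 255]))

color_mask = cv2.bitwise_or(white_mask, yellow_mask)
color_selected = cv2.bitwise_and(color_img, color_img, mask=color_mask)  # keep only lane-colored pixels
plt.imshow(cv2.cvtColor(color_selected, cv2.COLOR_BGR2RGB))
###Output
_____no_output_____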
###Markdown
Test ImagesBuild your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.**
###Code
import os
os.listdir("test_images/")
###Output
_____no_output_____
###Markdown
Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
###Code
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
# My work starts here.
def processing(image):
img_copy = np.copy(image) # Save image if needed in future.
img_gray = grayscale(image) # convert to grayscale.
gb_image = gaussian_blur(img_gray, kernel_size=5) # remove gaussian blur.
edges = canny(gb_image, low_threshold=70, high_threshold=170) # obtain canny edges.
#plt.imshow(edges) # print out if verification needed.
# 540 x 960 - Shape of given image
lb = (0, 540)
c = (480, 315) # define vertices. left bottom, center, right bottom.
rb = (960, 540)
vertices = np.array([[lb, rb, c]]) # Choosing a triangle in the bottom half of the image.
masked_image = region_of_interest(edges, vertices) # select only the region we intend to look into for drawing lines.
# Now let's get some hough lines on the image.
lines = hough_lines(masked_image, rho=1, theta=np.pi/180, threshold=15, min_line_len=10, max_line_gap=20)
#plt.imshow(lines)
#plt.show()
weighted_image = weighted_img(lines, img_copy, α=0.8, β=1., γ=0.) # Average extrapolated line segments.
#plt.imshow(weighted_image)
return weighted_image
###Output
_____no_output_____
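###Markdown
The TODO above also asks for the processed images to be saved to the `test_images_output` directory. A minimal sketch of that loop, assuming the directory may need to be created and reusing the `processing` pipeline defined above:
###Code
import os

os.makedirs('test_images_output', exist_ok=True)
for fname in os.listdir('test_images/'):
    if not fname.lower().endswith(('.jpg', '.jpeg', '.png')):
        continue  # skip non-image files
    test_image = mpimg.imread(os.path.join('test_images', fname))   # RGB image
    annotated = processing(test_image)                               # pipeline defined above
    plt.imsave(os.path.join('test_images_output', fname), annotated)
###Output
_____no_output_____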
###Markdown
Test on VideosYou know what's cooler than drawing lanes over images? Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# Main Stuff.
result = processing(image) # Perform all procesings of the passed image(s)
return result
###Output
_____no_output_____
###Markdown
Let's try the one with the solid white lane on the right first ...
###Code
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
Moviepy - Building video test_videos_output/solidWhiteRight.mp4.
Moviepy - Writing video test_videos_output/solidWhiteRight.mp4
###Markdown
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
###Code
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
###Output
_____no_output_____
###Markdown
Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky!
###Code
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
###Output
_____no_output_____
###Markdown
Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
###Code
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
###Output
_____no_output_____ |
tracking_oregons_wildlife.ipynb | ###Markdown
Oregon Wildlife Image Classification Final Project Flatiron School - Washington DC Data Science Fellowship J. Mark Daniels, PhD Mount and Authorize Google Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
Binary Classification - Bobcat Import Libraries for Convolutional Neural Network (CNN)
###Code
from keras.layers import Dense, GlobalAveragePooling2D
from keras.callbacks import ModelCheckpoint
from keras.applications import inception_v3
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.models import load_model
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import itertools
from keras import models
from keras.models import Model
from keras import layers
from sklearn.metrics import confusion_matrix, f1_score
np.random.seed(123)
###Output
Using TensorFlow backend.
###Markdown
Prepare Data Import, Resize, and Rescale Images
###Code
def image_data_gen(origin_train, origin_test):
data_tr = ImageDataGenerator(rescale=1./255).flow_from_directory(
origin_train,
target_size=(224, 224),
batch_size=135,
seed=123)
data_te = ImageDataGenerator(rescale=1./255).flow_from_directory(
origin_test,
target_size=(224, 224),
batch_size=137,
seed=123)
return(data_tr, data_te)
data_tr, data_te = image_data_gen('/content/drive/My Drive/final_project_data/bobcat_cougar_data/train',
'/content/drive/My Drive/final_project_data/bobcat_cougar_data/test')
###Output
Found 680 images belonging to 2 classes.
Found 685 images belonging to 2 classes.
###Markdown
Split Images and Labels into Arrays
###Code
# images_tr, labels_tr = next(data_tr)
# images_te, labels_te = next(data_te)
# images = np.concatenate((images_tr, images_te))
# labels = np.concatenate((labels_tr[:,0], labels_te[:,0]))
###Output
_____no_output_____
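###Markdown
Note: the evaluation cells further below reference `X_train`, `y_train`, `X_test`, and `y_test`, which are not created in the cells shown here. A minimal sketch, assuming they come from single generator batches plus a scikit-learn split (the exact split behind the recorded outputs is not shown):
###Code
from sklearn.model_selection import train_test_split

# Hypothetical reconstruction of the array-based split used later for evaluation.
images_tr, labels_tr = next(data_tr)   # one full batch from the training generator
images_te, labels_te = next(data_te)   # one full batch from the test generator

images = np.concatenate((images_tr, images_te))
labels = np.concatenate((labels_tr[:, 0], labels_te[:, 0]))   # column 0 = first class (bobcat)

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=123)
###Output
_____no_output_____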
###Markdown
Convolutional Neural Network (CNN) Create Model
###Code
cnn = models.Sequential()
cnn.add(layers.Conv2D(64, (1, 1), activation='relu', input_shape=(224, 224, 3)))
cnn.add(layers.BatchNormalization())
cnn.add(layers.MaxPooling2D((2, 2)))
cnn.add(layers.Conv2D(64, (3, 3), activation='relu'))
cnn.add(layers.BatchNormalization())
# 64 bias parameters
# 64 * (3 * 3 * 64) weight parameters (the input here has 64 channels)
# Output is 110*110*64 (valid padding)
cnn.add(layers.MaxPooling2D((2, 2)))
# Output is 55*55*64 after pooling
cnn.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)))
cnn.add(layers.BatchNormalization())
# 32 bias parameters
# 32 * (3*3*64) weight parameters
# Output is 53*53*32 (valid padding)
cnn.add(layers.MaxPooling2D((2, 2)))
cnn.add(layers.Flatten())
cnn.add(layers.Dense(32, activation='relu'))
cnn.add(layers.Dense(1, activation='sigmoid'))
cnn.compile(loss='binary_crossentropy',
optimizer="adam",
metrics=['acc'])
###Output
_____no_output_____
###Markdown
Model Summary
###Code
print(cnn.summary())
###Output
_____no_output_____
###Markdown
Train Model
###Code
cnn1 = cnn.fit_generator(data_tr,
steps_per_epoch=5,
epochs=100, # 100
validation_data=data_te,
validation_steps=10)
###Output
_____no_output_____
###Markdown
Save Trained CNN Model
###Code
cnn.save('/content/drive/My Drive/final_project_data/models/cnn1.h5')
###Output
_____no_output_____
###Markdown
Load Saved CNN Model
###Code
# from numpy import loadtxt
# from keras.models import load_model
# cnn.load_weights('/Users/j.markdaniels/Desktop/flatiron_final_project/cnn_first_draft.h5')
# cnn.summary()
###Output
_____no_output_____
###Markdown
Evaluate CNN Model Performance
###Code
from keras.callbacks import History
history = History()
hist_cnn = cnn1.history
loss_values = hist_cnn['loss']
val_loss_values = hist_cnn['val_loss']
acc_values = hist_cnn['acc']
val_acc_values = hist_cnn['val_acc']
epochs = range(1, len(loss_values) + 1)
plt.figure(figsize=(15, 4))
plt.subplot(121)
plt.plot(epochs, loss_values, 'g.', label='Training loss')
plt.plot(epochs, val_loss_values, 'g', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(122)
plt.plot(epochs, acc_values, 'r.', label='Training acc')
plt.plot(epochs, val_acc_values, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Accuracy
###Code
results_train = cnn.evaluate(X_train, y_train)
results_test = cnn.evaluate(X_test, y_test)
print(results_train, results_test)
###Output
435/435 [==============================] - 1s 3ms/step
136/136 [==============================] - 0s 2ms/step
[5.8526789733401404e-05, 1.0] [0.8348332152647131, 0.8308823529411765]
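###Markdown
The training accuracy above (about 1.00) is far higher than the test accuracy (about 0.83), which suggests the small CNN is overfitting. One common mitigation is to add image augmentation to the training generator. The cell below is a hedged sketch of such a generator; the augmentation parameters are illustrative assumptions, not settings used elsewhere in this notebook.
###Code
# Hedged sketch: an augmented training generator (illustrative parameters only)
aug_data_tr = ImageDataGenerator(rescale=1./255,
                                 rotation_range=20,
                                 width_shift_range=0.1,
                                 height_shift_range=0.1,
                                 horizontal_flip=True).flow_from_directory(
    '/content/drive/My Drive/final_project_data/bobcat_cougar_data/train',
    target_size=(224, 224),
    batch_size=135,
    seed=123)
###Output
_____no_output_____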
###Markdown
Confusion Matrix
###Code
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
predictions_transfer = cnn.predict(X_test)
predictions_transfer = np.around(predictions_transfer)
plt.figure()
plot_confusion_matrix(confusion_matrix(y_test, predictions_transfer), classes=['bobcat', 'not bobcat'], normalize=False,
                      title='Confusion matrix - CNN')
###Output
Confusion matrix, without normalization
[[57 9]
[14 56]]
###Markdown
F1 Score
###Code
f1_score(y_test, predictions_transfer)
###Output
_____no_output_____
###Markdown
ROC Graph
###Code
import numpy as np
import sklearn
from sklearn import metrics
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
roc_predictions_transfer = cnn.predict(X_test)
fpr, tpr = roc_curve(y_test, roc_predictions_transfer)[:2]
auc_cnn = roc_auc_score(y_test, roc_predictions_transfer)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % auc_cnn)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
fpr.shape
tpr.shape
roc_predictions_transfer
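# Hedged sketch: roc_curve also returns the candidate thresholds, which can be
# used to pick an operating point instead of the fixed 0.5 implied by np.around
# above. Youden's J statistic (TPR - FPR) is one common criterion; the variable
# names below are assumptions introduced for this sketch.
fpr_t, tpr_t, thresholds = roc_curve(y_test, roc_predictions_transfer)
best_threshold = thresholds[np.argmax(tpr_t - fpr_t)]
print('Threshold maximising TPR - FPR:', best_threshold)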
###Output
_____no_output_____
###Markdown
Convolutional Neural Network with Inception (CNN-i) Create Model
###Code
imagenet = inception_v3.InceptionV3(weights='imagenet', include_top=False)
imagenet_new = imagenet.output
cnn_i = models.Sequential()
cnn_i.add(imagenet)
cnn_i.add(GlobalAveragePooling2D())
cnn_i.add(Dense(1024, activation='relu'))
cnn_i.add(Dense(1024, activation='relu')) # dense layer 2
cnn_i.add(Dense(512, activation='relu')) # dense layer 3
# final layer with sigmoid activation
cnn_i.add(Dense(1, activation='sigmoid'))
for i, layer in enumerate(imagenet.layers):
print(i, layer.name, layer.trainable)
for i, layer in enumerate(cnn_i.layers):
print(i, layer.name, layer.trainable)
for layer in cnn_i.layers[:1]:
layer.trainable = False
for i, layer in enumerate(cnn_i.layers):
print(i, layer.name, layer.trainable)
###Output
0 inception_v3 False
1 global_average_pooling2d_1 True
2 dense_3 True
3 dense_4 True
4 dense_5 True
5 dense_6 True
###Markdown
Model Summary
###Code
print(cnn_i.summary())
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
inception_v3 (Model) (None, None, None, 2048) 21802784
_________________________________________________________________
global_average_pooling2d_1 ( (None, 2048) 0
_________________________________________________________________
dense_3 (Dense) (None, 1024) 2098176
_________________________________________________________________
dense_4 (Dense) (None, 1024) 1049600
_________________________________________________________________
dense_5 (Dense) (None, 512) 524800
_________________________________________________________________
dense_6 (Dense) (None, 1) 513
=================================================================
Total params: 25,475,873
Trainable params: 3,673,089
Non-trainable params: 21,802,784
_________________________________________________________________
None
###Markdown
Train Model
###Code
checkpoint_cnn_i = ModelCheckpoint(filepath='/content/drive/My Drive/final_project_data/models/checkpoints_cnn_i.h5',
monitor='val_acc',
verbose=1,
save_best_only=False)
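# Note: with save_best_only=False a checkpoint is written after every epoch;
# set it to True to keep only the weights with the best monitored val_acc.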
cnn_i.compile(optimizer='Adam', loss='binary_crossentropy',
metrics=['accuracy'])
# step_size_train=train_generator.n//train_generator.batch_size
# cnn_i_1= cnn_i.fit(X_train,
# y_train,
# epochs=100, # 100
# batch_size=50, # 50
# validation_data=(X_val, y_val))
cnn1_i = cnn_i.fit_generator(data_tr,
steps_per_epoch=5,
epochs=100, # 100
validation_data=data_te,
validation_steps=10,
callbacks=[checkpoint_cnn_i])
###Output
Train on 435 samples, validate on 109 samples
Epoch 1/100
435/435 [==============================] - 10s 23ms/step - loss: 1.0269 - acc: 0.5747 - val_loss: 0.3953 - val_acc: 0.9266
Epoch 2/100
435/435 [==============================] - 1s 3ms/step - loss: 0.4176 - acc: 0.8299 - val_loss: 0.2700 - val_acc: 0.9358
Epoch 3/100
435/435 [==============================] - 1s 3ms/step - loss: 0.3397 - acc: 0.8414 - val_loss: 0.3118 - val_acc: 0.9174
Epoch 4/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1613 - acc: 0.9241 - val_loss: 0.2691 - val_acc: 0.9358
Epoch 5/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1484 - acc: 0.9540 - val_loss: 0.3245 - val_acc: 0.9450
Epoch 6/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1133 - acc: 0.9563 - val_loss: 0.3448 - val_acc: 0.9541
Epoch 7/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0916 - acc: 0.9793 - val_loss: 0.3375 - val_acc: 0.9358
Epoch 8/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0539 - acc: 0.9770 - val_loss: 0.4716 - val_acc: 0.9358
Epoch 9/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0992 - acc: 0.9678 - val_loss: 0.4280 - val_acc: 0.9541
Epoch 10/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1342 - acc: 0.9425 - val_loss: 0.4216 - val_acc: 0.9266
Epoch 11/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0726 - acc: 0.9747 - val_loss: 0.4269 - val_acc: 0.9450
Epoch 12/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0388 - acc: 0.9862 - val_loss: 0.5334 - val_acc: 0.9450
Epoch 13/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0299 - acc: 0.9931 - val_loss: 0.5436 - val_acc: 0.9358
Epoch 14/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0353 - acc: 0.9885 - val_loss: 0.5664 - val_acc: 0.9450
Epoch 15/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0301 - acc: 0.9885 - val_loss: 0.5855 - val_acc: 0.9450
Epoch 16/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0424 - acc: 0.9816 - val_loss: 0.5539 - val_acc: 0.9450
Epoch 17/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1103 - acc: 0.9632 - val_loss: 0.4896 - val_acc: 0.9450
Epoch 18/100
435/435 [==============================] - 1s 3ms/step - loss: 0.2358 - acc: 0.9034 - val_loss: 0.3089 - val_acc: 0.9450
Epoch 19/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0997 - acc: 0.9563 - val_loss: 0.4570 - val_acc: 0.9174
Epoch 20/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1917 - acc: 0.9218 - val_loss: 0.3969 - val_acc: 0.9174
Epoch 21/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1236 - acc: 0.9494 - val_loss: 0.3532 - val_acc: 0.9450
Epoch 22/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0331 - acc: 0.9908 - val_loss: 0.4376 - val_acc: 0.9266
Epoch 23/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0249 - acc: 0.9954 - val_loss: 0.6295 - val_acc: 0.9541
Epoch 24/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0284 - acc: 0.9931 - val_loss: 0.5239 - val_acc: 0.9450
Epoch 25/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0106 - acc: 0.9977 - val_loss: 0.5523 - val_acc: 0.9541
Epoch 26/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0145 - acc: 0.9954 - val_loss: 0.5563 - val_acc: 0.9541
Epoch 27/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0278 - acc: 0.9839 - val_loss: 0.5548 - val_acc: 0.9450
Epoch 28/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0134 - acc: 0.9954 - val_loss: 0.4628 - val_acc: 0.9358
Epoch 29/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0154 - acc: 0.9954 - val_loss: 0.5199 - val_acc: 0.9450
Epoch 30/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0092 - acc: 0.9977 - val_loss: 0.5126 - val_acc: 0.9358
Epoch 31/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0019 - acc: 1.0000 - val_loss: 0.5664 - val_acc: 0.9358
Epoch 32/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0129 - acc: 0.9931 - val_loss: 0.6519 - val_acc: 0.9358
Epoch 33/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0262 - acc: 0.9954 - val_loss: 0.5932 - val_acc: 0.9358
Epoch 34/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0161 - acc: 0.9954 - val_loss: 0.6171 - val_acc: 0.9450
Epoch 35/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0074 - acc: 0.9977 - val_loss: 0.6379 - val_acc: 0.9541
Epoch 36/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0295 - acc: 0.9908 - val_loss: 0.6004 - val_acc: 0.9266
Epoch 37/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0127 - acc: 0.9954 - val_loss: 0.4891 - val_acc: 0.9541
Epoch 38/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0127 - acc: 0.9977 - val_loss: 0.5549 - val_acc: 0.9450
Epoch 39/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0658 - acc: 0.9816 - val_loss: 0.3759 - val_acc: 0.9541
Epoch 40/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0528 - acc: 0.9816 - val_loss: 0.4661 - val_acc: 0.9174
Epoch 41/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1130 - acc: 0.9494 - val_loss: 0.5442 - val_acc: 0.9450
Epoch 42/100
435/435 [==============================] - 1s 3ms/step - loss: 0.1749 - acc: 0.9402 - val_loss: 0.2759 - val_acc: 0.9541
Epoch 43/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0177 - acc: 0.9954 - val_loss: 0.3779 - val_acc: 0.9450
Epoch 44/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0322 - acc: 0.9908 - val_loss: 0.5886 - val_acc: 0.9266
Epoch 45/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0266 - acc: 0.9931 - val_loss: 0.5635 - val_acc: 0.9450
Epoch 46/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0047 - acc: 0.9977 - val_loss: 0.5754 - val_acc: 0.9358
Epoch 47/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0301 - acc: 0.9908 - val_loss: 0.5078 - val_acc: 0.9450
Epoch 48/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0341 - acc: 0.9862 - val_loss: 0.4414 - val_acc: 0.9450
Epoch 49/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0266 - acc: 0.9885 - val_loss: 0.4096 - val_acc: 0.9266
Epoch 50/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0196 - acc: 0.9977 - val_loss: 0.4712 - val_acc: 0.9450
Epoch 51/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0061 - acc: 0.9977 - val_loss: 0.5065 - val_acc: 0.9450
Epoch 52/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0161 - acc: 0.9908 - val_loss: 0.5109 - val_acc: 0.9450
Epoch 53/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0198 - acc: 0.9954 - val_loss: 0.5810 - val_acc: 0.9450
Epoch 54/100
435/435 [==============================] - 1s 3ms/step - loss: 8.3675e-04 - acc: 1.0000 - val_loss: 0.5374 - val_acc: 0.9174
Epoch 55/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0222 - acc: 0.9908 - val_loss: 0.5448 - val_acc: 0.9450
Epoch 56/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0094 - acc: 0.9977 - val_loss: 0.4371 - val_acc: 0.9450
Epoch 57/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0070 - acc: 0.9977 - val_loss: 0.4755 - val_acc: 0.9450
Epoch 58/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0147 - acc: 0.9977 - val_loss: 0.4473 - val_acc: 0.9450
Epoch 59/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0013 - acc: 1.0000 - val_loss: 0.5412 - val_acc: 0.9450
Epoch 60/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0096 - acc: 0.9977 - val_loss: 0.5469 - val_acc: 0.9266
Epoch 61/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0036 - acc: 0.9977 - val_loss: 0.5284 - val_acc: 0.9450
Epoch 62/100
435/435 [==============================] - 1s 3ms/step - loss: 7.0041e-04 - acc: 1.0000 - val_loss: 0.5332 - val_acc: 0.9541
Epoch 63/100
435/435 [==============================] - 1s 3ms/step - loss: 2.6198e-04 - acc: 1.0000 - val_loss: 0.5498 - val_acc: 0.9541
Epoch 64/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0122 - acc: 0.9977 - val_loss: 0.5532 - val_acc: 0.9266
Epoch 65/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0052 - acc: 0.9977 - val_loss: 0.5397 - val_acc: 0.9266
Epoch 66/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0227 - acc: 0.9885 - val_loss: 0.4773 - val_acc: 0.9450
Epoch 67/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0267 - acc: 0.9885 - val_loss: 0.4857 - val_acc: 0.9266
Epoch 68/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0222 - acc: 0.9908 - val_loss: 0.3738 - val_acc: 0.9266
Epoch 69/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0057 - acc: 0.9977 - val_loss: 0.4471 - val_acc: 0.9450
Epoch 70/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0038 - acc: 0.9977 - val_loss: 0.5811 - val_acc: 0.9450
Epoch 71/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0038 - acc: 1.0000 - val_loss: 0.5635 - val_acc: 0.9266
Epoch 72/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0043 - acc: 0.9977 - val_loss: 0.6094 - val_acc: 0.9358
Epoch 73/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0272 - acc: 0.9931 - val_loss: 0.6073 - val_acc: 0.9174
Epoch 74/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0745 - acc: 0.9724 - val_loss: 0.5852 - val_acc: 0.9266
Epoch 75/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0260 - acc: 0.9931 - val_loss: 0.5071 - val_acc: 0.9450
Epoch 76/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0391 - acc: 0.9793 - val_loss: 0.7018 - val_acc: 0.9083
Epoch 77/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0302 - acc: 0.9885 - val_loss: 0.6897 - val_acc: 0.9266
Epoch 78/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0384 - acc: 0.9839 - val_loss: 0.5334 - val_acc: 0.9174
Epoch 79/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0299 - acc: 0.9885 - val_loss: 0.5768 - val_acc: 0.9541
Epoch 80/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0213 - acc: 0.9931 - val_loss: 0.5759 - val_acc: 0.9358
Epoch 81/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0428 - acc: 0.9862 - val_loss: 0.5342 - val_acc: 0.9450
Epoch 82/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0317 - acc: 0.9885 - val_loss: 0.4683 - val_acc: 0.9266
Epoch 83/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0093 - acc: 1.0000 - val_loss: 0.5148 - val_acc: 0.9266
Epoch 84/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0059 - acc: 1.0000 - val_loss: 0.5741 - val_acc: 0.9174
Epoch 85/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0015 - acc: 1.0000 - val_loss: 0.5685 - val_acc: 0.9541
Epoch 86/100
435/435 [==============================] - 1s 3ms/step - loss: 1.7765e-04 - acc: 1.0000 - val_loss: 0.5961 - val_acc: 0.9450
Epoch 87/100
435/435 [==============================] - 1s 3ms/step - loss: 9.4052e-05 - acc: 1.0000 - val_loss: 0.6117 - val_acc: 0.9450
Epoch 88/100
435/435 [==============================] - 1s 3ms/step - loss: 1.9702e-04 - acc: 1.0000 - val_loss: 0.6306 - val_acc: 0.9450
Epoch 89/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0142 - acc: 0.9954 - val_loss: 0.8231 - val_acc: 0.9174
Epoch 90/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0357 - acc: 0.9862 - val_loss: 0.5361 - val_acc: 0.9358
Epoch 91/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0079 - acc: 0.9977 - val_loss: 0.4714 - val_acc: 0.9174
Epoch 92/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0162 - acc: 0.9885 - val_loss: 0.4583 - val_acc: 0.9266
Epoch 93/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0023 - acc: 1.0000 - val_loss: 0.4861 - val_acc: 0.9358
Epoch 94/100
435/435 [==============================] - 1s 3ms/step - loss: 3.6037e-04 - acc: 1.0000 - val_loss: 0.5091 - val_acc: 0.9358
Epoch 95/100
435/435 [==============================] - 1s 3ms/step - loss: 5.2150e-04 - acc: 1.0000 - val_loss: 0.5321 - val_acc: 0.9358
Epoch 96/100
435/435 [==============================] - 1s 3ms/step - loss: 9.1561e-04 - acc: 1.0000 - val_loss: 0.5584 - val_acc: 0.9358
Epoch 97/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0037 - acc: 1.0000 - val_loss: 0.5819 - val_acc: 0.9541
Epoch 98/100
435/435 [==============================] - 1s 3ms/step - loss: 9.5983e-04 - acc: 1.0000 - val_loss: 0.6046 - val_acc: 0.9358
Epoch 99/100
435/435 [==============================] - 1s 3ms/step - loss: 2.0248e-04 - acc: 1.0000 - val_loss: 0.6080 - val_acc: 0.9266
Epoch 100/100
435/435 [==============================] - 1s 3ms/step - loss: 0.0136 - acc: 0.9977 - val_loss: 0.7036 - val_acc: 0.9266
###Markdown
Save Loaded CNN-i Model
###Code
cnn_i.save('/content/drive/My Drive/final_project_data/models/cnn_i.h5')
###Output
_____no_output_____
###Markdown
Load Saved CNN-i Model
###Code
# from numpy import loadtxt
# from keras.models import load_model
# model = load_model('/Users/j.markdaniels/iCloud Drive (Archive)/Desktop/Flatiron/projects/Final_Project/final_project/flatiron_final_project/cnn_i.h5')
# cnn_i.summary()
###Output
_____no_output_____
###Markdown
Evaluate CNN-i Model Performance
###Code
hist_cnn_i = cnn1_i.history
loss_values = hist_cnn_i['loss']
val_loss_values = hist_cnn_i['val_loss']
acc_values = hist_cnn_i['acc']
val_acc_values = hist_cnn_i['val_acc']
epochs = range(1, len(loss_values) + 1)
plt.figure(figsize=(15, 4))
plt.subplot(121)
plt.plot(epochs, loss_values, 'g.', label='Training loss')
plt.plot(epochs, val_loss_values, 'g', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(122)
plt.plot(epochs, acc_values, 'r.', label='Training acc')
plt.plot(epochs, val_acc_values, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Accuracy
###Code
results_train = cnn_i.evaluate(X_train, y_train)
results_test = cnn_i.evaluate(X_test, y_test)
print(results_train, results_test)
###Output
435/435 [==============================] - 3s 7ms/step
136/136 [==============================] - 1s 7ms/step
[0.5452979244377422, 0.9379310354419138] [0.5491433634477503, 0.9191176470588235]
###Markdown
Confusion Matrix
###Code
predictions_transfer_cnn_i = cnn_i.predict(X_test)
predictions_transfer_cnn_i = np.around(predictions_transfer_cnn_i)
plt.figure()
plot_confusion_matrix(confusion_matrix(y_test, predictions_transfer_cnn_i), classes=['bobcat', 'not bobcat'], normalize=False,
                      title='Confusion matrix - InceptionV3')
###Output
Confusion matrix, without normalization
[[57 9]
[14 56]]
###Markdown
F1 Score
###Code
predictions_transfer_cnn_i = cnn_i.predict(X_test)
predictions_transfer_cnn_i = np.around(predictions_transfer_cnn_i)
f1_score(y_test, predictions_transfer_cnn_i)
###Output
_____no_output_____
###Markdown
ROC Graph
###Code
import numpy as np
import sklearn
from sklearn import metrics
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
roc_predictions_transfer_cnn_i = cnn_i.predict(X_test)
fpr, tpr = roc_curve(y_test, roc_predictions_transfer_cnn_i)[:2]
auc_cnn_i = roc_auc_score(y_test, roc_predictions_transfer_cnn_i)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
         lw=lw, label='ROC curve (area = %0.2f)' % auc_cnn_i)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
20-class Wildlife Classification Import Libraries for Multiclass CNN with Inception
###Code
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import pandas as pd
import itertools
import seaborn as sns
import functools
from keras import models
from keras import layers
from keras.models import load_model, Model
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.model_selection import train_test_split
from shutil import copy2
from keras.applications import inception_v3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.metrics import top_k_categorical_accuracy
from keras.callbacks import ModelCheckpoint
np.random.seed(123)
###Output
_____no_output_____
###Markdown
Prepare Data Import, Resize, and Rescale Images
###Code
def multi_image_data_gen(origin_train, origin_test):
multi_data_tr = ImageDataGenerator(rescale=1./255).flow_from_directory(
origin_train,
target_size=(224, 224),
batch_size=100,
class_mode='categorical',
seed=123)
multi_data_te = ImageDataGenerator(rescale=1./255).flow_from_directory(
origin_test,
target_size=(224, 224),
batch_size=2776,
class_mode='categorical',
seed=123)
return(multi_data_tr, multi_data_te)
multi_data_tr, multi_data_te = multi_image_data_gen('/content/drive/My Drive/final_project_data/multiclass/train',
'/content/drive/My Drive/final_project_data/multiclass/test')
###Output
Found 11179 images belonging to 20 classes.
Found 2776 images belonging to 20 classes.
###Markdown
Split Images and Labels into Arrays
###Code
multi_images_tr, multi_labels_tr = next(multi_data_tr)
multi_images_te, multi_labels_te = next(multi_data_te)
###Output
_____no_output_____
###Markdown
Multi-class CNN-i Model Create Model
###Code
imagenet = inception_v3.InceptionV3(weights='imagenet', include_top=False)
imagenet_new = imagenet.output
multi_class_model = models.Sequential()
multi_class_model.add(imagenet)
multi_class_model.add(GlobalAveragePooling2D())
multi_class_model.add(Dense(1024, activation='relu'))
multi_class_model.add(Dense(1024, activation='relu')) # dense layer 2
multi_class_model.add(Dense(512, activation='relu')) # dense layer 3
# final layer with softmax activation
multi_class_model.add(Dense(20, activation='softmax'))
for i, layer in enumerate(imagenet.layers):
print(i, layer.name, layer.trainable)
for i, layer in enumerate(multi_class_model.layers):
print(i, layer.name, layer.trainable)
for layer in multi_class_model.layers[:1]:
layer.trainable = False
for i, layer in enumerate(multi_class_model.layers):
print(i, layer.name, layer.trainable)
###Output
0 inception_v3 False
1 global_average_pooling2d_1 True
2 dense_3 True
3 dense_4 True
4 dense_5 True
5 dense_6 True
###Markdown
Model Summary
###Code
print(multi_class_model.summary())
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
inception_v3 (Model) (None, None, None, 2048) 21802784
_________________________________________________________________
global_average_pooling2d_1 ( (None, 2048) 0
_________________________________________________________________
dense_3 (Dense) (None, 1024) 2098176
_________________________________________________________________
dense_4 (Dense) (None, 1024) 1049600
_________________________________________________________________
dense_5 (Dense) (None, 512) 524800
_________________________________________________________________
dense_6 (Dense) (None, 20) 10260
=================================================================
Total params: 25,485,620
Trainable params: 3,682,836
Non-trainable params: 21,802,784
_________________________________________________________________
None
###Markdown
Train Model
###Code
top3_acc = functools.partial(top_k_categorical_accuracy, k=3)
top3_acc.__name__ = 'top3_acc'
multi_class_model.compile(
optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy', top3_acc])
checkpoint = ModelCheckpoint(filepath='/content/drive/My Drive/final_project_data/models/checkpoints.h5',
monitor='val_acc',
verbose=1,
save_best_only=False)
multi_class_model_1 = multi_class_model.fit_generator(multi_data_tr,
steps_per_epoch=5,
epochs=25,
validation_data=multi_data_te,
validation_steps=10,
callbacks=[checkpoint])
###Output
Epoch 1/25
1/5 [=====>........................] - ETA: 3:38 - loss: 3.0708 - acc: 0.0100 - top3_acc: 0.0600
###Markdown
Save Loaded Multi-class Model
###Code
multi_class_model.save(
'/content/drive/My Drive/final_project_data/models/multiple_classes.h5')
###Output
_____no_output_____
###Markdown
Load Saved Multi-class Model
###Code
# from numpy import loadtxt
# from keras.models import load_model
# model = load_model('/content/drive/My Drive/final_project_data/models/multiple_classes.h5')
# multi_class_model.summary()
hist_multi = multi_class_model_1.history
loss_values = hist_multi['loss']
val_loss_values = hist_multi['val_loss']
acc_values = hist_multi['acc']
val_acc_values = hist_multi['val_acc']
epochs = range(1, len(loss_values) + 1)
plt.figure(figsize=(15, 4))
plt.subplot(121)
plt.plot(epochs, loss_values, 'g.', label='Training loss')
plt.plot(epochs, val_loss_values, 'g', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(122)
plt.plot(epochs, acc_values, 'r.', label='Training acc')
plt.plot(epochs, val_acc_values, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate Model
###Code
final_validation = ImageDataGenerator(rescale=1./255).flow_from_directory(
'/content/drive/My Drive/final_project_data/multiclass/test',
target_size=(224, 224),
batch_size=1,
class_mode='categorical',
seed=123)
multi_final_eval = multi_class_model.evaluate_generator(final_validation,
steps=2776)
###Output
/usr/local/lib/python3.6/dist-packages/PIL/Image.py:914: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
'to RGBA images')
###Markdown
Loss, Single Target Accuracy, Top-3 Accuracy
###Code
multi_final_eval
###Output
_____no_output_____
###Markdown
Pull Particular Image & Label (i)
###Code
predictions_transfer = multi_class_model.predict(multi_images_tr)
labels = [label for label in multi_data_te.class_indices]
k = 3
i = 97 # Range is limited by batch size
top_k_predictions = [x[:k] for x in (-predictions_transfer).argsort()]
top_values_index = top_k_predictions[i]
plt.imshow(multi_images_tr[i])
print('Top 3 guesses: {}'.format(
[labels[i].replace('_test', '') for i in top_values_index]))
print(labels[multi_labels_tr[i].argmax()].replace('_test', ''))
###Output
Top 3 guesses: ['cougar', 'ringtail', 'red_fox']
cougar
###Markdown
Confusion Matrix
###Code
predictions_transfer = multi_class_model.predict(multi_images_te)
X_train = multi_images_tr
X_test = multi_images_te
y_train = multi_labels_tr
y_test = multi_labels_te
y_pred = np.argmax(predictions_transfer, axis=1)
y_true = np.where(y_test != 0)[1]
# Calculate Confusion Matrix
cm = confusion_matrix(y_true, y_pred)
# classes = classes[unique_labels(y_true, y_pred)]
# Figure adjustment and heatmap plot
f = plt.figure(figsize=(20, 30))
ax = plt.subplot()
sns.heatmap(cm, annot=True, ax=ax, vmax=100, cbar=False, cmap='Paired',
mask=(cm == 0), fmt=',.0f', linewidths=2, linecolor='grey', )
# labels
ax.set_xlabel('Predicted labels', fontsize=16)
ax.set_ylabel('True labels', labelpad=30, fontsize=16)
ax.set_title('Confusion Matrix', fontsize=18)
ax.xaxis.set_ticklabels(labels, rotation=90)
ax.tick_params(axis="x", labelsize=18)
ax.yaxis.set_ticklabels(labels, rotation=0)
ax.tick_params(axis="y", labelsize=18)
ax.set_facecolor('white')
import numpy as np
# print(cm)
def get_cm_summary(cm):
    """Return per-class recall (TPR), precision (PPV) and accuracy from a confusion matrix."""
FP = cm.sum(axis=0) - np.diag(cm)
FN = cm.sum(axis=1) - np.diag(cm)
TP = np.diag(cm)
TN = cm.sum() - (FP + FN + TP)
FP = FP.astype(float)
FN = FN.astype(float)
TP = TP.astype(float)
TN = TN.astype(float)
# Sensitivity, hit rate, recall, or true positive rate
TPR = TP/(TP+FN)
# Specificity or true negative rate
TNR = TN/(TN+FP)
# Precision or positive predictive value
PPV = TP/(TP+FP)
# Negative predictive value
NPV = TN/(TN+FN)
# Fall out or false positive rate
FPR = FP/(FP+TN)
# False negative rate
FNR = FN/(TP+FN)
# False discovery rate
FDR = FP/(TP+FP)
# Overall accuracy
ACC = (TP+TN)/(TP+FP+FN+TN)
return TPR, PPV, ACC
get_cm_summary(cm)
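# Hedged sketch: tabulate the per-class recall (TPR), precision (PPV) and
# accuracy returned above against the class names. This assumes `labels` from
# the earlier cell is still in scope and is ordered the same way as the rows
# and columns of the confusion matrix `cm`.
TPR, PPV, ACC = get_cm_summary(cm)
cm_summary = pd.DataFrame({'class': labels, 'recall': TPR,
                           'precision': PPV, 'accuracy': ACC})
print(cm_summary)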
predictions_transfer = multi_class_model.predict(X_test)
labels = [label for label in multi_data_te.class_indices]
k = 3
i = 890
top_k_predictions = [x[:k] for x in (-predictions_transfer).argsort()]
top_values_index = top_k_predictions[i]
plt.imshow(X_test[i])
print('Top 3 guesses: {}'.format(
[labels[i].replace('_test', '') for i in top_values_index]))
print(labels[y_test[i].argmax()].replace('_test', ''))
multi_data_te.class_indices
###Output
_____no_output_____
###Markdown
Top-k Categorical Accuracy
###Code
import functools
from keras.metrics import top_k_categorical_accuracy
top_5_accuracy = functools.partial(top_k_categorical_accuracy, k=5)
top_5_accuracy.__name__ = 'top5_acc'
top10_acc = functools.partial(top_k_categorical_accuracy, k=10)
top10_acc.__name__ = 'top10_acc'
top3_acc = functools.partial(top_k_categorical_accuracy, k=3)
top3_acc.__name__ = 'top3_acc'
# Recompile the 20-class model so its metrics report top-3 and top-10 accuracy
multi_class_model.compile(loss="categorical_crossentropy", optimizer='Adam',
                          metrics=["accuracy", top3_acc, top10_acc])
###Output
_____no_output_____
###Markdown
Classify New File
###Code
new_files = ImageDataGenerator(rescale=1./255).flow_from_directory(
'/content/drive/My Drive/final_project_data/matts_game_cam/test/',
target_size=(224, 224),
batch_size=10,
class_mode='categorical',
seed=123)
new_file_images, new_file_labels = next(new_files)
predictions_transfer_new = multi_class_model.predict(new_file_images)
labels = [label for label in multi_data_te.class_indices]
k = 3
i = 3
top_k_predictions = [x[:k] for x in (-predictions_transfer_new).argsort()]
top_values_index = top_k_predictions[i]
plt.imshow(new_file_images[i])
print('Top 3 guesses: {}'.format(
[labels[i].replace('_test', '') for i in top_values_index]))
###Output
Top 3 guesses: ['gray_wolf', 'gray_fox', 'red_fox']
###Markdown
Extract Location Data from Image Files
###Code
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS
def get_exif(filename):
exif = Image.open(filename)._getexif()
if exif is not None:
for key, value in exif.items():
name = TAGS.get(key, key)
exif[name] = exif.pop(key)
if 'GPSInfo' in exif:
for key in exif['GPSInfo'].keys():
name = GPSTAGS.get(key, key)
exif['GPSInfo'][name] = exif['GPSInfo'].pop(key)
lat = [([i][0][0]) for i in exif['GPSInfo']['GPSLatitude']]
lat = float(str(lat[0])+'.'+str(lat[1])+str(lat[2]))
long = [([i][0][0]) for i in exif['GPSInfo']['GPSLongitude']]
long = -(float(str(long[0])+'.'+str(long[1])+str(long[2])))
return lat, long
exif = get_exif('/Users/j.markdaniels/Desktop/Data/matt_game_cam/IMG_4570.jpg')
exif
###Output
_____no_output_____
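###Markdown
Note that `get_exif` above builds the coordinate by string-concatenating the degree, minute and second numerators, which only roughly approximates decimal degrees. The standard conversion is degrees + minutes/60 + seconds/3600; the helper below is a hypothetical alternative sketch that assumes the EXIF rationals have already been reduced to plain numbers.
###Code
def dms_to_decimal(degrees, minutes, seconds, negative=False):
    """Convert degrees/minutes/seconds to decimal degrees (negative for S/W)."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if negative else value

# Example with hypothetical values: 43 deg 48' 36" N, 120 deg 33' 0" W
print(dms_to_decimal(43, 48, 36), dms_to_decimal(120, 33, 0, negative=True))
###Output
_____no_output_____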
###Markdown
Plot to a Folium Map
###Code
import folium
oregon_map = folium.Map(location=[43.8, -120.5], zoom_start=7)
oregon_map
import folium
# Load an image and run function to extract location data
image_loc = get_exif('/Users/j.markdaniels/Desktop/Hollin Farms.jpg')
current_map = folium.Map(location=image_loc, zoom_start=7)
def to_marker(location):
return folium.Circle((location), radius=3, prefer_canvas=True, color='blue')
def add_markers(markers, map_obj):
for marker in markers:
marker.add_to(map_obj)
return map_obj
image_marker = to_marker(image_loc)
image_marker.add_to(current_map)
current_map
###Output
_____no_output_____ |
ai-platform-unified/notebooks/unofficial/gapic/custom/showcase_custom_image_classification_batch.ipynb | ###Markdown
Vertex AI client library: Custom training image classification model for batch prediction Run in Colab View on GitHub OverviewThis tutorial demonstrates how to use the Vertex AI Python client library to train and deploy a custom image classification model for batch prediction. DatasetThe dataset used for this tutorial is the [CIFAR10 dataset](https://www.tensorflow.org/datasets/catalog/cifar10) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. ObjectiveIn this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex AI client library, and then do a batch prediction on the uploaded model. You can alternatively create custom models using `gcloud` command-line tool or online using Google Cloud Console.The steps performed include:- Create a Vertex AI custom job for training a model.- Train the TensorFlow model.- Retrieve and load the model artifacts.- View the model evaluation.- Upload the model as a Vertex AI `Model` resource.- Make a batch prediction. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/ai-platform-unified/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. InstallationInstall the latest version of Vertex AI client library.
###Code
import sys
if "google.colab" in sys.modules:
USER_FLAG = ""
else:
USER_FLAG = "--user"
! pip3 install -U google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the Vertex AI client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex AI APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Vertex AI Notebooks.5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. For the latest support per region, see the [Vertex AI locations documentation](https://cloud.google.com/ai-platform-unified/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Vertex AI Notebooks**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you submit a custom training job using the Vertex AI client library, you upload a Python packagecontaining your training code to a Cloud Storage bucket. Vertex AI runsthe code from this package. In this tutorial, Vertex AI also saves thetrained model that results from your job in the same bucket. You can thencreate an `Endpoint` resource based on this output in order to serveonline predictions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex AI client libraryImport the Vertex AI client library into our Python environment.
###Code
import os
import sys
import time
import google.cloud.aiplatform_v1 as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
###Output
_____no_output_____
###Markdown
Vertex AI constantsSetup up the following constants for Vertex AI:- `API_ENDPOINT`: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.- `PARENT`: The Vertex AI location root path for dataset, model, job, pipeline and endpoint resources.
###Code
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
###Output
_____no_output_____
###Markdown
Hardware AcceleratorsSet the hardware accelerators (e.g., GPU), if any, for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100Otherwise specify `(None, None)` to use a container image to run on a CPU.*Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Container (Docker) imageNext, we will set the Docker container images for training and prediction - TensorFlow 1.15 - `gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest` - TensorFlow 2.1 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest` - TensorFlow 2.2 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest` - TensorFlow 2.3 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest` - TensorFlow 2.4 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest` - XGBoost - `gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1` - Scikit-learn - `gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest` - Pytorch - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest`For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers). - TensorFlow 1.15 - `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest` - `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest` - TensorFlow 2.1 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest` - TensorFlow 2.2 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest` - TensorFlow 2.3 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest` - XGBoost - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest` - Scikit-learn - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest` - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest` - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest`For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers)
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Machine TypeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own custom model and training for CIFAR10. Set up clientsThe Vertex AI client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex AI server.You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.- Model Service for `Model` resources.- Endpoint Service for deployment.- Job Service for batch jobs and custom training.- Prediction Service for serving.
###Code
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
###Output
_____no_output_____
###Markdown
Train a modelThere are two ways you can train a custom model using a container image:- **Use a Google Cloud prebuilt container**. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.- **Use your own custom container image**. If you use your own container, the container needs to contain your code for training a custom model. Prepare your custom job specificationNow that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:- `worker_pool_spec` : The specification of the type of machine(s) you will use for training and how many (single or distributed)- `python_package_spec` : The specification of the Python package to be installed with the pre-built container. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex AI what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define the worker pool specificationNext, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:- `replica_count`: The number of instances to provision of this machine type.- `machine_spec`: The hardware specification.- `disk_spec` : (optional) The disk storage specification.- `python_package`: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.Let's dive deeper now into the python package specification:-`executor_image_spec`: This is the docker image which is configured for your custom training job.-`package_uris`: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.-`python_module`: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking `trainer.task.py` -- note that it was not necessary to append the `.py` suffix.-`args`: The command line arguments to pass to the corresponding Python module. In this example, you will be setting: - `"--model-dir=" + MODEL_DIR` : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts: - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or - indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification. - `"--epochs=" + EPOCHS`: The number of epochs for training. - `"--steps=" + STEPS`: The number of steps (batches) per epoch. - `"--distribute=" + TRAIN_STRATEGY` : The training distribution strategy to use for single or distributed training. - `"single"`: single device. - `"mirror"`: all GPU devices on a single compute instance. - `"multi"`: all GPU devices on all compute instances.
###Code
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Assemble a job specificationNow assemble the complete description for the custom job specification:- `display_name`: The human readable name you assign to this custom job.- `job_spec`: The specification for the custom job. - `worker_pool_specs`: The specification for the machine VM instances. - `base_output_directory`: This tells the service the Cloud Storage location where to save the model artifacts (when variable `DIRECT = False`). The service will then pass the location to the training script as the environment variable `AIP_MODEL_DIR`, and the path will be of the form: `<output_uri_prefix>/model`
###Code
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
###Output
_____no_output_____
###Markdown
Examine the training package Package layoutBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.pyThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job. *Note*, when we referred to it in the worker pool specification, we replace the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`). Package AssemblyIn the following cells, you will assemble the training package.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Task.py contentsIn the next cell, you write the contents of the training script task.py. We won't go into detail, it's just there for you to browse. In summary:- Gets the directory where to save the model artifacts from the command line (`--model-dir`), and if not specified, then from the environment variable `AIP_MODEL_DIR`.- Loads CIFAR10 dataset from TF Datasets (tfds).- Builds a model using TF.Keras model API.- Compiles the model (`compile()`).- Sets a training distribution strategy according to the argument `args.distribute`.- Trains the model (`fit()`) with epochs and steps according to the arguments `args.epochs` and `args.steps`.- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
###Code
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
###Output
_____no_output_____
###Markdown
Train the modelNow start the training of your custom training job on Vertex AI. Use this helper function `create_custom_job`, which takes the following parameter:-`custom_job`: The specification for the custom job.The helper function calls the job client service's `create_custom_job` method, with the following parameters:-`parent`: The Vertex AI location path to `Dataset`, `Model` and `Endpoint` resources.-`custom_job`: The specification for the custom job.You will display a handful of the fields returned in the `response` object; the two of most interest are:`response.name`: The Vertex AI fully qualified identifier assigned to this custom training job. You save this identifier for use in subsequent steps.`response.state`: The current state of the custom training job.
###Code
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the custom job you created.
###Code
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
###Output
_____no_output_____
###Markdown
Get information on a custom jobNext, use this helper function `get_custom_job`, which takes the following parameter:- `name`: The Vertex AI fully qualified identifier for the custom job.The helper function calls the job client service's`get_custom_job` method, with the following parameter:- `name`: The Vertex AI fully qualified identifier for the custom job.If you recall, you got the Vertex AI fully qualified identifier for the custom job in the `response.name` field when you called the `create_custom_job` method, and saved the identifier in the variable `job_id`.
###Code
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
###Output
_____no_output_____
###Markdown
DeploymentTraining the above model may take upwards of 20 minutes.Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, we will need to know the location of the saved model, which the Python script saved in your local Cloud Storage bucket at `MODEL_DIR + '/saved_model.pb'`.
###Code
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
###Output
_____no_output_____
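###Markdown
(optional) Before loading the model, you can verify that the training job wrote the SavedModel artifacts (for example, `saved_model.pb` and the `variables/` folder) to the Cloud Storage location in `MODEL_DIR`. This is just a quick sanity check.
###Code
# List the exported SavedModel artifacts in Cloud Storage
! gsutil ls -r $MODEL_DIR
###Output
_____no_output_____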
###Markdown
Load the saved modelYour model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.To load, you use the `tf.keras.models.load_model()` method, passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`.
###Code
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Evaluate the modelNow find out how good the model is. Load evaluation dataYou will load the CIFAR10 test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.You don't need the training data, which is why you load it as `(_, _)`.Before you can run the data through evaluation, you need to preprocess it:x_test:1. Normalize (rescale) the pixel data by dividing each pixel by 255. This will replace each single byte integer pixel with a 32-bit floating point number between 0 and 1.y_test:2. The labels are currently scalar (sparse). If you look back at the `compile()` step in the `trainer/task.py` script, you will find that it was compiled for sparse labels, so you don't need to do anything more.
###Code
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
###Output
_____no_output_____
###Markdown
Perform the model evaluationNow evaluate how well the model in the custom job did.
###Code
model.evaluate(x_test, y_test)
###Output
_____no_output_____
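###Markdown
(optional) Beyond the aggregate metrics, you can inspect a few individual predictions locally by taking the argmax over the ten class probabilities. This is only an illustrative check and not part of the deployment workflow.
###Code
# Predict the first few test images and compare the predicted class index with the label
probs = model.predict(x_test[:5])
print("Predicted classes:", np.argmax(probs, axis=1))
print("True labels:      ", y_test[:5].flatten())
###Output
_____no_output_____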
###Markdown
Upload the model for servingNext, you will upload your TF.Keras model from the custom job to Vertex AI `Model` service, which will create a Vertex AI `Model` resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex AI, your serving function ensures that the data is decoded on the model server before it is passed as input to your model. How does the serving function workWhen you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a `tf.string`.The serving function consists of two parts:- `preprocessing function`: - Converts the input (`tf.string`) to the input shape and data type of the underlying model (dynamic graph). - Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.- `post-processing function`: - Converts the model output to the format expected by the receiving application -- e.g., compresses the output. - Packages the output for the receiving application -- e.g., add headings, make JSON object, etc.Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.One consideration to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function which will indicate that you are using an EagerTensor which is not supported. Serving function for image dataTo pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.To resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (`tf.string`), which is passed to the serving function (`serving_fn`). The serving function preprocesses the `tf.string` into raw (uncompressed) numpy bytes (`preprocess_fn`) to match the input requirements of the model:- `io.decode_jpeg` - Decompresses the JPG image, which is returned as a Tensorflow tensor with three channels (RGB).- `image.convert_image_dtype` - Converts the integer pixel values to float32 and rescales (normalizes) them to the range 0 to 1, matching the scaling used during training.- `image.resize` - Resizes the image to match the input shape for the model.At this point, the data can be passed to the model (`m_call`).
###Code
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
    # convert_image_dtype above already rescales the uint8 pixel values to [0, 1],
    # matching the scaling applied during training, so no further division is needed
    return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
###Output
_____no_output_____
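###Markdown
(optional) You can smoke test the serving function locally before uploading the model by encoding a synthetic image as JPEG bytes and passing it through `serving_fn`. This sketch only checks that the output has the expected shape -- a batch of ten class probabilities.
###Code
# Encode a random 32x32 RGB image as JPEG bytes and run it through the serving function
random_image = tf.cast(tf.random.uniform([32, 32, 3], maxval=256, dtype=tf.int32), tf.uint8)
jpeg_bytes = tf.io.encode_jpeg(random_image)
probs = serving_fn(tf.constant([jpeg_bytes.numpy()]))
print(probs.shape)  # expected: (1, 10)
###Output
_____no_output_____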
###Markdown
Get the serving function signatureYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as an HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
###Code
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
###Output
_____no_output_____
###Markdown
Upload the modelUse this helper function `upload_model` to upload your model, stored in SavedModel format, up to the `Model` service, which will instantiate a Vertex AI `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other Vertex AI `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions.The helper function takes the following parameters:- `display_name`: A human readable name for the `Model` resource.- `image_uri`: The container image for the model deployment.- `model_uri`: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the `trainer/task.py` saved the model artifacts, which we specified in the variable `MODEL_DIR`.The helper function calls the `Model` client service's method `upload_model`, which takes the following parameters:- `parent`: The Vertex AI location root path for `Dataset`, `Model` and `Endpoint` resources.- `model`: The specification for the Vertex AI `Model` resource instance.Let's now dive deeper into the Vertex AI model specification `model`. This is a dictionary object that consists of the following fields:- `display_name`: A human readable name for the `Model` resource.- `metadata_schema_uri`: Since your model was built without a Vertex AI `Dataset` resource, you will leave this blank (`''`).- `artifact_uri`: The Cloud Storage path where the model is stored in SavedModel format.- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.Uploading a model into a Vertex AI Model resource returns a long running operation, since it may take a few moments. You call `response.result()`, which is a synchronous call and will return when the Vertex AI Model resource is ready.The helper function returns the Vertex AI fully qualified identifier for the corresponding Vertex AI Model instance, `upload_model_response.model`. You will save the identifier for subsequent steps in the variable `model_to_deploy_id`.
###Code
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"cifar10-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
###Output
_____no_output_____
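###Markdown
(optional) To confirm the upload, you can list the `Model` resources in your project and region and look for the display name you just used. This is just an illustrative check using the model client's `list_models` method.
###Code
# List Model resources in the project/region and show their display names and identifiers
for m in clients["model"].list_models(parent=PARENT):
    print(m.display_name, "|", m.name)
###Output
_____no_output_____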
###Markdown
Get `Model` resource informationNow let's get the model information for just your model. Use this helper function `get_model`, with the following parameter:- `name`: The Vertex AI unique identifier for the `Model` resource.This helper function calls the Vertex AI `Model` client service's method `get_model`, with the following parameter:- `name`: The Vertex AI unique identifier for the `Model` resource.
###Code
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Model deployment for batch predictionNow deploy the trained Vertex AI `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource.For batch-prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will unprovision the resources for the batch prediction request. Make a batch prediction requestNow do a batch prediction to your deployed model. Get test itemsYou will use examples out of the test (holdout) portion of the dataset as test items.
###Code
test_image_1 = x_test[0]
test_label_1 = y_test[0]
test_image_2 = x_test[1]
test_label_2 = y_test[1]
print(test_image_1.shape)
###Output
_____no_output_____
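###Markdown
(optional) The CIFAR10 labels are integer indices. To make the test labels easier to read, you can map them to the ten class names (standard CIFAR10 ordering). This is only for illustration.
###Code
# Standard CIFAR10 class ordering (index -> name)
CLASS_NAMES = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]
print("Test label 1:", CLASS_NAMES[int(test_label_1)])
print("Test label 2:", CLASS_NAMES[int(test_label_2)])
###Output
_____no_output_____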
###Markdown
Prepare the request contentYou are going to send the CIFAR10 images as compressed JPG images, instead of the raw uncompressed bytes:- `cv2.imwrite`: Use OpenCV to write the uncompressed image to disk as a compressed JPEG image. - Denormalize the image data from \[0,1) range back to [0,255). - Convert the 32-bit floating point values to 8-bit unsigned integers.
###Code
import cv2
cv2.imwrite("tmp1.jpg", (test_image_1 * 255).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2 * 255).astype(np.uint8))
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, you will copy the test items over to your Cloud Storage bucket.
###Code
! gsutil cp tmp1.jpg $BUCKET_NAME/tmp1.jpg
! gsutil cp tmp2.jpg $BUCKET_NAME/tmp2.jpg
test_item_1 = BUCKET_NAME + "/tmp1.jpg"
test_item_2 = BUCKET_NAME + "/tmp2.jpg"
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `input_name`: the name of the input layer of the underlying model.- `'b64'`: A key that indicates the content is base64 encoded.- `content`: The compressed JPG image bytes as a base64 encoded string.Each instance in the prediction request is a dictionary entry of the form: {serving_input: {'b64': content}}To pass the image data to the prediction service you encode the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network.- `tf.io.read_file`: Read the compressed JPG images into memory as raw bytes.- `base64.b64encode`: Encode the raw bytes into a base64 encoded string.
###Code
import base64
import json
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
bytes = tf.io.read_file(test_item_1)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
bytes = tf.io.read_file(test_item_2)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
###Output
_____no_output_____
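###Markdown
(optional) You can verify the contents of the batch input file by printing it back from Cloud Storage -- each line should be a single JSON object keyed by the serving function input name. Note that the base64 strings make the lines fairly long.
###Code
# Display the JSONL batch input file you just wrote
! gsutil cat $gcs_input_uri
###Output
_____no_output_____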
###Markdown
Compute instance scalingYou have several choices on scaling the compute instances for handling your batch prediction requests:- Single Instance: The batch prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.- Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.- Auto Scaling: The batch prediction requests are split across a scalable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
###Code
MIN_NODES = 1
MAX_NODES = 1
###Output
_____no_output_____
###Markdown
Make batch prediction requestNow that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:- `display_name`: The human readable name for the prediction job.- `model_name`: The Vertex AI fully qualified identifier for the `Model` resource.- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.- `parameters`: Additional filtering parameters for serving prediction results.The helper function calls the job client service's `create_batch_prediction_job` method, with the following parameters:- `parent`: The Vertex AI location root path for Dataset, Model and Pipeline resources.- `batch_prediction_job`: The specification for the batch prediction job.Let's now dive into the specification for the `batch_prediction_job`:- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex AI fully qualified identifier for the `Model` resource.- `dedicated_resources`: The compute resources to provision for the batch prediction job. - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated. - `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`. - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.- `model_parameters`: Additional filtering parameters for serving prediction results. No additional parameters are supported for custom models.- `input_config`: The input source and format type for the instances to predict. - `instances_format`: The format of the batch prediction request file: `csv` or `jsonl`. - `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.- `output_config`: The output destination and format for the predictions. - `predictions_format`: The format of the batch prediction response file: `csv` or `jsonl`. - `gcs_destination`: The output destination for the predictions.This call is an asynchronous operation. You will print from the response object a few select fields, including:- `name`: The Vertex AI fully qualified identifier assigned to the batch prediction job.- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex AI fully qualified identifier for the Model resource.- `generate_explanations`: Whether True/False explanations were provided with the predictions (explainability).- `state`: The state of the prediction job (pending, running, etc).Since this call will take a few moments to execute, you will likely get `JobState.JOB_STATE_PENDING` for `state`.
###Code
BATCH_MODEL = "cifar10_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME
)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the batch prediction job you created.
###Code
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
###Output
_____no_output_____
###Markdown
Get information on a batch prediction jobUse this helper function `get_batch_prediction_job`, with the following parameter:- `job_name`: The Vertex AI fully qualified identifier for the batch prediction job.The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:- `name`: The Vertex AI fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex AI fully qualified identifier for your batch prediction job -- `batch_job_id`The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`.
###Code
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
###Output
_____no_output_____
###Markdown
Get the predictionsWhen the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `prediction.results-xxxxx-of-xxxxx`.Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.The response contains a JSON object for each instance, in the form:{ 'instance': { 'bytes_input': { 'b64': .... } }, 'prediction': [ ... ] }- `instance`: The image data for which this prediction was made.- `prediction`: The confidence level for each class.
###Code
def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction.results*
print("Results:")
! gsutil cat $folder/prediction.results*
print("Errors:")
! gsutil cat $folder/prediction.errors*
break
time.sleep(60)
###Output
_____no_output_____
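###Markdown
(optional) Once the results are available, you can also parse the results file programmatically and take the argmax over the returned confidence levels to get the predicted class index for each instance. This sketch assumes the `folder` variable set in the cell above and the `prediction` key shown in the response format.
###Code
import json

# Parse the prediction results and report the most likely class per instance
for results_file in tf.io.gfile.glob(folder + "/prediction.results*"):
    with tf.io.gfile.GFile(results_file, "r") as f:
        for line in f:
            result = json.loads(line)
            print("Predicted class index:", np.argmax(result["prediction"]))
###Output
_____no_output_____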
###Markdown
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex AI fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex AI fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex AI fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex AI fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex AI fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Vertex client library: Custom training image classification model for batch prediction OverviewThis tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom image classification model for batch prediction. DatasetThe dataset used for this tutorial is the [CIFAR10 dataset](https://www.tensorflow.org/datasets/catalog/cifar10) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. ObjectiveIn this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a batch prediction on the uploaded model. You can alternatively create custom models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create a Vertex custom job for training a model.- Train the TensorFlow model.- Retrieve and load the model artifacts.- View the model evaluation.- Upload the model as a Vertex `Model` resource.- Make a batch prediction. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. InstallationInstall the latest version of the Vertex client library.
###Code
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of the *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources which will be created in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you submit a custom training job using the Vertex client library, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex runs the code from this package. In this tutorial, Vertex also saves the trained model that results from your job in the same bucket. You can then create an `Endpoint` resource based on this output in order to serve online predictions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client libraryImport the Vertex client library into our Python environment.
###Code
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
###Output
_____no_output_____
###Markdown
Vertex constantsSetup up the following constants for Vertex:- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
###Code
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
###Output
_____no_output_____
###Markdown
Hardware AcceleratorsSet the hardware accelerators (e.g., GPU), if any, for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100Otherwise specify `(None, None)` to use a container image to run on a CPU.*Note*: GPU-enabled TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Container (Docker) imageNext, we will set the Docker container images for training and prediction - TensorFlow 1.15 - `gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest` - TensorFlow 2.1 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest` - TensorFlow 2.2 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest` - TensorFlow 2.3 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest` - TensorFlow 2.4 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest` - XGBoost - `gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1` - Scikit-learn - `gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest` - Pytorch - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest`For the latest list, see [Pre-built containers for training](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers). - TensorFlow 1.15 - `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest` - `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest` - TensorFlow 2.1 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest` - TensorFlow 2.2 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest` - TensorFlow 2.3 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest` - XGBoost - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest` - Scikit-learn - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest` - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest` - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest`For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers)
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Machine TypeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own custom model and training for CIFAR10. Set up clientsThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.- Model Service for `Model` resources.- Endpoint Service for deployment.- Job Service for batch jobs and custom training.- Prediction Service for serving.
###Code
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
###Output
_____no_output_____
###Markdown
Train a modelThere are two ways you can train a custom model using a container image:- **Use a Google Cloud prebuilt container**. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.- **Use your own custom container image**. If you use your own container, the container needs to contain your code for training a custom model. Prepare your custom job specificationNow that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:- `worker_pool_spec` : The specification of the type of machine(s) you will use for training and how many (single or distributed)- `python_package_spec` : The specification of the Python package to be installed with the pre-built container. Prepare your machine specificationNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define the worker pool specificationNext, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:- `replica_count`: The number of instances to provision of this machine type.- `machine_spec`: The hardware specification.- `disk_spec` : (optional) The disk storage specification.- `python_package`: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.Let's dive deeper now into the python package specification:- `executor_image_uri`: This is the docker image which is configured for your custom training job.- `package_uris`: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.- `python_module`: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking `trainer.task` -- note that it is not necessary to append the `.py` suffix.- `args`: The command line arguments to pass to the corresponding Python module. In this example, you will be setting: - `"--model-dir=" + MODEL_DIR` : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts: - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or - indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification. - `"--epochs=" + EPOCHS`: The number of epochs for training. - `"--steps=" + STEPS`: The number of steps (batches) per epoch. - `"--distribute=" + TRAIN_STRATEGY`: The training distribution strategy to use for single or distributed training. - `"single"`: single device. - `"mirror"`: all GPU devices on a single compute instance. - `"multi"`: all GPU devices on all compute instances.
###Code
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Assemble a job specificationNow assemble the complete description for the custom job specification:- `display_name`: The human readable name you assign to this custom job.- `job_spec`: The specification for the custom job. - `worker_pool_specs`: The specification for the machine VM instances. - `base_output_directory`: This tells the service the Cloud Storage location where to save the model artifacts (when variable `DIRECT = False`). The service will then pass the location to the training script as the environment variable `AIP_MODEL_DIR`, and the path will be of the form: `<base_output_directory>/model`
###Code
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
###Output
_____no_output_____
###Markdown
Examine the training package Package layoutBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.pyThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job. *Note*, when we referred to it in the worker pool specification, we replaced the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`). Package AssemblyIn the following cells, you will assemble the training package.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Task.py contentsIn the next cell, you write the contents of the training script task.py. We won't go into detail, it's just there for you to browse. In summary:- Gets the directory where to save the model artifacts from the command line (`--model-dir`), and if not specified, then from the environment variable `AIP_MODEL_DIR`.- Loads the CIFAR10 dataset from TF Datasets (tfds).- Builds a model using the TF.Keras model API.- Compiles the model (`compile()`).- Sets a training distribution strategy according to the argument `args.distribute`.- Trains the model (`fit()`) with epochs and steps according to the arguments `args.epochs` and `args.steps`.- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
###Code
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Store the training script on your Cloud Storage bucketNext, you package the training folder into a compressed tarball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
###Output
_____no_output_____
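###Markdown
(Optional) As a quick sanity check, you can list the contents of the archive you just built and confirm that it matches the package layout described above. This is a minimal sketch that uses the local `custom.tar.gz` file created in the previous cell.
###Code
# List the files inside the training package archive
! tar tzf custom.tar.gz
###Output
_____no_output_____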
###Markdown
Train the modelNow start the training of your custom training job on Vertex. Use this helper function `create_custom_job`, which takes the following parameter:- `custom_job`: The specification for the custom job.The helper function calls the job client service's `create_custom_job` method, with the following parameters:- `parent`: The Vertex location path to `Dataset`, `Model` and `Endpoint` resources.- `custom_job`: The specification for the custom job.You will display a handful of the fields returned in the `response` object; the two of most interest are:`response.name`: The Vertex fully qualified identifier assigned to this custom training job. You save this identifier for use in subsequent steps.`response.state`: The current state of the custom training job.
###Code
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the custom job you created.
###Code
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
###Output
_____no_output_____
###Markdown
Get information on a custom jobNext, use this helper function `get_custom_job`, which takes the following parameter:- `name`: The Vertex fully qualified identifier for the custom job.The helper function calls the job client service's`get_custom_job` method, with the following parameter:- `name`: The Vertex fully qualified identifier for the custom job.If you recall, you got the Vertex fully qualified identifier for the custom job in the `response.name` field when you called the `create_custom_job` method, and saved the identifier in the variable `job_id`.
###Code
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
###Output
_____no_output_____
###Markdown
DeploymentTraining the above model may take upwards of 20 minutes.Once your model is done training, you can calculate the actual time it took to train the model by subtracting the job's `create_time` from its `update_time`, as shown in the cell below. For your model, we will need to know the location of the saved model, which the Python script saved in your local Cloud Storage bucket at `MODEL_DIR + '/saved_model.pb'`.
###Code
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
###Output
_____no_output_____
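###Markdown
(Optional) Before loading the model, you can confirm that the training job wrote the SavedModel artifacts to the expected Cloud Storage location. This is a minimal sketch that assumes the job completed successfully and that `MODEL_DIR` points at the artifact directory.
###Code
# List the exported SavedModel artifacts -- expect saved_model.pb plus the variables/ and assets/ folders
! gsutil ls -r $MODEL_DIR
###Output
_____no_output_____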
###Markdown
Load the saved modelYour model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can, for example, evaluate the model and make predictions.To load, you use the TF.Keras `tf.keras.models.load_model()` method, passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`.
###Code
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
###Output
_____no_output_____
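###Markdown
(Optional) You can inspect the architecture of the reloaded model to confirm it matches the CNN defined in `trainer/task.py`.
###Code
# Print the layer-by-layer summary of the reloaded Keras model
model.summary()
###Output
_____no_output_____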
###Markdown
Evaluate the modelNow find out how good the model is. Load evaluation dataYou will load the CIFAR10 test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.You don't need the training data, which is why it is loaded as `(_, _)`.Before you can run the data through evaluation, you need to preprocess it:x_test:1. Normalize (rescale) the pixel data by dividing each pixel by 255. This will replace each single-byte integer pixel with a 32-bit floating point number between 0 and 1.y_test:2. The labels are currently scalar (sparse). If you look back at the `compile()` step in the `trainer/task.py` script, you will find that it was compiled for sparse labels. So we don't need to do anything more.
###Code
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
###Output
_____no_output_____
###Markdown
Perform the model evaluationNow evaluate how well the model in the custom job did.
###Code
model.evaluate(x_test, y_test)
###Output
_____no_output_____
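###Markdown
(Optional) You can also sanity-check the model locally by predicting on a couple of test images and comparing the predicted class indices with the ground-truth labels. This is a minimal sketch; the predicted class is taken as the argmax over the ten class confidences.
###Code
# Predict locally on the first two test images and compare with the labels
local_probs = model.predict(x_test[:2])
print("predicted classes:", np.argmax(local_probs, axis=1))
print("true labels:", y_test[:2].flatten())
###Output
_____no_output_____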
###Markdown
Upload the model for servingNext, you will upload your TF.Keras model from the custom job to Vertex `Model` service, which will create a Vertex `Model` resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model. How does the serving function workWhen you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a `tf.string`.The serving function consists of two parts:- `preprocessing function`: - Converts the input (`tf.string`) to the input shape and data type of the underlying model (dynamic graph). - Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.- `post-processing function`: - Converts the model output to the format expected by the receiving application -- e.g., compresses the output. - Packages the output for the receiving application -- e.g., add headings, make JSON object, etc.Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.One consideration to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error when the serving function is compiled, indicating that you are using an EagerTensor which is not supported. Serving function for image dataTo pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.To resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).When you send a prediction or explanation request, the content of the request is base 64 decoded into a TensorFlow string (`tf.string`), which is passed to the serving function (`serving_fn`). The serving function preprocesses the `tf.string` into raw (uncompressed) numpy bytes (`preprocess_fn`) to match the input requirements of the model:- `io.decode_jpeg`- Decompresses the JPG image which is returned as a TensorFlow tensor with three channels (RGB).- `image.convert_image_dtype` - Changes integer pixel values to float32.- `image.resize` - Resizes the image to match the input shape for the model.- `resized / 255.0` - Rescales (normalizes) the pixel data between 0 and 1.At this point, the data can be passed to the model (`m_call`).
###Code
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
rescale = tf.cast(resized / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
###Output
_____no_output_____
###Markdown
Get the serving function signatureYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as an HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
###Code
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
###Output
_____no_output_____
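###Markdown
(Optional) You can also print the full serving signature to see both the expected input and the output tensors of the serving function.
###Code
# Display the complete serving_default signature (inputs and outputs)
print(loaded.signatures["serving_default"])
###Output
_____no_output_____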
###Markdown
Upload the modelUse this helper function `upload_model` to upload your model, stored in SavedModel format, up to the `Model` service, which will instantiate a Vertex `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other Vertex `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions.The helper function takes the following parameters:- `display_name`: A human readable name for the `Model` resource.- `image_uri`: The container image for the model deployment.- `model_uri`: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the `trainer/task.py` saved the model artifacts, which we specified in the variable `MODEL_DIR`.The helper function calls the `Model` client service's method `upload_model`, which takes the following parameters:- `parent`: The Vertex location root path for `Dataset`, `Model` and `Endpoint` resources.- `model`: The specification for the Vertex `Model` resource instance.Let's now dive deeper into the Vertex model specification `model`. This is a dictionary object that consists of the following fields:- `display_name`: A human readable name for the `Model` resource.- `metadata_schema_uri`: Since your model was built without a Vertex `Dataset` resource, you will leave this blank (`''`).- `artifact_uri`: The Cloud Storage path where the model is stored in SavedModel format.- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.Uploading a model into a Vertex Model resource returns a long-running operation, since it may take a few moments. You call `response.result()`, which is a synchronous call and will return when the Vertex Model resource is ready.The helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance, `upload_model_response.model`. You will save the identifier for subsequent steps in the variable `model_to_deploy_id`.
###Code
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"cifar10-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
###Output
_____no_output_____
###Markdown
Get `Model` resource informationNow let's get the model information for just your model. Use this helper function `get_model`, with the following parameter:- `name`: The Vertex unique identifier for the `Model` resource.This helper function calls the Vertex `Model` client service's method `get_model`, with the following parameter:- `name`: The Vertex unique identifier for the `Model` resource.
###Code
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Model deployment for batch predictionNow deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource.For batch prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will unprovision the resources for the batch prediction request. Make a batch prediction requestNow do a batch prediction with your uploaded model. Get test itemsYou will use examples out of the test (holdout) portion of the dataset as test items.
###Code
test_image_1 = x_test[0]
test_label_1 = y_test[0]
test_image_2 = x_test[1]
test_label_2 = y_test[1]
print(test_image_1.shape)
###Output
_____no_output_____
###Markdown
Prepare the request contentYou are going to send the CIFAR10 images as compressed JPG images, instead of the raw uncompressed bytes:- `cv2.imwrite`: Use OpenCV to write the uncompressed image to disk as a compressed JPEG image. - Denormalize the image data from \[0,1) range back to [0,255). - Convert the 32-bit floating point values to 8-bit unsigned integers.
###Code
import cv2
cv2.imwrite("tmp1.jpg", (test_image_1 * 255).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2 * 255).astype(np.uint8))
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, you will copy the test items over to your Cloud Storage bucket.
###Code
! gsutil cp tmp1.jpg $BUCKET_NAME/tmp1.jpg
! gsutil cp tmp2.jpg $BUCKET_NAME/tmp2.jpg
test_item_1 = BUCKET_NAME + "/tmp1.jpg"
test_item_2 = BUCKET_NAME + "/tmp2.jpg"
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `input_name`: the name of the input layer of the underlying model.- `'b64'`: A key that indicates the content is base64 encoded.- `content`: The compressed JPG image bytes as a base64 encoded string.Each instance in the prediction request is a dictionary entry of the form: `{serving_input: {'b64': content}}`. To pass the image data to the prediction service you encode the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network.- `tf.io.read_file`: Read the compressed JPG images into memory as raw bytes.- `base64.b64encode`: Encode the raw bytes into a base64 encoded string.
###Code
import base64
import json
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
bytes = tf.io.read_file(test_item_1)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
bytes = tf.io.read_file(test_item_2)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
###Output
_____no_output_____
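###Markdown
(Optional) You can verify that the batch input file was written as expected. The base64 payloads are long, so this sketch only prints the first few hundred characters of the file.
###Code
# Peek at the start of the JSONL batch input file (payloads truncated for readability)
! gsutil cat $gcs_input_uri | head -c 300
###Output
_____no_output_____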
###Markdown
Compute instance scalingYou have several choices on scaling the compute instances for handling your batch prediction requests:- Single Instance: The batch prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.- Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specify. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.- Auto Scaling: The batch prediction requests are split across a scalable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
###Code
MIN_NODES = 1
MAX_NODES = 1
###Output
_____no_output_____
###Markdown
Make batch prediction requestNow that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:- `display_name`: The human readable name for the prediction job.- `model_name`: The Vertex fully qualified identifier for the `Model` resource.- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.- `parameters`: Additional filtering parameters for serving prediction results.The helper function calls the job client service's `create_batch_prediction_job` method, with the following parameters:- `parent`: The Vertex location root path for Dataset, Model and Pipeline resources.- `batch_prediction_job`: The specification for the batch prediction job.Let's now dive into the specification for the `batch_prediction_job`:- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex fully qualified identifier for the `Model` resource.- `dedicated_resources`: The compute resources to provision for the batch prediction job. - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated. - `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`. - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.- `model_parameters`: Additional filtering parameters for serving prediction results. No additional parameters are supported for custom models.- `input_config`: The input source and format type for the instances to predict. - `instances_format`: The format of the batch prediction request file: `csv` or `jsonl`. - `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.- `output_config`: The output destination and format for the predictions. - `predictions_format`: The format of the batch prediction response file: `csv` or `jsonl`. - `gcs_destination`: The output destination for the predictions.This call is an asynchronous operation. You will print from the response object a few select fields, including:- `name`: The Vertex fully qualified identifier assigned to the batch prediction job.- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex fully qualified identifier for the Model resource.- `generate_explanation`: Whether explanations (explainability) were requested with the predictions (True/False).- `state`: The state of the prediction job (pending, running, etc).Since this call will take a few moments to execute, you will likely get `JobState.JOB_STATE_PENDING` for `state`.
###Code
BATCH_MODEL = "cifar10_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME
)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the batch prediction job you created.
###Code
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
###Output
_____no_output_____
###Markdown
Get information on a batch prediction jobUse this helper function `get_batch_prediction_job`, with the following parameter:- `job_name`: The Vertex fully qualified identifier for the batch prediction job.The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:- `name`: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- `batch_job_id`.The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`.
###Code
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
###Output
_____no_output_____
###Markdown
Get the predictionsWhen the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.Finally, you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `prediction.results-xxxxx-of-xxxxx`.Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.The response contains a JSON object for each instance, in the form:{ 'instance': { 'bytes_input': { 'b64': .... } }, 'prediction': [ ... ] }- `instance`: The image data for which this prediction was made.- `prediction`: The confidence level for each class.
###Code
def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction.results*
print("Results:")
! gsutil cat $folder/prediction.results*
print("Errors:")
! gsutil cat $folder/prediction.errors*
break
time.sleep(60)
###Output
_____no_output_____
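###Markdown
(Optional) Instead of only displaying the raw JSONL output, you can parse each prediction and report the most likely CIFAR10 class. This is a minimal sketch, assuming the loop above completed successfully so that `folder` points at the latest prediction subfolder; the predicted class is taken as the argmax of the per-class confidences.
###Code
import json

import numpy as np
import tensorflow as tf

CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

# Read every prediction results shard and print the top class per instance
for results_file in tf.io.gfile.glob(folder + "/prediction.results*"):
    with tf.io.gfile.GFile(results_file, "r") as f:
        for line in f:
            result = json.loads(line)
            probs = result["prediction"]
            top = int(np.argmax(probs))
            print("predicted class:", CLASSES[top], "confidence:", probs[top])
###Output
_____no_output_____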
###Markdown
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
AI Platform (Unified) client library: Custom training image classification model for batch prediction OverviewThis tutorial demonstrates how to use the AI Platform (Unified) Python client library to train and deploy a custom image classification model for batch prediction. DatasetThe dataset used for this tutorial is the [CIFAR10 dataset](https://www.tensorflow.org/datasets/catalog/cifar10) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. ObjectiveIn this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the AI Platform (Unified) client library, and then do a batch prediction on the uploaded model. You can alternatively create custom models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create an AI Platform (Unified) custom job for training a model.- Train the TensorFlow model.- Retrieve and load the model artifacts.- View the model evaluation.- Upload the model as an AI Platform (Unified) `Model` resource.- Make a batch prediction. CostsThis tutorial uses billable components of Google Cloud (GCP):* AI Platform (Unified)* Cloud StorageLearn about [AI Platform (Unified) pricing](https://cloud.google.com/ai-platform-unified/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. InstallationInstall the latest version of the AI Platform (Unified) client library.
###Code
import sys
if "google.colab" in sys.modules:
USER_FLAG = ""
else:
USER_FLAG = "--user"
! pip3 install -U google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of the *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the AI Platform (Unified) client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the AI Platform (Unified) APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in AI Platform (Unified) Notebooks.5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for AI Platform (Unified). We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with AI Platform (Unified). Not all regions provide support for all AI Platform (Unified) services. For the latest support per region, see the [AI Platform (Unified) locations documentation](https://cloud.google.com/ai-platform-unified/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of the resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using AI Platform (Unified) Notebooks**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.Click **Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "AI Platform (Unified)" into the filter box, and select **AI Platform (Unified) Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click **Create**. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you submit a custom training job using the AI Platform (Unified) client library, you upload a Python package containing your training code to a Cloud Storage bucket. AI Platform (Unified) runs the code from this package. In this tutorial, AI Platform (Unified) also saves the trained model that results from your job in the same bucket. You can then create an `Endpoint` resource based on this output in order to serve online predictions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants Import AI Platform (Unified) client libraryImport the AI Platform (Unified) client library into our Python environment.
###Code
import os
import sys
import time
import google.cloud.aiplatform_v1 as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
###Output
_____no_output_____
###Markdown
AI Platform (Unified) constantsSet up the following constants for AI Platform (Unified):- `API_ENDPOINT`: The AI Platform (Unified) API service endpoint for dataset, model, job, pipeline and endpoint services.- `PARENT`: The AI Platform (Unified) location root path for dataset, model, job, pipeline and endpoint resources.
###Code
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# AI Platform (Unified) location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
###Output
_____no_output_____
###Markdown
Hardware AcceleratorsSet the hardware accelerators (e.g., GPU), if any, for training and prediction.Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100Otherwise specify `(None, None)` to use a container image to run on a CPU.*Note*: GPU builds of TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops that are generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
###Code
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
###Output
_____no_output_____
###Markdown
Container (Docker) imageNext, we will set the Docker container images for training and prediction - TensorFlow 1.15 - `gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest` - TensorFlow 2.1 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest` - TensorFlow 2.2 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest` - TensorFlow 2.3 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest` - TensorFlow 2.4 - `gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest` - `gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest` - XGBoost - `gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1` - Scikit-learn - `gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest` - Pytorch - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest` - `gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest`For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers). - TensorFlow 1.15 - `gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest` - `gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest` - TensorFlow 2.1 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest` - TensorFlow 2.2 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest` - TensorFlow 2.3 - `gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest` - `gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest` - XGBoost - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest` - `gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest` - Scikit-learn - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest` - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest` - `gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest`For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers)
###Code
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
###Output
_____no_output_____
###Markdown
Machine TypeNext, set the machine type to use for training and prediction.- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: The following is not supported for training:* - `standard`: 2 vCPUs - `highcpu`: 2, 4 and 8 vCPUs*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
###Code
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own custom model and training for CIFAR10. Set up clientsThe AI Platform (Unified) client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the AI Platform (Unified) server.You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.- Model Service for `Model` resources.- Endpoint Service for deployment.- Job Service for batch jobs and custom training.- Prediction Service for serving.
###Code
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
###Output
_____no_output_____
###Markdown
Train a modelThere are two ways you can train a custom model using a container image:- **Use a Google Cloud prebuilt container**. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.- **Use your own custom container image**. If you use your own container, the container needs to contain your code for training a custom model. Prepare your custom job specificationNow that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:- `worker_pool_spec` : The specification of the type of machine(s) you will use for training and how many (single or distributed)- `python_package_spec` : The specification of the Python package to be installed with the pre-built container. Prepare your machine specificationNow define the machine specification for your custom training job. This tells AI Platform (Unified) what type of machine instance to provision for the training. - `machine_type`: The type of GCP instance to provision -- e.g., n1-standard-8. - `accelerator_type`: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable `TRAIN_GPU != None`, you are using a GPU; otherwise you will use a CPU. - `accelerator_count`: The number of accelerators.
###Code
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
###Output
_____no_output_____
###Markdown
Prepare your disk specification(optional) Now define the disk specification for your custom training job. This tells AI Platform (Unified) what type and size of disk to provision in each machine instance for the training. - `boot_disk_type`: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. - `boot_disk_size_gb`: Size of disk in GB.
###Code
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
###Output
_____no_output_____
###Markdown
Define the worker pool specificationNext, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:- `replica_count`: The number of instances to provision of this machine type.- `machine_spec`: The hardware specification.- `disk_spec` : (optional) The disk storage specification.- `python_package_spec`: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.Let's dive deeper now into the Python package specification:- `executor_image_uri`: This is the Docker image which is configured for your custom training job.- `package_uris`: This is a list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the Docker image.- `python_module`: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking `trainer.task` -- note that it is not necessary to append the `.py` suffix.- `args`: The command line arguments to pass to the corresponding Python module. In this example, you will be setting: - `"--model-dir=" + MODEL_DIR` : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts: - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or - indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification. - `"--epochs=" + EPOCHS`: The number of epochs for training. - `"--steps=" + STEPS`: The number of steps (batches) per epoch. - `"--distribute=" + TRAIN_STRATEGY` : The training distribution strategy to use for single or distributed training. - `"single"`: single device. - `"mirror"`: all GPU devices on a single compute instance. - `"multi"`: all GPU devices on all compute instances.
###Code
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
###Output
_____no_output_____
###Markdown
Assemble a job specificationNow assemble the complete description for the custom job specification:- `display_name`: The human readable name you assign to this custom job.- `job_spec`: The specification for the custom job. - `worker_pool_specs`: The specification for the machine VM instances. - `base_output_directory`: This tells the service the Cloud Storage location where to save the model artifacts (when variable `DIRECT = False`). The service will then pass the location to the training script as the environment variable `AIP_MODEL_DIR`, and the path will be of the form: `MODEL_DIR/model`
###Code
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
###Output
_____no_output_____
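###Markdown
If you want to sanity-check the assembled specification before submitting it, you can pretty-print the dictionary. This is an optional inspection step (a sketch); `default=str` is only there as a fallback so that any values that are not natively JSON-serializable are rendered as strings.
###Code
import json

# Pretty-print the assembled custom job specification for inspection
print(json.dumps(custom_job, indent=2, default=str))
###Output
_____no_output_____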
###Markdown
Examine the training package Package layoutBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.- PKG-INFO- README.md- setup.cfg- setup.py- trainer - \_\_init\_\_.py - task.pyThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.The file `trainer/task.py` is the Python script for executing the custom training job. *Note*, when we referred to it in the worker pool specification, we replaced the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`). Package AssemblyIn the following cells, you will assemble the training package.
###Code
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: AI Platform (Unified)"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
###Output
_____no_output_____
###Markdown
Task.py contentsIn the next cell, you write the contents of the training script task.py. We won't go into detail; it's just there for you to browse. In summary, the script:- Gets the directory in which to save the model artifacts from the command line (`--model-dir`), and if not specified, from the environment variable `AIP_MODEL_DIR`.- Loads the CIFAR10 dataset from TF Datasets (tfds).- Builds a model using the TF.Keras model API.- Compiles the model (`compile()`).- Sets a training distribution strategy according to the argument `args.distribute`.- Trains the model (`fit()`) with epochs and steps according to the arguments `args.epochs` and `args.steps`.- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
###Code
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
###Output
_____no_output_____
###Markdown
Store training script on your Cloud Storage bucketNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
###Code
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
###Output
_____no_output_____
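###Markdown
If you want to confirm that the uploaded archive matches the package layout described above, you can list the contents of the local tarball. This is an optional check and assumes the tarball created in the previous cell is still present on the local disk.
###Code
# List the files inside the compressed training package
! tar tzf custom.tar.gz
###Output
_____no_output_____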
###Markdown
Train the modelNow start the training of your custom training job on AI Platform (Unified). Use this helper function `create_custom_job`, which takes the following parameter:-`custom_job`: The specification for the custom job.The helper function calls the job client service's `create_custom_job` method, with the following parameters:-`parent`: The AI Platform (Unified) location path to `Dataset`, `Model` and `Endpoint` resources.-`custom_job`: The specification for the custom job.You will display a handful of the fields returned in the `response` object; the two of most interest are:`response.name`: The AI Platform (Unified) fully qualified identifier assigned to this custom training job. You save this identifier for use in subsequent steps.`response.state`: The current state of the custom training job.
###Code
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the custom job you created.
###Code
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
###Output
_____no_output_____
###Markdown
Get information on a custom jobNext, use this helper function `get_custom_job`, which takes the following parameter:- `name`: The AI Platform (Unified) fully qualified identifier for the custom job.The helper function calls the job client service's`get_custom_job` method, with the following parameter:- `name`: The AI Platform (Unified) fully qualified identifier for the custom job.If you recall, you got the AI Platform (Unified) fully qualified identifier for the custom job in the `response.name` field when you called the `create_custom_job` method, and saved the identifier in the variable `job_id`.
###Code
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
###Output
_____no_output_____
###Markdown
DeploymentTraining the above model may take upwards of 20 minutes.Once your model is done training, you can calculate the actual time it took to train the model by subtracting `start_time` from `end_time`. For your model, you will need to know the location of the saved model, which the Python script saved in your local Cloud Storage bucket at `MODEL_DIR + '/saved_model.pb'`.
###Code
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
###Output
_____no_output_____
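###Markdown
As a sketch of the timing calculation mentioned above: once the job has succeeded, the job resource also carries `start_time` and `end_time` fields, so you can compute the elapsed training time directly from them instead of from `create_time`/`update_time`. This assumes the polling loop above finished with a successful job state.
###Code
# Optional: compute elapsed training time from the job's start and end timestamps
completed_job = get_custom_job(job_id, True)
print("Elapsed training time:", completed_job.end_time - completed_job.start_time)
###Output
_____no_output_____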
###Markdown
Load the saved modelYour model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do things such as evaluate the model and make predictions.To load, you use the TF.Keras `model.load_model()` method, passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`.
###Code
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
###Output
_____no_output_____
###Markdown
Evaluate the modelNow find out how good the model is. Load evaluation dataYou will load the CIFAR10 test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.You don't need the training data, which is why it is loaded as `(_, _)`.Before you can run the data through evaluation, you need to preprocess it:x_test:1. Normalize (rescale) the pixel data by dividing each pixel by 255. This will replace each single byte integer pixel with a 32-bit floating point number between 0 and 1.y_test:2. The labels are currently scalar (sparse). If you look back at the `compile()` step in the `trainer/task.py` script, you will find that it was compiled for sparse labels. So you don't need to do anything more.
###Code
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
###Output
_____no_output_____
###Markdown
Perform the model evaluationNow evaluate how well the model in the custom job did.
###Code
model.evaluate(x_test, y_test)
###Output
_____no_output_____
###Markdown
Upload the model for servingNext, you will upload your TF.Keras model from the custom job to AI Platform (Unified) `Model` service, which will create an AI Platform (Unified) `Model` resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to AI Platform (Unified), your serving function ensures that the data is decoded on the model server before it is passed as input to your model. How does the serving function workWhen you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a `tf.string`.The serving function consists of two parts:- `preprocessing function`: - Converts the input (`tf.string`) to the input shape and data type of the underlying model (dynamic graph). - Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.- `post-processing function`: - Converts the model output to the format expected by the receiving application -- e.g., compresses the output. - Packages the output for the receiving application -- e.g., add headings, make JSON object, etc.Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.One thing to consider when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function which will indicate that you are using an EagerTensor which is not supported. Serving function for image dataTo pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.To resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (`tf.string`), which is passed to the serving function (`serving_fn`). 
The serving function preprocesses the `tf.string` into raw (uncompressed) numpy bytes (`preprocess_fn`) to match the input requirements of the model:- `io.decode_jpeg`- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).- `image.convert_image_dtype` - Changes integer pixel values to float 32.- `image.resize` - Resizes the image to match the input shape for the model.- `resized / 255.0` - Rescales (normalization) the pixel data between 0 and 1.At this point, the data can be passed to the model (`m_call`).
###Code
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
rescale = tf.cast(resized / 255.0, tf.float32)
return rescale
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
###Output
_____no_output_____
###Markdown
Get the serving function signatureYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as an HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
###Code
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
###Output
_____no_output_____
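###Markdown
Before uploading the model, you can optionally smoke-test the serving signature locally. The sketch below encodes one test image from `x_test` as JPEG bytes and calls the reloaded serving function directly; it assumes the evaluation data loaded earlier is still in memory. Note that base64 encoding is only needed when sending requests over HTTP -- locally you pass the raw JPEG bytes as a `tf.string` tensor.
###Code
# Optional local check of the serving function (illustrative sketch)
test_image_uint8 = (x_test[0] * 255).astype(np.uint8)
jpeg_bytes = tf.io.encode_jpeg(test_image_uint8).numpy()

serving_signature = loaded.signatures["serving_default"]
local_prediction = serving_signature(**{serving_input: tf.constant([jpeg_bytes])})
print(list(local_prediction.values())[0])
###Output
_____no_output_____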
###Markdown
Upload the modelUse this helper function `upload_model` to upload your model, stored in SavedModel format, up to the `Model` service, which will instantiate an AI Platform (Unified) `Model` resource instance for your model. Once you've done that, you can use the `Model` resource instance in the same way as any other AI Platform (Unified) `Model` resource instance, such as deploying to an `Endpoint` resource for serving predictions.The helper function takes the following parameters:- `display_name`: A human readable name for the `Model` resource.- `image_uri`: The container image for the model deployment.- `model_uri`: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the `trainer/task.py` saved the model artifacts, which we specified in the variable `MODEL_DIR`.The helper function calls the `Model` client service's method `upload_model`, which takes the following parameters:- `parent`: The AI Platform (Unified) location root path for `Dataset`, `Model` and `Endpoint` resources.- `model`: The specification for the AI Platform (Unified) `Model` resource instance.Let's now dive deeper into the AI Platform (Unified) model specification `model`. This is a dictionary object that consists of the following fields:- `display_name`: A human readable name for the `Model` resource.- `metadata_schema_uri`: Since your model was built without an AI Platform (Unified) `Dataset` resource, you will leave this blank (`''`).- `artifact_uri`: The Cloud Storage path where the model is stored in SavedModel format.- `container_spec`: This is the specification for the Docker container that will be installed on the `Endpoint` resource, from which the `Model` resource will serve predictions. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.Uploading a model into an AI Platform (Unified) Model resource returns a long running operation, since it may take a few moments. You call `response.result()`, which is a synchronous call and will return when the AI Platform (Unified) Model resource is ready.The helper function returns the AI Platform (Unified) fully qualified identifier for the corresponding AI Platform (Unified) Model instance, `upload_model_response.model`. You will save the identifier for subsequent steps in the variable `model_to_deploy_id`.
###Code
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"cifar10-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
###Output
_____no_output_____
###Markdown
Get `Model` resource informationNow let's get the model information for just your model. Use this helper function `get_model`, with the following parameter:- `name`: The AI Platform (Unified) unique identifier for the `Model` resource.This helper function calls the AI Platform (Unified) `Model` client service's method `get_model`, with the following parameter:- `name`: The AI Platform (Unified) unique identifier for the `Model` resource.
###Code
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Model deployment for batch predictionNow deploy the trained AI Platform (Unified) `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource.For batch prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will de-provision the resources for the batch prediction request. Make a batch prediction requestNow do a batch prediction to your deployed model. Get test itemsYou will use examples out of the test (holdout) portion of the dataset as test items.
###Code
test_image_1 = x_test[0]
test_label_1 = y_test[0]
test_image_2 = x_test[1]
test_label_2 = y_test[1]
print(test_image_1.shape)
###Output
_____no_output_____
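###Markdown
If you'd like to visually confirm what you are about to send for prediction, you can plot the two test images with their labels. This is purely an optional inspection step and assumes `matplotlib` is available in the notebook environment.
###Code
import matplotlib.pyplot as plt

# Display the two CIFAR10 test items selected above
fig, axes = plt.subplots(1, 2, figsize=(4, 2))
for ax, image, label in zip(axes, [test_image_1, test_image_2], [test_label_1, test_label_2]):
    ax.imshow(image)
    ax.set_title("label: {}".format(label[0]))
    ax.axis("off")
plt.show()
###Output
_____no_output_____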
###Markdown
Prepare the request contentYou are going to send the CIFAR10 images as compressed JPG images, instead of the raw uncompressed bytes:- `cv2.imwrite`: Use openCV to write the uncompressed image to disk as a compressed JPEG image. - Denormalize the image data from \[0,1) range back to [0,255). - Convert the 32-bit floating point values to 8-bit unsigned integers.
###Code
import cv2
cv2.imwrite("tmp1.jpg", (test_image_1 * 255).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2 * 255).astype(np.uint8))
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, you will copy the test items over to your Cloud Storage bucket.
###Code
! gsutil cp tmp1.jpg $BUCKET_NAME/tmp1.jpg
! gsutil cp tmp2.jpg $BUCKET_NAME/tmp2.jpg
test_item_1 = BUCKET_NAME + "/tmp1.jpg"
test_item_2 = BUCKET_NAME + "/tmp2.jpg"
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `input_name`: the name of the input layer of the underlying model.- `'b64'`: A key that indicates the content is base64 encoded.- `content`: The compressed JPG image bytes as a base64 encoded string.Each instance in the prediction request is a dictionary entry of the form: {serving_input: {'b64': content}}To pass the image data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network.- `tf.io.read_file`: Read the compressed JPG images into memory as raw bytes.- `base64.b64encode`: Encode the raw bytes into a base64 encoded string.
###Code
import base64
import json
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
bytes = tf.io.read_file(test_item_1)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
bytes = tf.io.read_file(test_item_2)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
###Output
_____no_output_____
###Markdown
Compute instance scalingYou have several choices on scaling the compute instances for handling your batch prediction requests:- Single Instance: The batch prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.- Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specify. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.- Auto Scaling: The batch prediction requests are split across a scalable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
###Code
MIN_NODES = 1
MAX_NODES = 1
###Output
_____no_output_____
###Markdown
Make batch prediction requestNow that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:- `display_name`: The human readable name for the prediction job.- `model_name`: The AI Platform (Unified) fully qualified identifier for the `Model` resource.- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.- `parameters`: Additional filtering parameters for serving prediction results.The helper function calls the job client service's `create_batch_prediction_job` method, with the following parameters:- `parent`: The AI Platform (Unified) location root path for Dataset, Model and Pipeline resources.- `batch_prediction_job`: The specification for the batch prediction job.Let's now dive into the specification for the `batch_prediction_job`:- `display_name`: The human readable name for the prediction batch job.- `model`: The AI Platform (Unified) fully qualified identifier for the `Model` resource.- `dedicated_resources`: The compute resources to provision for the batch prediction job. - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated. - `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`. - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.- `model_parameters`: Additional filtering parameters for serving prediction results. No additional parameters are supported for custom models.- `input_config`: The input source and format type for the instances to predict. - `instances_format`: The format of the batch prediction request file: `csv` or `jsonl`. - `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.- `output_config`: The output destination and format for the predictions. - `prediction_format`: The format of the batch prediction response file: `csv` or `jsonl`. - `gcs_destination`: The output destination for the predictions.This call is an asynchronous operation. You will print from the response object a few select fields, including:- `name`: The AI Platform (Unified) fully qualified identifier assigned to the batch prediction job.- `display_name`: The human readable name for the prediction batch job.- `model`: The AI Platform (Unified) fully qualified identifier for the Model resource.- `generate_explanations`: Whether True/False explanations were provided with the predictions (explainability).- `state`: The state of the prediction job (pending, running, etc.).Since this call will take a few moments to execute, you will likely get `JobState.JOB_STATE_PENDING` for `state`.
###Code
BATCH_MODEL = "cifar10_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME
)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the batch prediction job you created.
###Code
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
###Output
_____no_output_____
###Markdown
Get information on a batch prediction jobUse this helper function `get_batch_prediction_job`, with the following parameter:- `job_name`: The AI Platform (Unified) fully qualified identifier for the batch prediction job.The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:- `name`: The AI Platform (Unified) fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the AI Platform (Unified) fully qualified identifier for your batch prediction job -- `batch_job_id`.The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`.
###Code
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
###Output
_____no_output_____
###Markdown
Get the predictionsWhen the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.Finally, you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `prediction.results-xxxxx-of-xxxxx`.Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.The response contains a JSON object for each instance, in the form:{ 'instance': { 'bytes_input': { 'b64': .... } }, 'prediction': [ ... ] }- `instance`: The image data for which the prediction was made.- `prediction`: The confidence level for each class.
###Code
def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction.results*
print("Results:")
! gsutil cat $folder/prediction.results*
print("Errors:")
! gsutil cat $folder/prediction.errors*
break
time.sleep(60)
###Output
_____no_output_____
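###Markdown
Rather than just viewing the raw JSONL output, you can also parse the prediction results programmatically. The sketch below reads each `prediction.results*` file, decodes one JSON object per line, and reports the class with the highest confidence for each instance. It assumes the batch job above completed successfully, so `folder` points at the latest prediction subfolder.
###Code
import json

# Parse the JSONL prediction results and report the top class per instance
result_files = ! gsutil ls $folder/prediction.results*
for result_file in result_files:
    lines = ! gsutil cat $result_file
    for line in lines:
        result = json.loads(line)
        probabilities = result["prediction"]
        print("predicted class:", np.argmax(probabilities), "confidence:", max(probabilities))
###Output
_____no_output_____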
###Markdown
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the AI Platform (Unified) fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the AI Platform (Unified) fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the AI Platform (Unified) fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the AI Platform (Unified) fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the AI Platform (Unified) fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the AI Platform (Unified) fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the AI Platform (Unified) fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____ |
Day6_Question_1.ipynb | ###Markdown
###Code
# class for Bank_Account
class BankAccount:
def __init__(self):
self.ownerName="Simran Singh"
self.Balance=0
def deposit(self):
Amount=float(input("\nEnter amount to be Deposited : "))
self.Balance += Amount
print("\nAmount Deposited : ",Amount)
def withdraw(self):
Amount = float(input("\nEnter amount to be Withdrawn : "))
if self.Balance >= Amount:
self.Balance -= Amount
print("\nYou've withdrawn :", Amount)
else:
print("\nInsufficient balance in your account. You can withdraw only upto", self.Balance)
print("\n---WELCOME TO YOUR BANK ACCOUNT---")
BA = BankAccount()
print("\nAccount Holder Name : ",BA.ownerName)
print("\nInitial Account Balance : ",BA.Balance)
BA.deposit()
BA.withdraw()
print("\nNet Avaliable balance is : ",BA.Balance)
###Output
---WELCOME TO YOUR BANK ACCOUNT---
Account Holder Name : Simran Singh
Initial Account Balance : 0
Enter amount to be Deposited : 60000
Amount Deposited : 60000.0
Enter amount to be Withdrawn : 650000
Insufficient balance in your account. You can withdraw only up to 60000.0
Net Available balance is : 60000.0
|
notebooks/figure3_DIMA.ipynb | ###Markdown
DIMA Histogram
###Code
# Assumed setup: these imports are needed by the cells below; `ica_data` itself is
# assumed to have been loaded in an earlier setup step that is not shown here.
import itertools
import numpy as np
import pandas as pd
import scipy.stats
import seaborn as sns
import matplotlib.pyplot as plt
rep_list = list(ica_data.sample_table.reset_index().groupby(["full_name"])['index'].apply(list))
df_dima = pd.read_csv('./data/dima_combined_final.csv', index_col=0)
sns.set_style('ticks')
dima_list = df_dima.values.tolist()
dima_list = list(itertools.chain.from_iterable(dima_list))
dima_list = [x for x in dima_list if str(x) != 'nan']
from statistics import mean
dima_mean = int(round(mean(dima_list),0))
fig, ax = plt.subplots(figsize = (3.3,2))
sns.distplot(dima_list, kde=False, bins=20, color='#3a3596')
ax.axvline(dima_mean, 0,1, color = '#3F4EA2')
ax.text(dima_mean, 9400, 'Avg. # of DIMAs = '+str(dima_mean), color='#3F4EA2')
ax.spines['top'].set_color('0'); ax.spines['bottom'].set_color('0')
ax.spines['left'].set_color('0'); ax.spines['right'].set_color('0')
ax.spines['top'].set_linewidth(2); ax.spines['bottom'].set_linewidth(2)
ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(2)
ax.set_ylim(0,10300)
ax.set_ylabel('Frequency', fontweight ='bold')
ax.set_xlabel('Number of DIMAs between Conditions', fontweight ='bold' )
plt.savefig('./fig3/dimas_histogram.pdf', dpi = 600, bbox_inches = 'tight')
dima_collapse = pd.melt(df_dima, ignore_index=False).dropna().reset_index().rename(columns={'index':'cond1', 'variable':'cond2', 'value':'Number of DIMAs'})
dima_collapse['cond1'] = dima_collapse['cond1'].str.replace('_',':')
dima_collapse['cond2'] = dima_collapse['cond2'].str.replace('_',':')
dima_collapse = dima_collapse[dima_collapse['cond1'] != dima_collapse['cond2']]
dima_collapse['cond1_cond2'] = dima_collapse['cond1']+"_"+dima_collapse['cond2']
dima_collapse.head()
degs = pd.read_csv('./data/degs_combined.csv', index_col=0)
degs_collapse = pd.melt(degs, ignore_index=False).dropna().reset_index().rename(columns={'index':'cond1', 'variable':'cond2','value':'Number of DEGs'})
degs_collapse['cond1'] = degs_collapse['cond1'].str.replace('_',':')
degs_collapse['cond2'] = degs_collapse['cond2'].str.replace('_',':')
degs_collapse = degs_collapse[degs_collapse['cond1'] != degs_collapse['cond2']]
degs_collapse['cond1_cond2'] = degs_collapse['cond1']+"_"+degs_collapse['cond2']  # key ordering matches dima_collapse
degs_collapse.head()
degs_collapse2 = pd.melt(degs, ignore_index=False).dropna().reset_index().rename(columns={'index':'cond1', 'variable':'cond2','value':'Number of DEGs'})
degs_collapse2['cond1'] = degs_collapse2['cond1'].str.replace('_',':')
degs_collapse2['cond2'] = degs_collapse2['cond2'].str.replace('_',':')
degs_collapse2 = degs_collapse2[degs_collapse2['cond1'] != degs_collapse2['cond2']]
degs_collapse2['cond1_cond2'] = degs_collapse2['cond2']+"_"+degs_collapse2['cond1']
degs_collapse2.head()
df_com = pd.concat([dima_collapse.merge(degs_collapse, on='cond1_cond2'), dima_collapse.merge(degs_collapse2, on='cond1_cond2')],axis=0)
df_com.head()
fig, ax = plt.subplots(figsize=(3,3))
y=df_com["Number of DEGs"]
x=df_com["Number of DIMAs"]
ax.scatter(x=x, y=y, color='gray', alpha=.2, s=6)
#best fit line
m, b = np.polyfit(x, y, 1)
ax.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)),
ls = '--', color='blue')
ax.text(0,2050, 'Best fit: '+str(round(m,1))+'x+'+str(round(b,1)),
fontweight ='bold', color='blue')
#format
ax.spines['top'].set_color('0'); ax.spines['bottom'].set_color('0')
ax.spines['left'].set_color('0'); ax.spines['right'].set_color('0')
ax.spines['top'].set_linewidth(2); ax.spines['bottom'].set_linewidth(2)
ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(2)
ax.set_xlabel('Number of DIMAs', fontweight ='bold')
ax.set_ylabel('Number of DEGs', fontweight ='bold' )
plt.savefig('./fig3/degs_versus_dima.pdf', dpi = 600, bbox_inches = 'tight')
# heat version
fig, ax = plt.subplots(figsize=(3,2.5))
y=df_com["Number of DEGs"]
x=df_com["Number of DIMAs"]
# code from https://stackoverflow.com/questions/2369492/generate-a-heatmap-in-matplotlib-using-a-scatter-data-set
heatmap, xedges, yedges = np.histogram2d(x, y, bins=(50,50))
extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
scatterheat = ax.imshow(heatmap.T, extent=extent, origin='lower',
aspect='auto', cmap='RdPu')
plt.colorbar(scatterheat)
# best fit line
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
# R^2 calc
y_bar = y.mean()
y_pred = [m*x_i+b for x_i in x]
SS_TOT = sum([(y_i-y_bar)**2 for y_i in y])
SS_RES = sum((y - y_pred)**2)
R_2 = 1- SS_RES/SS_TOT
# plot fit line
ax.plot(np.unique(x), m*(np.unique(x))+b,
ls = '--', color='#4a9696')
ax.text(5,1650, 'Best fit: '+str(round(m,1))+'x+'+str(round(b,1))+'\n $R^2$='+str(round(R_2,2)),
fontweight ='bold', color='#4a9696')
#format
ax.spines['top'].set_color('0'); ax.spines['bottom'].set_color('0')
ax.spines['left'].set_color('0'); ax.spines['right'].set_color('0')
ax.spines['top'].set_linewidth(2); ax.spines['bottom'].set_linewidth(2)
ax.spines['left'].set_linewidth(2); ax.spines['right'].set_linewidth(2)
ax.set_xlabel('Number of DIMAs', fontweight ='bold')
ax.set_ylabel('Number of DEGs', fontweight ='bold' )
plt.savefig('./fig3/degs_versus_dima.pdf', dpi = 600, bbox_inches = 'tight')
###Output
_____no_output_____ |
notebooks/redwood.ipynb | ###Markdown
Conditional
###Code
num_points = 100
Xnew = [jnp.linspace(0, 1, num_points)[:,None], jnp.linspace(0, 1, num_points)[:,None]]
cond = npg.LatentKronConditional(model_fixed_lengthscale, gp='f', num_samples=1, rng_key=476)
mu, var = cond.conditional_from_guide(guide_fixed_lengthscale, svi.params, X, y, Xnew, 0.1)
with npy.handlers.seed(rng_seed=34):
x = npy.sample('pred', dist.MultivariateNormal(loc=mu, covariance_matrix=var))
fig, ax = plt.subplots(1, 3, figsize=(3*5, 4.5))
ax[0].plot(data['redwoodfull.x'], data['redwoodfull.y'], 'w.')
show_data(ax[0], X, y.reshape(20, -1).T, title='True data')
ax[1].plot(data['redwoodfull.x'], data['redwoodfull.y'], 'w.')
show_data(ax[1], X, jnp.exp(svi.mean('f', 'posterior_predictive')).reshape(-1, 20).T, title='Fitted GP')
im = ax[2].imshow(np.exp(x.mean(0).reshape(-1, num_points)).T, origin='lower', cmap='viridis', extent=(0,1,0,1))
ax[2].plot(data['redwoodfull.x'], data['redwoodfull.y'], 'w.')
divider = make_axes_locatable(ax[2])
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, cax=cax, orientation='vertical')
ax[2].set_title('Conditional')
svi.rng_key
###Output
_____no_output_____ |
Lab2/notebooks/lab2.ipynb | ###Markdown
Image Processing Laboratory - Practice Lab 2
DFT Computation We start by importing NumPy, a Python library for efficient numerical computation, particularly with arrays and matrices.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
We initialize the two 4x4 arrays with zeros, using the `complex` dtype so that each matrix can hold complex-valued entries.
###Code
fft_basis = np.zeros((4,4), dtype=complex)
ift_basis = np.zeros((4,4), dtype=complex)
###Output
_____no_output_____
###Markdown
Q1. Compute the basis vectors of the forward 1D DFT for N=4 For each n and k in [0, N) we compute the basis value
```
exp(-j 2pi kn/N)
```
this 2D matrix is the required basis vector
###Code
for n in range(4):
for k in range(4):
fft_basis[n][k] = np.exp(-1j*2*np.pi*k*n/4)
print(fft_basis)
###Output
[[ 1.0000000e+00+0.0000000e+00j 1.0000000e+00+0.0000000e+00j
1.0000000e+00+0.0000000e+00j 1.0000000e+00+0.0000000e+00j]
[ 1.0000000e+00+0.0000000e+00j 6.1232340e-17-1.0000000e+00j
-1.0000000e+00-1.2246468e-16j -1.8369702e-16+1.0000000e+00j]
[ 1.0000000e+00+0.0000000e+00j -1.0000000e+00-1.2246468e-16j
1.0000000e+00+2.4492936e-16j -1.0000000e+00-3.6739404e-16j]
[ 1.0000000e+00+0.0000000e+00j -1.8369702e-16+1.0000000e+00j
-1.0000000e+00-3.6739404e-16j 5.5109106e-16-1.0000000e+00j]]
###Markdown
Q2. Compute the basis vectors of the inverse 1D DFT for N=4
Similar to the above case,
For each n and k in [0, N) we compute the basis value
```
exp(+j 2pi kn/N)
```
this 2D matrix is the required basis vector
###Code
for n in range(4):
for k in range(4):
ift_basis[n][k] = np.exp(1j*2*np.pi*k*n/4)
print(ift_basis)
###Output
[[ 1.0000000e+00+0.0000000e+00j 1.0000000e+00+0.0000000e+00j
1.0000000e+00+0.0000000e+00j 1.0000000e+00+0.0000000e+00j]
[ 1.0000000e+00+0.0000000e+00j 6.1232340e-17+1.0000000e+00j
-1.0000000e+00+1.2246468e-16j -1.8369702e-16-1.0000000e+00j]
[ 1.0000000e+00+0.0000000e+00j -1.0000000e+00+1.2246468e-16j
1.0000000e+00-2.4492936e-16j -1.0000000e+00+3.6739404e-16j]
[ 1.0000000e+00+0.0000000e+00j -1.8369702e-16-1.0000000e+00j
-1.0000000e+00+3.6739404e-16j 5.5109106e-16+1.0000000e+00j]]
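###Markdown
As a quick sanity check of the two basis matrices (a sketch, using this notebook's convention of placing the 1/N factor in the forward transform): multiplying the inverse basis by the forward basis and scaling by 1/N should give the identity matrix, reflecting the orthogonality of the DFT basis vectors.
###Code
# (1/N) * B_inverse . B_forward should be (approximately) the identity matrix
identity_check = 1/4 * np.dot(ift_basis, fft_basis)
print(np.round(np.real(identity_check), 10))
###Output
_____no_output_____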
###Markdown
Q3. Consider a sequence ```f(x) = [2, 3, 4, 4]```. Find the DFT of f(x). Finally, using the given f(x) and the basis matrix we found, we can calculate the DFT using
```
F(x) = 1/N * B.f(x)
```
where N is the number of discrete points (here 4), f is the column vector of the given discrete function, B is the basis matrix, and . denotes matrix multiplication
###Code
f = [2, 3, 4, 4]
print(1/4 * np.dot(fft_basis, np.transpose(f)))
###Output
[ 3.25+0.0000000e+00j -0.5 +2.5000000e-01j -0.25-2.1431319e-16j
-0.5 -2.5000000e-01j]
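###Markdown
We can also close the loop with the inverse basis from Q2 (a short sketch): applying the inverse basis to the computed DFT should recover the original sequence f(x), up to floating-point error. With the 1/N factor placed in the forward transform, no extra scaling is needed here.
###Code
# Recover f(x) from its DFT using the inverse basis
F = 1/4 * np.dot(fft_basis, np.transpose(f))
f_reconstructed = np.dot(ift_basis, F)
print(np.round(np.real(f_reconstructed), 10))
###Output
_____no_output_____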
###Markdown
Finally, to verify the calculated values, we use the built-in FFT function from NumPy
###Code
np.fft.fft(f)/4
###Output
_____no_output_____ |