markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Note that it is possible to nest multiple timeouts. | with ExpectTimeout(5):
with ExpectTimeout(3):
long_running_test()
long_running_test() | Start
0 seconds have passed
1 seconds have passed
| MIT | docs/notebooks/ExpectError.ipynb | bjrnmath/debuggingbook |
That's it, folks – enjoy! Synopsis: The `ExpectError` class allows you to catch and report exceptions, yet resume execution. This is useful in notebooks, as they would normally interrupt execution as soon as an exception is raised. Its typical usage is in conjunction with a `with` clause: | with ExpectError():
x = 1 / 0 | Traceback (most recent call last):
File "<ipython-input-1-264328004f25>", line 2, in <module>
x = 1 / 0
ZeroDivisionError: division by zero (expected)
| MIT | docs/notebooks/ExpectError.ipynb | bjrnmath/debuggingbook |
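A context manager with this behaviour can be sketched in a few lines. This is only an illustrative approximation of the idea, not necessarily how the debuggingbook `ExpectError` itself is implemented:

```python
import traceback

class ExpectErrorSketch:
    """Illustrative stand-in for ExpectError: report the exception, then resume."""
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        if exc_type is None:
            return False            # no exception raised: nothing to do
        traceback.print_exception(exc_type, exc_value, tb)  # report it
        return True                 # suppress it so the notebook keeps running

with ExpectErrorSketch():
    x = 1 / 0
print("execution continues")
```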
The `ExpectTimeout` class allows you to interrupt execution after the specified time. This is useful for interrupting code that might otherwise run forever. | with ExpectTimeout(5):
long_running_test() | Start
0 seconds have passed
1 seconds have passed
2 seconds have passed
3 seconds have passed
| MIT | docs/notebooks/ExpectError.ipynb | bjrnmath/debuggingbook |
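On Unix-like systems, one way such a timeout can be realized is with `signal.alarm`. The sketch below only illustrates the idea; the actual `ExpectTimeout` implementation differs (for one thing, it supports the nesting shown above):

```python
import signal

class TimeoutSketch:
    """Interrupt the body of the with-block after `seconds` (Unix only)."""
    def __init__(self, seconds):
        self.seconds = seconds

    def _handler(self, signum, frame):
        raise TimeoutError(f"interrupted after {self.seconds} seconds")

    def __enter__(self):
        self.old_handler = signal.signal(signal.SIGALRM, self._handler)
        signal.alarm(self.seconds)   # schedule SIGALRM
        return self

    def __exit__(self, exc_type, exc_value, tb):
        signal.alarm(0)              # cancel the pending alarm
        signal.signal(signal.SIGALRM, self.old_handler)
        return exc_type is TimeoutError   # swallow only our own timeout
```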
Export JSON to TXT + CSV: The WE1S workflows use JSON format internally for manipulating data. However, you may wish to export JSON data from a project to plain text files with a CSV metadata file for use with other external tools. This notebook uses JSON project data to export a collection of plain txt files — one per JSON document — containing only the document contents field or bag of words. Each file is named with the name of the JSON document and a `.txt` extension. It also produces a `metadata.csv` file. This file contains a header and one row per document with the document filename plus required fields. Output from this notebook can be imported using the import module by copying the `txt.zip` and `metadata.csv` from `project_data/txt` to `project_data/import`. However, it is generally not recommended to export and then reimport data, as you may lose metadata in the process. Info: __authors__ = 'Jeremy Douglass, Scott Kleinman' __copyright__ = 'copyright 2020, The WE1S Project' __license__ = 'MIT' __version__ = '2.6' __email__ = '[email protected]' Setup: This cell imports Python modules and defines import file paths. | # Python imports
from pathlib import Path
from IPython.display import display, HTML
# Get path to project_dir
current_dir = %pwd
project_dir = str(Path(current_dir).parent.parent)
json_dir = project_dir + '/project_data/json'
config_path = project_dir + '/config/config.py'
export_script_path = 'scripts/json_to_txt_csv.py'
# Import the project configuration and classes
%run {config_path}
%run {export_script_path}
display(HTML('Ready!')) | _____no_output_____ | MIT | src/templates/v0.1.9/modules/export/json_to_txt_csv.ipynb | whatevery1says/we1s-templates |
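For orientation, the core of the export described above can be sketched as follows. This is only an illustration of the logic (one `.txt` per document plus a `metadata.csv` row), not the actual `scripts/json_to_txt_csv.py`; the field names mirror the configuration cell below.

```python
import csv
import json
from pathlib import Path

def sketch_json_to_txt_csv(json_dir, txt_dir, content_fields, meta_fields, metafile):
    """Write one .txt per JSON document and a metadata.csv with one row per document."""
    Path(txt_dir).mkdir(parents=True, exist_ok=True)
    with open(metafile, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['filename'] + meta_fields)
        for json_path in Path(json_dir).glob('*.json'):
            doc = json.loads(json_path.read_text(encoding='utf-8'))
            # Use the first listed content field the document actually has.
            field = next((k for k in content_fields if k in doc), None)
            if field is None:
                continue  # documents with no content field are excluded
            txt_path = Path(txt_dir) / (json_path.stem + '.txt')
            txt_path.write_text(str(doc[field]), encoding='utf-8')
            writer.writerow([txt_path.name] + [doc.get(k, '') for k in meta_fields])
```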
Configuration: The default configuration assumes: 1. There are JSON files in `project_data/json`. 2. Each JSON has the required fields `pub_date`, `title`, `author`. 3. Each JSON file has either: - a `content` field, or - a `bag_of_words` field created using the `import` module tokenizer (see the "Export Features Tables" section below to export text from the `features` field). By default, the notebook will export to `project_data/txt`. | limit = 10 # limit files exported -- 0 = unlimited.
txt_dir = project_dir + '/project_data/txt'
metafile = project_dir + '/project_data/txt/metadata.csv'
zipfile = project_dir + '/project_data/txt/txt.zip'
# The listed fields will be checked in order.
# The first one encountered will be the export content.
# Documents with no listed field will be excluded from export.
txt_content_fields = ['content', 'bag_of_words']
# The listed fields will be copied from json to metadata.csv columns
csv_export_fields = ['pub_date', 'title', 'author']
# Set to true to zip the exported text files and remove the originals
zip_output = True
# Delete any previous export contents in the `txt` directory, including `metadata` file and zip file
clear_cache = True | _____no_output_____ | MIT | src/templates/v0.1.9/modules/export/json_to_txt_csv.ipynb | whatevery1says/we1s-templates |
Export: Start the export. | # Optionally, clear the cache
if clear_cache:
clear_txt(txt_dir, metafile=metafile, zipfile=zipfile)
# Perform the export
json_to_txt_csv(json_dir=json_dir,
txt_dir=txt_dir,
txt_content_fields=txt_content_fields,
csv_export_fields=csv_export_fields,
metafile=metafile,
limit=limit)
# Inspect results
report_results(txt_dir, metafile)
# Optionally, zip the output
if zip_output:
zip_txt(txt_dir=txt_dir, zipfile=zipfile) | _____no_output_____ | MIT | src/templates/v0.1.9/modules/export/json_to_txt_csv.ipynb | whatevery1says/we1s-templates |
Export Features Tables: If your data contains features tables (lists of lists containing linguistic features), use the cell below to export features tables as CSV files for each document in your JSON folder. Set the `save_path` to a directory where you wish to save the CSV files. If you are using WE1S public data, this may apply to you. | # Configuration
save_path = ''
# Run the export
export_features_tables(save_path, json_dir) | _____no_output_____ | MIT | src/templates/v0.1.9/modules/export/json_to_txt_csv.ipynb | whatevery1says/we1s-templates |
Import an ipynb into another ipynb > A handy trick I ran across: import an ipynb from another ipynb. [Source](https://stackoverflow.com/questions/20186344/importing-an-ipynb-file-from-another-ipynb-file) | # import ipynb.fs.full.try1 as try1
# try1.good() | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
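A minimal usage sketch, assuming the `ipynb` helper package is installed and a sibling notebook named `try1.ipynb` defines a function `good()` (both the package availability and the notebook name are assumptions here):

```python
# pip install ipynb   (assumed to be available)
import ipynb.fs.full.try1 as try1   # runs and imports the whole notebook try1.ipynb
try1.good()                         # call a function defined in that notebook

# ipynb.fs.defs.try1 would import only the definitions (functions/classes),
# skipping the notebook's top-level statements.
```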
Goal of this ipynb > Write the feature extraction functions. References: 1. 品妤's (senior labmate) master's thesis 2. 清彥's master's thesis 3. 杰勳's master's thesis 4. This paper (Scientific Reports, 2019) ```A Machine Learning Approach for the Identification of a Biomarker of Human Pain using fNIRS > Raul Fernandez Rojas, Xu Huang & Keng-Liang Ou``` 5. bbox --> an annotation's bbox does not need an explicit position | import os
import glob
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# As usual, change to the appropriate data directory
print(os.getcwd())
# path = 'C:\\Users\\BOIL_PO\\Desktop\\VFT(2)\\VFT'
# os.chdir(path)
all_csv = glob.glob('Filtered//*.csv')
all_csv[:5] | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Why Time_Host is set as the index: 1. It allows slicing with loc, i.e. by index value, which cuts out exactly 30 seconds; with iloc you would have to count samples. Example: to take 30 seconds with `iloc`, you must first work out how many samples 30 seconds contains and write `.iloc[:n_samples]`; to take 30 seconds with `loc`, just type `[:30]` and it selects every index value up to 30 on its own. | check_df = pd.read_csv(all_csv[5], index_col= 'Unnamed: 0').drop(columns= ['Time_Arduino', 'easingdata'])
# print(check_df.dtypes)
check_df = check_df.set_index('Time_Host')
check_df.head()
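# Quick check of the loc-vs-iloc point above: with Time_Host as the index,
# .loc[:30] keeps every sample whose timestamp is up to 30 s, regardless of the
# sampling rate, while .iloc would need the row count (e.g. 30 s * 24 Hz = 720 rows).
first_30s = check_df.loc[:30]
print(first_30s.shape, first_30s.index.max())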
# Check which columns were read
cols = check_df.columns
print(check_df.columns)
# Plot to verify the signals
stage1 = 30
stage2 = 90
stage3 = 160
text_size = 25
plt.figure(figsize= (18, 14))
for i in range(int(len(check_df.columns)/2)):
plt.subplot(3, 1, i+1)
    # Stage 1 (Rest)
plt.plot(check_df.loc[:stage1].index, check_df.loc[:stage1][cols[2*i]], c= 'b', linewidth=3.0, label= 'Rest')
plt.plot(check_df.loc[:stage1].index, check_df.loc[:stage1][cols[2*i+1]], c= 'r', linewidth=3.0, label= 'Rest')
plt.axvspan(0, stage1, facecolor=sns.color_palette('Paired')[0], alpha=0.5)
plt.vlines(stage1, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text(stage1/2, 1.2, "rest", size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
    # Stage 2 (Task)
plt.plot(check_df.loc[stage1:stage2].index, check_df.loc[stage1:stage2][cols[2*i]], c= 'b', linewidth=3.0, label= 'Task')
plt.plot(check_df.loc[stage1:stage2].index, check_df.loc[stage1:stage2][cols[2*i+1]], c= 'r', linewidth=3.0, label= 'Task')
plt.axvspan(stage1, stage2, facecolor=sns.color_palette('Paired')[1], alpha=0.5)
plt.vlines(stage2, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text((stage2 + stage1)/2, 1.2, 'Task', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
    # Stage 3 (Recovery)
plt.plot(check_df.loc[stage2:stage3].index, check_df.loc[stage2:stage3][cols[2*i]], c= 'b', linewidth=3.0, label= 'Recovery')
plt.plot(check_df.loc[stage2:stage3].index, check_df.loc[stage2:stage3][cols[2*i+1]], c= 'r', linewidth=3.0, label= 'Recovery')
plt.axvspan(stage2, stage3, facecolor=sns.color_palette('Paired')[2], alpha=0.75)
plt.text((stage3 + stage2)/2, 1.2, 'Recovery', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
plt.title(cols[2*i] + "+" + cols[2*i+1], fontdict={'fontsize': 24})
plt.tight_layout(pad= 3)
plt.show() | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Use a for loop for the filtering > You must use for, otherwise you are writing like a humanities major >> for i in range(len(AA)) is acceptable, but if you never use the `**position**` information afterwards and everything is AA[i], that is not humanities, you are just writing C >> Python's for is superb >> for over a str yields characters, for over a list yields elements, for over a model yields layers, and there is the handy list comprehension `[i**3 for i in range(10) if i % 2 == 0]` Feature Extraction (From Lowpass filter) 清彥: 1. Stage onset slope (8 s) $\checkmark$ * Task * Recovery > 2. Difference of stage means $\checkmark$ * Task mean – Rest mean * Recovery mean – Rest mean * Task mean – Recovery mean > 3. Stage peak value $\checkmark$ * Task > 4. Stage standard deviation $\checkmark$ * all three stages > 品妤 > 5. Stage mean $\checkmark$ * all three stages > 6. Difference of stage onset slopes $\checkmark$ * Task - Recovery. Mine: 1. AUC --- 杰勳 (bandpass): 1. Stage skewness 2. Stage kurtosis | # Just re-reading the data, nothing special
exam_df = pd.read_csv(all_csv[0], index_col= 'Unnamed: 0').drop(columns= ['Time_Arduino', 'easingdata'])
# print(exam_df.dtypes)
exam_df = exam_df.set_index('Time_Host')
exam_df.head() | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
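Since every extractor defined below returns a `(values, names)` pair, one way to combine them into a single feature row per recording is sketched here. This is only an illustration that relies on the function names defined in the following cells, so it can only run after they have been executed.

```python
def extract_all_features(dataframe):
    """Concatenate every (values, names) pair into one pandas Series of features."""
    extractors = [stage_begin_slope, stage_mean, stage_mean_diff, stage_acivation,
                  stage_std, stage_begin_slope_diff, stage_skew, stage_kurtosis,
                  stage_auc, FFT]
    values, names = [], []
    for fn in extractors:
        v, n = fn(dataframe)   # every extractor returns (list_of_values, list_of_names)
        values += list(v)
        names += list(n)
    return pd.Series(values, index=names)

# feature_row = extract_all_features(exam_df)   # one row of the final feature table
```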
Stage onset slope, 2*6 = 12. 0. Definition: the first "eight seconds" after a stage begins, in units of `?/s`. 1. Returns lists 2. 30~38 s -> Task 3. 90~98 s -> Recovery ---- | def stage_begin_slope(dataframe, plot= False, figsize= (10, 6), use_col= 0):
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of slope
# Tuple[1] : List of index
#=======================
slope_df = dataframe.loc[30:38]
slope12 = []
slope12_index = [col + "_Task_begin_slope" for col in slope_df.columns]
for i in range(len(slope_df.columns)):
        a = (slope_df.iloc[-1, i] - slope_df.iloc[0, i])/8 # eight-second window
slope12.append(a)
slope_df34 = dataframe.loc[90:98]
slope34 = []
slope34_index = [col + "_stage_Recovery_slope" for col in slope_df34.columns]
for i in range(len(slope_df.columns)):
        a = (slope_df34.iloc[-1, i] - slope_df34.iloc[0, i])/8 # eight-second window
slope34.append(a)
if plot == True:
#-------plot
plt.figure(figsize= figsize)
stage1 = 30
stage2 = 90
stage3 = 160
text_size = 25
xp1 = np.arange(30, 38, 0.1)
x1 = np.arange(0, 8, 0.1)
y1 = x1*slope12[use_col] + slope_df.iloc[0, use_col]
xp2 = np.arange(90, 98, 0.1)
x2 = np.arange(0, 8, 0.1)
y2 = x2*slope34[use_col] + slope_df34.iloc[0, use_col]
plt.plot(dataframe.loc[:stage1].index, dataframe.loc[:stage1, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Rest')
plt.axvspan(0, stage1, facecolor=sns.color_palette('Paired')[0], alpha=0.5)
plt.vlines(stage1, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.vlines(stage1 + 8, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text(stage1/2, 1.2, "rest", size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 2 (Task)
plt.plot(dataframe.loc[stage1:stage2].index, dataframe.loc[stage1:stage2, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Task')
plt.plot(xp1, y1, linewidth=5.0, c= 'r')
plt.axvspan(stage1, stage2, facecolor=sns.color_palette('Paired')[1], alpha=0.5)
plt.vlines(stage2, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.vlines(stage2 + 8, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text((stage2 + stage1)/2, 1.2, 'Task', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 3 (Recovery)
plt.plot(dataframe.loc[stage2:stage3].index, dataframe.loc[stage2:stage3, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Recovery')
plt.plot(xp2, y2, linewidth=5.0, c= 'r')
plt.axvspan(stage2, stage3, facecolor=sns.color_palette('Paired')[2], alpha=0.75)
plt.text((stage3 + stage2)/2, 1.2, 'Recovery', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
plt.title(dataframe.columns[use_col] + "_stage_begin_slope", fontdict={'fontsize': 24})
plt.show()
return slope12 + slope34, slope12_index + slope34_index
# Quick test plot
stage_begin_slope(exam_df, plot= True) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Plot all channels | # for i in range(6):
# stage_begin_slope(exam_df, plot= True, use_col= i) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Stage means, 3*6 = 18. 1. 0~30 s -> Rest 2. 30~90 s -> Task 3. 90~160 s -> Recovery | def stage_mean(dataframe, plot= False, figsize= (10, 6), use_col= 0):
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of mean
# Tuple[1] : List of index
#=======================
stage1 = 30
stage2 = 90
stage3 = 160
Rest = []
Task = []
Recovery = []
Rest_c = []
Task_c = []
Recovery_c = []
for col in dataframe.columns:
        Rest.append(dataframe.loc[:stage1, col].mean()) # pandas provides .mean()
Rest_c.append(col + '_Rest_mean')
Task.append(dataframe.loc[stage1:stage2, col].mean())
Task_c.append(col + '_Task_mean')
Recovery.append(dataframe.loc[stage2:stage3, col].mean())
Recovery_c.append(col + '_Recovery_mean')
if plot == True:
#-------plot
plt.figure(figsize= figsize)
text_size = 25
xp1 = np.arange(0, stage1, 0.1)
y1 = np.full(xp1.shape, Rest[use_col])
xp2 = np.arange(stage1, stage2, 0.1)
y2 = np.full(xp2.shape, Task[use_col])
xp3 = np.arange(stage2, stage3, 0.1)
y3 = np.full(xp3.shape, Recovery[use_col])
plt.plot(dataframe.loc[:stage1].index, dataframe.loc[:stage1, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Rest')
plt.plot(xp1, y1, linewidth=5.0, c= 'r')
plt.axvspan(0, stage1, facecolor=sns.color_palette('Paired')[0], alpha=0.5)
plt.vlines(stage1, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text(stage1/2, 1.2, "rest", size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 2 (Task)
plt.plot(dataframe.loc[stage1:stage2].index, dataframe.loc[stage1:stage2, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Task')
plt.plot(xp2, y2, linewidth=5.0, c= 'r')
plt.axvspan(stage1, stage2, facecolor=sns.color_palette('Paired')[1], alpha=0.5)
plt.vlines(stage2, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text((stage2 + stage1)/2, 1.2, 'Task', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 3 (Recovery)
plt.plot(xp3, y3, linewidth=5.0, c= 'r')
plt.plot(dataframe.loc[stage2:stage3].index, dataframe.loc[stage2:stage3, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Recovery')
plt.axvspan(stage2, stage3, facecolor=sns.color_palette('Paired')[2], alpha=0.75)
plt.text((stage3 + stage2)/2, 1.2, 'Recovery', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
plt.title(dataframe.columns[use_col] + "_stage_mean", fontdict={'fontsize': 24})
plt.show()
return Rest + Task + Recovery, Rest_c + Task_c + Recovery_c
| _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Plot all channels | # for i in range(6):
# stage_mean(exam_df, plot= True, use_col=i) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Differences of stage means -> 2*6 = 12 * Task mean – Rest mean * Task mean – Recovery mean; Activation value -> 1*6 * Recovery mean – Rest mean | def stage_mean_diff(dataframe, plot= False, figsize= (10, 6), use_col= 0):
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of mean diff or activation
# Tuple[1] : List of index
#=======================
stage1 = 30
stage2 = 90
stage3 = 160
Task_Rest = []
Recovery_Rest = []
Task_recovery = []
Task_Rest_c = []
Recovery_Rest_c = []
Task_recovery_c = []
for col in dataframe.columns:
        # Differences of stage means
Task_Rest.append(dataframe.loc[stage1:stage2, col].mean() - dataframe.loc[:stage1, col].mean())
Task_Rest_c.append(col + '_Task_m_Rest')
Task_recovery.append(dataframe.loc[stage1:stage2, col].mean() - dataframe.loc[stage2:stage3, col].mean())
Task_recovery_c.append(col + '_Task_m_Recovery')
        # Activation value
Recovery_Rest.append(dataframe.loc[stage2:stage3, col].mean() - dataframe.loc[:stage1, col].mean())
Recovery_Rest_c.append(col + '_Recovery_Rest_Activation')
if plot == True:
import matplotlib.patches as patches
Rest = []
Task = []
Recovery = []
Rest_c = []
Task_c = []
Recovery_c = []
for col in dataframe.columns:
Rest.append(dataframe.loc[:stage1, col].mean())
Rest_c.append(col + '_Rest_mean')
Task.append(dataframe.loc[stage1:stage2, col].mean())
Task_c.append(col + '_Task_mean')
Recovery.append(dataframe.loc[stage2:stage3, col].mean())
Recovery_c.append(col + '_Recovery_mean')
#-------plot
plt.figure(figsize= figsize)
text_size = 25
xp1 = np.arange(0, stage1, 0.1)
y1 = np.full(xp1.shape, Rest[use_col])
xp2 = np.arange(stage1, stage2, 0.1)
y2 = np.full(xp2.shape, Task[use_col])
xp3 = np.arange(stage2, stage3, 0.1)
y3 = np.full(xp3.shape, Recovery[use_col])
plt.plot(dataframe.loc[:stage1].index, dataframe.loc[:stage1, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Rest')
plt.plot(xp1, y1, linewidth=3.0, c= 'r')
plt.axvspan(0, stage1, facecolor=sns.color_palette('Paired')[0], alpha=0.5)
plt.vlines(stage1, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=1.0)
plt.text(stage1/2, 1.2, "rest", size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 2 (Task)
plt.plot(dataframe.loc[stage1:stage2].index, dataframe.loc[stage1:stage2, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Task')
plt.plot(xp2, y2, linewidth=3.0, c= 'r')
plt.annotate(s='', xy=(stage1 + 2, Task[use_col] - 0.03), xytext=(stage1 + 2, Rest[use_col] +0.03), arrowprops=dict(arrowstyle='<->', mutation_scale=10, color= 'k', linewidth= 5))
plt.axvspan(stage1, stage2, facecolor=sns.color_palette('Paired')[1], alpha=0.5)
plt.vlines(stage2, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=1.0)
plt.text((stage2 + stage1)/2, 1.2, 'Task', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 3 (Recovery)
plt.plot(xp3, y3, linewidth=3.0, c= 'r')
plt.plot(dataframe.loc[stage2:stage3].index, dataframe.loc[stage2:stage3, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Recovery')
plt.annotate(s='', xy=(stage2 + 2, Recovery[use_col] - 0.03), xytext=(stage2 + 2, Task[use_col] +0.03),arrowprops=dict(arrowstyle='<->', mutation_scale=10, color= 'k', linewidth= 5))
plt.axvspan(stage2, stage3, facecolor=sns.color_palette('Paired')[2], alpha=0.75)
plt.text((stage3 + stage2)/2, 1.2, 'Recovery', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
plt.title(dataframe.columns[use_col] + "_stage_mean_diff", fontdict={'fontsize': 24})
plt.show()
return Task_Rest + Recovery_Rest + Task_recovery, Task_Rest_c + Recovery_Rest_c + Task_recovery_c | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Plot one channel to check | stage_mean_diff(exam_df, plot= True, use_col= 4) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Stage peak value, 1*6 = 6 * Task | def stage_acivation(dataframe, plot= False, figsize= (10, 6), use_col= 0):
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
    #     Tuple[0] : List of peak values
# Tuple[1] : List of index
#=======================
stage1 = 30
stage2 = 90
stage3 = 160
diffs = []
diffs_name = []
for cols in dataframe.columns:
diff = dataframe.loc[stage1:stage2, cols].max() - dataframe.loc[stage1:stage2, cols].min()
diffs.append(diff)
diffs_name.append(cols + "_stage_activation")
if plot == True:
#-------plot
plt.figure(figsize= figsize)
text_size = 25
plt.plot(dataframe.loc[:stage1].index, dataframe.loc[:stage1, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Rest')
plt.axvspan(0, stage1, facecolor=sns.color_palette('Paired')[0], alpha=0.5)
plt.vlines(stage1, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text(stage1/2, 1.2, "rest", size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 2 (Task)
plt.plot(dataframe.loc[stage1:stage2].index, dataframe.loc[stage1:stage2, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Task')
plt.axvspan(stage1, stage2, facecolor=sns.color_palette('Paired')[1], alpha=0.5)
plt.vlines(stage2, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.hlines(dataframe.loc[stage1:stage2, dataframe.columns[use_col]].min(), stage1, stage2, linestyles= '-', colors= 'black', linewidth=5.0)
plt.hlines(dataframe.loc[stage1:stage2, dataframe.columns[use_col]].max(), stage1, stage2, linestyles= '-', colors= 'black', linewidth=5.0)
plt.annotate(s='', xy=( (stage1 + stage2)/2, dataframe[dataframe.columns[use_col]].loc[stage1:stage2].min()), xytext=( (stage1 + stage2)/2, dataframe[dataframe.columns[use_col]].loc[stage1:stage2].max()),arrowprops=dict(arrowstyle='<->', mutation_scale=10, color= 'k', linewidth= 5))
plt.text((stage2 + stage1)/2, 1.2, 'Task', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 3 (Recovery)
plt.plot(dataframe.loc[stage2:stage3].index, dataframe.loc[stage2:stage3, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Recovery')
plt.axvspan(stage2, stage3, facecolor=sns.color_palette('Paired')[2], alpha=0.75)
plt.text((stage3 + stage2)/2, 1.2, 'Recovery', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
plt.title(dataframe.columns[use_col] + "_stage_acivation", fontdict={'fontsize': 24})
plt.show()
return diffs, diffs_name
| _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Plot all channels | # for i in range(6):
# stage_acivation(exam_df, plot= True, use_col= i) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Stage standard deviation * all three stages. Note: the standard deviations must not be normalized. | def stage_std(dataframe, plot= False, figsize= (10, 6), use_col= 0):
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of std
# Tuple[1] : List of index
#=======================
stage1 = 30
stage2 = 90
stage3 = 160
Rest_std = []
Task_std = []
Recovery_std = []
Rest_std_c = []
Task_std_c = []
Recovery_std_c = []
for col in dataframe.columns:
        Rest_std.append(dataframe.loc[:stage1, col].std()) # .std() is quick and convenient
Rest_std_c.append(col + '_Rest_std')
Task_std.append(dataframe.loc[stage1:stage2, col].std())
Task_std_c.append(col + '_Task_std')
Recovery_std.append(dataframe.loc[stage2:stage3, col].std())
Recovery_std_c.append(col + '_Recovery_std')
if plot == True:
Rest = []
Task = []
Recovery = []
Rest_c = []
Task_c = []
Recovery_c = []
for col in dataframe.columns:
Rest.append(dataframe.loc[:stage1, col].mean())
Rest_c.append(col + '_Rest_mean')
Task.append(dataframe.loc[stage1:stage2, col].mean())
Task_c.append(col + '_Task_mean')
Recovery.append(dataframe.loc[stage2:stage3, col].mean())
Recovery_c.append(col + '_Recovery_mean')
#-------plot
plt.figure(figsize= figsize)
text_size = 25
xp1 = np.arange(0, stage1, 0.1)
y1 = np.full(xp1.shape, Rest[use_col])
xp2 = np.arange(stage1, stage2, 0.1)
y2 = np.full(xp2.shape, Task[use_col])
xp3 = np.arange(stage2, stage3, 0.1)
y3 = np.full(xp3.shape, Recovery[use_col])
plt.plot(dataframe.loc[:stage1].index, dataframe.loc[:stage1, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Rest')
plt.plot(xp1, y1, linewidth=5.0, c= 'r')
plt.errorbar((stage1)/2, Rest[use_col], Rest_std[use_col], linestyle='-', marker='^', elinewidth= 3, ecolor= 'k', capsize= 10)
plt.axvspan(0, stage1, facecolor=sns.color_palette('Paired')[0], alpha=0.5)
plt.vlines(stage1, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text(stage1/2, 1.2, "rest", size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 2 (Task)
plt.plot(dataframe.loc[stage1:stage2].index, dataframe.loc[stage1:stage2, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Task')
plt.plot(xp2, y2, linewidth=5.0, c= 'r')
plt.errorbar((stage1 + stage2)/2, Task[use_col], Task_std[use_col], linestyle='-', marker='^', elinewidth= 3, ecolor= 'k', capsize= 10)
plt.axvspan(stage1, stage2, facecolor=sns.color_palette('Paired')[1], alpha=0.5)
plt.vlines(stage2, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text((stage2 + stage1)/2, 1.2, 'Task', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 3 (Recovery)
plt.plot(xp3, y3, linewidth=5.0, c= 'r')
plt.plot(dataframe.loc[stage2:stage3].index, dataframe.loc[stage2:stage3, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Recovery')
plt.errorbar((stage3 + stage2)/2, Recovery[use_col], Recovery_std[use_col], linestyle='-', marker='^', elinewidth= 3, ecolor= 'k', capsize= 10)
plt.axvspan(stage2, stage3, facecolor=sns.color_palette('Paired')[2], alpha=0.75)
plt.text((stage3 + stage2)/2, 1.2, 'Recovery', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
plt.title(dataframe.columns[use_col] + "_stage_std", fontdict={'fontsize': 24})
plt.show()
return Rest_std + Task_std + Recovery_std, Rest_std_c + Task_std_c + Recovery_std_c | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Plot all channels | # for i in range(6):
# stage_std(exam_df, plot= True, use_col= i) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Difference of stage onset slopes * Task - Recovery | def stage_begin_slope_diff(dataframe):
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of slope diff
# Tuple[1] : List of index
#=======================
slope_df = dataframe.loc[30:38]
slope12 = []
for i in range(len(slope_df.columns)):
        a = (slope_df.iloc[-1, i] - slope_df.iloc[0, i])/8 # eight-second window
slope12.append(a)
slope_df34 = dataframe.loc[90:98]
slope34 = []
for i in range(len(slope_df.columns)):
        a = (slope_df34.iloc[-1, i] - slope_df34.iloc[0, i])/8 # eight-second window
slope34.append(a)
colset = []
for col in dataframe.columns:
colset.append(col + "_Task_Recovery_begin_slope_diff")
slope_diff = np.array(slope12) - np.array(slope34)
return list(slope_diff), colset
stage_begin_slope_diff(exam_df) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Stage skewness -> use scipy * all three stages > data massed toward the left gives "positive" skew >> data massed toward the right gives "negative" skew [Handy guide to inset plots](https://www.itread01.com/p/518289.html) | def stage_skew(dataframe, plot= False, figsize= (10, 6), use_col= 0):
from scipy.stats import skew
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of skew
# Tuple[1] : List of index
#=======================
stage1 = 30
stage2 = 90
stage3 = 160
text_size = 25
rest_skew = []
task_skew = []
recovery_skew = []
rest_skew_c = []
task_skew_c = []
recovery_skew_c = []
for cols in dataframe.columns:
rest_skew.append(skew(dataframe.loc[:stage1, cols]))
rest_skew_c.append(cols + '_rest_skew')
task_skew.append(skew(dataframe.loc[stage1:stage2, cols]))
task_skew_c.append(cols + '_task_skew')
recovery_skew.append(skew(dataframe.loc[stage2:stage3, cols]))
recovery_skew_c.append(cols + '_recovery_skew')
if plot == True:
#-------plot
plt.figure(figsize= figsize)
plt.plot(dataframe.loc[:stage1].index, dataframe.loc[:stage1, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Rest')
plt.axvspan(0, stage1, facecolor=sns.color_palette('Paired')[0], alpha=0.5)
plt.vlines(stage1, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text(stage1/2, 1.2, "rest", size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 2 (Task)
plt.plot(dataframe.loc[stage1:stage2].index, dataframe.loc[stage1:stage2, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Task')
plt.axvspan(stage1, stage2, facecolor=sns.color_palette('Paired')[1], alpha=0.5)
plt.vlines(stage2, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text((stage2 + stage1)/2, 1.2, 'Task', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 3 (Recovery)
plt.plot(dataframe.loc[stage2:stage3].index, dataframe.loc[stage2:stage3, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Recovery')
plt.axvspan(stage2, stage3, facecolor=sns.color_palette('Paired')[2], alpha=0.75)
plt.text((stage3 + stage2)/2, 1.2, 'Recovery', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
plt.title(dataframe.columns[use_col] + "_stage_skew", fontdict={'fontsize': 24})
plt.axes([0.65, 0.2, 0.2, 0.2])
sns.histplot(dataframe.loc[stage1:stage2, dataframe.columns[use_col]], bins= 30)
plt.title("Task skew", fontdict={'fontsize': 13})
plt.show()
return rest_skew + task_skew + recovery_skew, rest_skew_c + task_skew_c + recovery_skew_c
| _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Plot all channels | # for i in range(6):
# a = stage_skew(exam_df, plot= True, use_col= i) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Stage kurtosis (peakedness) * all three stages | def stage_kurtosis(dataframe):
from scipy.stats import kurtosis
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of kurtosis
# Tuple[1] : List of index
#=======================
stage1 = 30
stage2 = 90
stage3 = 160
text_size = 25
rest_skew = []
task_skew = []
recovery_skew = []
rest_skew_c = []
task_skew_c = []
recovery_skew_c = []
for cols in dataframe.columns:
rest_skew.append(kurtosis(dataframe.loc[:stage1, cols]))
rest_skew_c.append(cols + '_rest_kurtosis')
task_skew.append(kurtosis(dataframe.loc[stage1:stage2, cols]))
task_skew_c.append(cols + '_task_kurtosis')
recovery_skew.append(kurtosis(dataframe.loc[stage2:stage3, cols]))
recovery_skew_c.append(cols + '_recovery_kurtosis')
return rest_skew + task_skew + recovery_skew, rest_skew_c + task_skew_c + recovery_skew_c
stage_kurtosis(dataframe= exam_df) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
AUC -> use sklearn * all three stages 1. I looked at several options, e.g. scipy.integrate and numpy.trapz 2. sklearn's auc is still the most convenient (the others are worth trying here too, not mandatory) | def stage_auc(dataframe, plot= False, figsize= (10, 6), use_col= 0):
from sklearn.metrics import auc
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of auc
# Tuple[1] : List of index
#=======================
stage1 = 30
stage2 = 90
    stage3 = 160
    text_size = 25  # used by the plt.text labels in the plot section
rest_auc = []
Task_auc = []
recovery_auc = []
rest_auc_c = []
Task_auc_c = []
recovery_auc_c = []
for cols in dataframe.columns:
rest_auc.append(auc(dataframe.loc[:stage1, cols].index, dataframe.loc[:stage1, cols]))
rest_auc_c.append(cols + '_rest_auc')
Task_auc.append(auc(dataframe.loc[stage1:stage2, cols].index, dataframe.loc[stage1:stage2, cols]))
Task_auc_c.append(cols + '_Task_auc')
recovery_auc.append(auc(dataframe.loc[stage2:stage3, cols].index, dataframe.loc[stage2:stage3, cols]))
recovery_auc_c.append(cols + '_recovery_auc')
if plot == True:
#-------plot
plt.figure(figsize= figsize)
plt.plot(dataframe.loc[:stage1].index, dataframe.loc[:stage1, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Rest')
yy1 = dataframe.loc[0:stage1, dataframe.columns[use_col]]
plt.fill_between(np.linspace(0, stage1, yy1.shape[0]), yy1, step="pre", facecolor=sns.color_palette('Paired')[0], y2=-0.1)
plt.vlines(stage1, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text(stage1/2, 1.2, "rest", size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 2 (Task)
plt.plot(dataframe.loc[stage1:stage2].index, dataframe.loc[stage1:stage2, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Task')
yy2 = dataframe.loc[stage1:stage2, dataframe.columns[use_col]]
plt.fill_between(np.linspace(stage1, stage2, yy2.shape[0]), yy2, step="pre", facecolor=sns.color_palette('Paired')[1], y2=-0.1)
plt.vlines(stage2, -0.1, 1.3, linestyles= '--', colors= 'black', linewidth=2.0)
plt.text((stage2 + stage1)/2, 1.2, 'Task', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
        # Stage 3 (Recovery)
plt.plot(dataframe.loc[stage2:stage3].index, dataframe.loc[stage2:stage3, dataframe.columns[use_col]], c= 'b', linewidth=2.0, label= 'Recovery')
# plt.axvspan(stage2, stage3, facecolor=sns.color_palette('Paired')[2], alpha=0.75)
plt.text((stage3 + stage2)/2, 1.2, 'Recovery', size= text_size, ha="center", va= 'center', bbox=dict(boxstyle="round",ec=(1., 0.5, 0.5),fc=(1., 0.8, 0.8),))
yy3 = dataframe.loc[stage2:stage3, dataframe.columns[use_col]]
plt.fill_between(np.linspace(stage2, stage3, yy3.shape[0]), yy3, step="pre", facecolor=sns.color_palette('Paired')[2], y2=-0.1)
plt.title(dataframe.columns[use_col] + "_stage_auc", fontdict={'fontsize': 24})
plt.show()
return rest_auc + Task_auc + recovery_auc, rest_auc_c + Task_auc_c + recovery_auc_c
| _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Plot all channels | # for i in range(6):
# stage_auc(exam_df, plot=True, use_col= i) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
FFT 1. The sampling frequency must be at least twice the signal frequency (Nyquist). To clarify: 1. Should the FFT be taken over the "whole" recording, over each of the three stages **separately**, or over the Task stage only? > For now the reasoning is: since the signal is recorded continuously through all three stages, the physiological frequency content should persist throughout, so per-stage FFTs add little and a single FFT over the whole recording is preferable. 2. Squaring -> this gives the so-called PSD: `fft_ps = np.abs(fft_window)**2` > References: [1. ML Fundamentals](https://ataspinar.com/2018/04/04/machine-learning-with-signal-processing-techniques/) -> uses scipy [2. stackoverflow](https://stackoverflow.com/questions/45863400/python-fft-for-feature-extraction) -> uses numpy. The FFT below uses the full time range (0~160 s). | # take the first column
y = exam_df.iloc[:, 0].values | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
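As a compact restatement of the two points above (the Nyquist limit and the squared magnitude), the one-sided spectrum of the full 0~160 s signal and a periodogram-style power spectrum can be computed like this; it is only a sketch of what the cells below do with the two-sided `np.fft.fft`:

```python
fs = 24                                   # sampling rate (Hz); content is only usable below fs/2 = 12 Hz
yf = np.fft.rfft(y)                       # one-sided spectrum of the 0~160 s signal
freqs = np.fft.rfftfreq(len(y), d=1/fs)   # frequency axis from 0 to 12 Hz
power = np.abs(yf) ** 2                   # squared magnitude = (unnormalized) power spectrum / PSD estimate
```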
stack overflow -> numpy | # stack overflow
import numpy as np
sample_rate = 24
N = np.array(y).shape[-1]
# N frequency bins; fftfreq covers 0 to 12 Hz plus the negative frequencies
fft_window = np.fft.fft(y)
freq = np.fft.fftfreq(N, d=1/24)
# Why square? The squared magnitude is the power spectrum (a PSD estimate)
fft_ps = np.abs(fft_window)**2
fft_window.shape, freq.shape, freq.max(), freq.min()
plt.plot(freq) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
The spectrum decays quickly beyond 0.12 Hz (the low-pass cutoff frequency), which looks reasonable | import matplotlib.pyplot as plt
fig = plt.figure(figsize=(14, 7))
plt.plot(freq, 2.0/N *np.abs(fft_window), label= 'FFT')
# plt.plot(freq, np.log10(fft_ps))
plt.ylim(0, 0.08)
plt.xlim(0.005, 0.4)
plt.vlines(0.12, 0, 100, colors= 'r', linestyles= '--', label= 'Cut down Freq (low pass)', )
plt.xlabel("Frequency")
plt.ylabel("Amplitude")
plt.annotate("0.12", (0.110, 0.05), fontsize= 20, bbox=dict(boxstyle="round", ec=(1., 0.5, 0.5), fc=(1., 0.8, 0.8),))
plt.title('FFt', fontsize= 20)
plt.legend()
plt.show() | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
ML Fundamentals -> scipy | from scipy.fft import fft
def get_fft_values(y_values, T, N, f_s):
f_values = np.linspace(0.0, 1.0/(2.0*T), N//2)
fft_values_ = fft(y_values)
    # 2/N scaling: standard single-sided amplitude normalization
fft_values = 2.0/N * np.abs(fft_values_[0:N//2])
return f_values, fft_values
f_s = 24
T = 1/f_s
N = np.array(y).shape[-1]
f_values, fft_values = get_fft_values(y, T, N, f_s)
plt.figure(figsize= (14, 7))
plt.plot(f_values, fft_values, linestyle='-', color='blue')
plt.xlabel('Frequency [Hz]', fontsize=16)
plt.ylabel('Amplitude', fontsize=16)
plt.title("Frequency domain of the signal", fontsize=16)
plt.vlines(0.12, 0, 0.085, colors= 'r', linestyles= '--', label= 'Cut down Freq (low pass)', )
plt.annotate("0.12", (0.110, 0.05), fontsize= 20, bbox=dict(boxstyle="round", ec=(1., 0.5, 0.5), fc=(1., 0.8, 0.8),))
plt.ylim(0, 0.08)
plt.xlim(0.005, 0.4)
plt.show() | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Peak finding -> scipy > The site below lists many approaches [1. Useful site](https://www.delftstack.com/zh-tw/howto/python/find-peaks-in-python/) | import numpy as np
from scipy.signal import argrelextrema
peaks = argrelextrema(fft_values, np.greater)
print(peaks)
f_values[5], fft_values[5]
for ind in peaks[0]:
print(f_values[ind], fft_values[ind])
peaks[0]
plt.figure(figsize= (14, 7))
plt.plot(f_values, fft_values, linestyle='-', color='blue')
plt.xlabel('Frequency [Hz]', fontsize=16)
plt.ylabel('Amplitude', fontsize=16)
plt.title("Frequency domain of the signal", fontsize=16)
plt.vlines(0.12, 0, 0.085, colors= 'r', linestyles= '--', label= 'Cut down Freq (low pass)', )
plt.annotate("0.12", (0.110, 0.05), fontsize= 20, bbox=dict(boxstyle="round", ec=(1., 0.5, 0.5), fc=(1., 0.8, 0.8),))
for ind in peaks[0]:
plt.annotate("peak", (f_values[ind]-0.005, fft_values[ind]), bbox=dict(boxstyle="Circle", alpha= 0.4, ec=(1., 0.5, 0.5), fc=(1., 0.8, 0.8),))
plt.ylim(0, 0.08)
plt.xlim(0.005, 0.4)
plt.show()
# Keep only peaks below 0.12 Hz?
save_index = [x for x in peaks[0] if f_values[x] <= 0.12]
print(save_index)
# Directly take the top 3 peaks by amplitude
# np.argsort ??
use_ind = np.argsort(fft_values[peaks[0]])[-3:][::-1]
real_ind = peaks[0][use_ind]
real_ind
whole = list(zip(f_values[real_ind], fft_values[real_ind]))
whole
plt.figure(figsize= (14, 7))
plt.plot(f_values, fft_values, linestyle='-', color='blue')
plt.xlabel('Frequency [Hz]', fontsize=16)
plt.ylabel('Amplitude', fontsize=16)
plt.title("Frequency domain of the signal", fontsize=16)
plt.vlines(0.12, 0, 0.085, colors= 'r', linestyles= '--', label= 'Cut down Freq (low pass)', )
plt.annotate("0.12", (0.110, 0.05), fontsize= 20, bbox=dict(boxstyle="round", ec=(1., 0.5, 0.5), fc=(1., 0.8, 0.8),))
for i, val in enumerate(whole):
plt.annotate(f"First {i+1} peak", (val[0]+0.005, val[1]),size=10, bbox=dict(boxstyle="LArrow", alpha= 0.5, ec=(1., 0.5, 0.5), fc=(1., 0.8, 0.8),))
plt.ylim(0, 0.08)
plt.xlim(0.005, 0.4)
plt.show() | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
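The reference above lists several peak-finding approaches; `scipy.signal.find_peaks` is one alternative to `argrelextrema` that can filter by height or prominence directly. The sketch below uses an arbitrary height threshold and simply reproduces the cutoff-plus-top-3 logic:

```python
from scipy.signal import find_peaks

# Keep only peaks below the 0.12 Hz cutoff, then rank them by amplitude.
peak_idx, props = find_peaks(fft_values, height=0.0)
peak_idx = peak_idx[f_values[peak_idx] <= 0.12]
top3 = peak_idx[np.argsort(fft_values[peak_idx])[-3:][::-1]]
print(list(zip(f_values[top3], fft_values[top3])))
```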
FFT * 3 peaks per spectrum, six spectra in total (3 channels * oxy/deoxy haemoglobin) * each peak contributes two values: the amplitude and the peak frequency | def FFT(dataframe, f_s = 24, plot= False):
from scipy.fft import fft
import numpy as np
from scipy.signal import argrelextrema
#============================
# Parameter:
# dataframe: input dataframe
# plot : whether to plot
# figsize: plt.figure(figsize= figsize)
# Return:
# Tuple:
# Tuple[0] : List of fft
# Tuple[1] : List of index
#=======================
save_fft = []
save_fft_index = []
# column 0 fft
for colss in dataframe.columns:
y = dataframe.loc[:, colss].values
def get_fft_values(y_values, T, N, f_s):
f_values = np.linspace(0.0, 1.0/(2.0*T), N//2)
fft_values_ = fft(y_values)
            # 2/N scaling: standard single-sided amplitude normalization
fft_values = 2.0/N * np.abs(fft_values_[0:N//2])
return f_values, fft_values
f_s = f_s
T = 1/f_s
N = np.array(y).shape[-1]
f_values, fft_values = get_fft_values(y, T, N, f_s)
peaks = argrelextrema(fft_values, np.greater)
# print(peaks)
use_ind = np.argsort(fft_values[peaks[0]])[-3:][::-1]
real_ind = peaks[0][use_ind]
whole = list(zip(f_values[real_ind], fft_values[real_ind]))
whole = list(np.array(whole).ravel())
save_fft += whole
save_fft_index += [f'{colss} First Freq', f'{colss} First Amp', f'{colss} Second Freq', f'{colss} Second Amp', f'{colss} Third Freq', f'{colss} Third Amp']
if plot:
plt.figure(figsize= (14, 7))
plt.plot(f_values, fft_values, linestyle='-', color='blue')
plt.xlabel('Frequency [Hz]', fontsize=16)
plt.ylabel('Amplitude', fontsize=16)
plt.title(f"Frequency domain of the {colss} signal", fontsize=16)
plt.vlines(0.12, 0, 0.15, colors= 'r', linestyles= '--', label= 'Cut down Freq (low pass)', )
plt.annotate("0.12", (0.11, 0.1), fontsize= 20, bbox=dict(boxstyle="round", ec=(1., 0.5, 0.5), fc=(1., 0.8, 0.8),))
for ind in peaks[0]:
plt.annotate("peak", (f_values[ind]-0.005, fft_values[ind]), bbox=dict(boxstyle="Circle", alpha= 0.4, ec=(1., 0.5, 0.5), fc=(1., 0.8, 0.8),))
plt.ylim(0, 0.15)
plt.xlim(0.005, 0.4)
plt.show()
return save_fft, save_fft_index
df = pd.read_csv(all_csv[5])
df = df.drop(columns= ['Unnamed: 0', 'Time_Arduino', 'easingdata'])
df = df.set_index('Time_Host')
FFT(df, plot= True) | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Reversing an ndarray -> neat: `fft_values[real_ind][::-1]` A slick NumPy trick | test= np.arange(1, 10)
test
test[::-1]
test[::-2]
test[::-3]
test[::1]
test[::2]
test[::3] | _____no_output_____ | MIT | fNIRS signal analysis/Randonstate/Extraction.ipynb | JulianLee310514065/Complete-Project |
Danger zone. This notebook is just here to be convenient for development | %matplotlib ipympl
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
%load_ext autoreload
%autoreload 2
from mpl_interactions import *
plt.close()
fig, ax = plt.subplots()
zoom_factory(ax)
ph = panhandler(fig)
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = (30 * np.random.rand(N)) ** 2 # 0 to 15 point radii
scat = plt.scatter(x, y, s=area, c=colors, alpha=0.5, label="yikes", cmap="viridis")
plt.legend()
plt.show()
x_new = np.random.randn(N + 1000)
y_new = np.random.randn(N + 1000)
new = np.array([x_new, y_new]).T
scat.set_offsets(new)
def f(mean):
"""
should be able to return either:
x, y
or arr where arr.shape = (N, 2 )
I should check that
"""
print(mean)
x = np.random.rand(N) * mean
y = np.random.rand(N) * mean
return x, y
fig, ax = plt.subplots()
zoom_factory(ax)
ph = panhandler(fig)
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = (30 * np.random.rand(N)) ** 2 # 0 to 15 point radii
scat = plt.scatter(x, y, s=area, c=colors, alpha=0.5, label="yikes")
plt.legend()
plt.show()
slider = widgets.FloatSlider(min=-0.5, max=1.5, step=0.01)
ax.plot([-10, 10], [0, 10])
def update(change):
# print(change)
out = f(change["new"])
out = np.asanyarray(out)
if out.shape[0] == 2 and out.shape[1] != 2:
# check if transpose is necessary
# but not way to check if shape is 2x2
out = out.T
# print(out.shape)
scat.set_offsets(out)
# ax.ignore_existing_data_limits = True
ax.update_datalim(scat.get_datalim(ax.transData))
ax.autoscale_view()
fig.canvas.draw()
slider.observe(update, names=["value"])
slider
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display
from ipywidgets import widgets
x = np.arange(10)
fig, ax = plt.subplots()
scatter = ax.scatter(x, x, label="y = a*x+b")
ax.legend()
line = ax.plot([-10, 10], [0, 1])[0]
def update_plot(a, b):
y = a * x + b
scatter.set_offsets(np.c_[x, y])
line.set_data(x - 3, y)
ax.relim()
ax.ignore_existing_data_limits = True
ax.update_datalim(scatter.get_datalim(ax.transData))
ax.autoscale_view()
fig.canvas.draw_idle()
a = widgets.FloatSlider(min=0.5, max=4, value=1, description="a:")
b = widgets.FloatSlider(min=0, max=40, value=10, description="b:")
widgets.interactive(update_plot, a=a, b=b)
N = 50
def f(mean):
x = np.random.rand(N) + mean
y = np.random.rand(N) + mean
return x, y
def f2(mean):
x = np.random.rand(N) - mean
y = np.random.rand(N) - mean
return x, y
blarg = interactive_scatter([f, f2], mean=(0, 1, 100), c=[np.random.randn(N), np.random.randn(N)])
N = 50
def f(mean):
x = np.random.rand(N) + mean - 0.5
y = np.random.rand(N) + mean - 0.5
return x, y
def c_func(x, y, mean):
return x
def s_func(x, y, mean):
return 40 / x
def ec_func(x, y, mean):
if np.random.rand() > 0.5:
return "black"
else:
return "red"
blarg = interactive_scatter(f, mean=(0, 1, 100), c=c_func, s=s_func, alpha=0.9, edgecolors=ec_func)
def alpha_func(mean):
return mean / 1
blarg2 = interactive_scatter(
(x, y), mean=(0, 1, 100), c=c_func, s=s_func, alpha=alpha_func, edgecolors=ec_func
)
N = 500
def f(mean):
x = (np.random.rand(N) - 0.5) + mean
y = 10 * (np.random.rand(N) - 0.5) + mean
return x, y
(x, y) = f(0.5)
def threshold(x, y, mean):
colors = np.zeros((len(x), 4))
colors[:, -1] = 1
deltas = np.abs(y - mean)
idx = deltas < 0.01
deltas /= deltas.max()
colors[~idx, -1] = np.clip(0.8 - deltas[~idx], 0, 1)
# print(colors)
return colors
blarg2 = interactive_scatter((x, y), mean=(0, 1, 100), c=threshold)
from inspect import signature
def someMethod(arg1, kwarg1=None):
pass
sig = signature(someMethod)
len(sig.parameters)
from matplotlib.colors import is_color_like
is_color_like(threshold(x, y, 4)[0])
scats.setscat.cmap([[0], [1], [23]]).shape
scat.cmap??
from matplotlib import colors as mcolors
mcolors.to_rgba_array("red") | _____no_output_____ | BSD-3-Clause | docs/examples/devlop/devlop-scatter.ipynb | samwelborn/mpl-interactions |
EDA: Named Entity Recognition. Named entity recognition is the process of identifying particular elements from text, such as names, places, quantities, percentages, times/dates, etc. Identifying and quantifying the general content types an article contains seems like a good predictor of what type of article it is. World news articles, for example, might mention more places than opinion articles, and business articles might have more percentages or dates than other sections. For each article, I'll count how many total mentions of people or places there are in the titles, as well as how many unique mentions for article bodies. The Stanford NLP group has published three [Named-Entity Recognizers](http://nlp.stanford.edu/software/CRF-NER.shtml). The three-class model recognizes locations, persons, and organizations, and at least for now, this is the one I'll be using. Although NERs are written in Java, there is the Pyner interface for Python, as well as an NLTK wrapper (which I'll be using). Although state-of-the-art taggers can achieve near-human levels of accuracy, this one does make a few mistakes. One obvious flaw is that if I feed the tagger unigram terms, two-part names such as "Michael Jordan" will count as ("Michael", "PERSON") and ("Jordan", "PERSON"). I can roughly correct for this by dividing my average name entity count by two if need be. Additionally, sometimes the tagger mis-tags certain people or places. For instance, it failed to recognize "Cameroon" as a location, but tagged the word "Heartbreak" in the article title "A Personal Trainer for Heartbreak" as a person. That being said, let's see what it can do on my news data. | import articledata
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import operator
data = pd.read_pickle('/Users/teresaborcuch/capstone_project/notebooks/pickled_data.pkl') | _____no_output_____ | MIT | notebooks/ner_blog_post.ipynb | teresaborcuch/teresaborcuch.github.io |
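Before wrapping the tagger in a function, a single-sentence check makes the unigram issue described above concrete. The jar and classifier paths are the same ones used in the function below; the example sentence is arbitrary and the tagged output shown in the comment is only what one would expect, not a recorded result.

```python
import os
from nltk import word_tokenize
from nltk.tag import StanfordNERTagger

os.environ['CLASSPATH'] = "/Users/teresaborcuch/stanford-ner-2013-11-12/stanford-ner.jar"
os.environ['STANFORD_MODELS'] = '/Users/teresaborcuch/stanford-ner-2013-11-12/classifiers'
st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')

# "Michael Jordan" comes back as two separate PERSON tokens, which is why
# a rough correction is to halve per-article person counts.
print(st.tag(word_tokenize("Michael Jordan visited Paris last week.")))
# e.g. [('Michael', 'PERSON'), ('Jordan', 'PERSON'), ('visited', 'O'),
#       ('Paris', 'LOCATION'), ('last', 'O'), ('week', 'O'), ('.', 'O')]
```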
Counting Named Entities: Here is my count_entities function. The idea is to count the total mentions of a person or a place in an article's body or title and save them as columns in my existing data structure. | def count_entities(data = None, title = True):
    import os
    from nltk import word_tokenize
    from nltk.tag import StanfordNERTagger
    # set up tagger
os.environ['CLASSPATH'] = "/Users/teresaborcuch/stanford-ner-2013-11-12/stanford-ner.jar"
os.environ['STANFORD_MODELS'] = '/Users/teresaborcuch/stanford-ner-2013-11-12/classifiers'
st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')
tagged_titles = []
persons = []
places = []
if title:
for x in data['title']:
tokens = word_tokenize(x)
tags = st.tag(tokens)
tagged_titles.append(tags)
for pair_list in tagged_titles:
person_count = 0
place_count = 0
for pair in pair_list:
if pair[1] == 'PERSON':
person_count +=1
elif pair[1] == 'LOCATION':
place_count +=1
else:
continue
persons.append(person_count)
places.append(place_count)
data['total_persons_title'] = persons
data['total_places_title'] = places
else:
for x in data['body']:
tokens = word_tokenize(x)
tags = st.tag(tokens)
tagged_titles.append(tags)
for pair_list in tagged_titles:
person_count = 0
place_count = 0
for pair in pair_list:
if pair[1] == 'PERSON':
person_count +=1
elif pair[1] == 'LOCATION':
place_count +=1
else:
continue
persons.append(person_count)
places.append(place_count)
data['total_persons_body'] = persons
data['total_places_body'] = places
return data
# Count people and places in article titles and save as new columns
# Warning - this is super slow!
data = articledata.count_entities(data = data, title = True)
data.head(1)
# pickle the file to avoid having to re-run this for future analyses
data.to_pickle('/Users/teresaborcuch/capstone_project/notebooks/ss_entity_data.pkl')
sns.set_style("whitegrid", {'axes.grid' : False})
fig = plt.figure(figsize = (12, 5))
ax1 = fig.add_subplot(1,2,1)
ax1.hist(data['total_persons_title'])
ax1.set_xlabel("Total Person Count in Article Titles ")
ax1.set_ylim(0,2500)
ax1.set_xlim(0,6)
ax2 = fig.add_subplot(1,2,2)
ax2.hist(data['total_places_title'])
ax2.set_xlabel("Total Place Count in Article Titles")
ax2.set_ylim(0, 2500)
ax2.set_xlim(0,6)
plt.show() | _____no_output_____ | MIT | notebooks/ner_blog_post.ipynb | teresaborcuch/teresaborcuch.github.io |
These graphs indicate that person and place counts from article titles are both strongly right-skewed. It might be more interesting to compare mean person and place counts among different sections. | data.pivot_table(
index = ['condensed_section'],
values = ['total_persons_title', 'total_places_title']).sort_values('total_persons_title', ascending = False) | _____no_output_____ | MIT | notebooks/ner_blog_post.ipynb | teresaborcuch/teresaborcuch.github.io |
From this pivot table, it seems there are a few distinctions to be made between different sections. Entertainment and sports contain more person mentions on average than any other sections, and world news contains more places in the title than other sections. Finding Common Named Entities: Now, I'll try to see which people and places get the most mentions in each section. I've written an evaluate_entities function that creates a dictionary of counts for each unique person or place in a particular section or for a particular source. | def evaluate_entities(data = None, section = None, source = None):
section_mask = (data['condensed_section'] == section)
source_mask = (data['source'] == source)
if section and source:
masked_data = data[section_mask & source_mask]
elif section:
masked_data = data[section_mask]
elif source:
masked_data = data[source_mask]
else:
masked_data = data
    import os
    from nltk import word_tokenize
    from nltk.tag import StanfordNERTagger
    # set up tagger
os.environ['CLASSPATH'] = "/Users/teresaborcuch/stanford-ner-2013-11-12/stanford-ner.jar"
os.environ['STANFORD_MODELS'] = '/Users/teresaborcuch/stanford-ner-2013-11-12/classifiers'
st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')
# dictionaries to hold counts of entities
person_dict = {}
place_dict = {}
for x in masked_data['body']:
tokens = word_tokenize(x)
tags = st.tag(tokens)
for pair in tags:
if pair[1] == 'PERSON':
if pair[0] not in person_dict.keys():
person_dict[pair[0]] = 1
else:
person_dict[pair[0]] +=1
elif pair[1] == 'LOCATION':
if pair[0] not in place_dict.keys():
place_dict[pair[0]] = 1
else:
place_dict[pair[0]] += 1
return person_dict, place_dict | _____no_output_____ | MIT | notebooks/ner_blog_post.ipynb | teresaborcuch/teresaborcuch.github.io |
Commonly Mentioned People in World News and Entertainment | world_persons, world_places = articledata.evaluate_entities(data = data, section = 'world', source = None)
# get top 20 people from world news article bodies
sorted_wp = sorted(world_persons.items(), key=operator.itemgetter(1))
sorted_wp.reverse()
sorted_wp[:20] | _____no_output_____ | MIT | notebooks/ner_blog_post.ipynb | teresaborcuch/teresaborcuch.github.io |
Perhaps as expected, Trump is the most commonly mentioned person in world news, with 1,237 mentions in 467 articles, with Obama and Putin coming in second and third. It's interesting to note that most of these names are political figures, but since the tagger only receives unigrams, partial names and first names are mentioned as well. | entertainment_persons, entertainment_places = articledata.evaluate_entities(data = data, section = 'entertainment', source = None)
sorted_ep = sorted(entertainment_persons.items(), key=operator.itemgetter(1))
sorted_ep.reverse()
sorted_ep[:20] | _____no_output_____ | MIT | notebooks/ner_blog_post.ipynb | teresaborcuch/teresaborcuch.github.io |
Now, I'll compare the top 20 people mentioned in entertainment articles. Trump still takes the number one spot, but interestingly, he's followed by a string of first names. NLTK provides a corpus of male and female-tagged first names, so counting the number of informal mentions or even the ratio of men to women might be a useful feature for classifying articles. Commonly Mentioned Places in World News and EntertainmentCompared to those from the world news section, the locations in the entertainment section are mostly in the United States: New York City (pieced together from "New", "York", and "City") seems to be the most common, but Los Angeles, Manhattan, and Chicago also appear. There are a few international destinations (fashionable ones like London and Paris and their respective countries), but nowhere near as many as in the world news section, where, after the U.S, Iran, China, and Russia take the top spots. | # get top 20 places from world news article bodies
sorted_wp = sorted(world_places.items(), key=operator.itemgetter(1))
sorted_wp.reverse()
sorted_wp[:20]
# get top 20 places from entertainment article bodies
sorted_ep = sorted(entertainment_places.items(), key=operator.itemgetter(1))
sorted_ep.reverse()
sorted_ep[:20] | _____no_output_____ | MIT | notebooks/ner_blog_post.ipynb | teresaborcuch/teresaborcuch.github.io |
Analysis of Red Wine Quality Index 1. Reading the data and importing the libraries 2. EDA 3. Correlation Matrix 4. Modeling - Linear Model - Weighted KNN - Random Forest - Conditional Inference Random Forest - Decision Tree Model 5. Modeling Results Table 6. Conclusion 1. Reading the data and importing the libraries | library(tidyverse)
library(grid)
library(gridExtra)
library(e1071)
library(caret)
df1 <- read.csv("C:/Users/kausha2/Documents/Data Analytics/DataSets/winequality/winequality/winequality-red.csv", sep = ";")
head(df1)
summary(df1$quality) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
Creating a new variable --> WineAttribute : Good (1) or bad (0) for binary classification | df1$wine_attribute <- ifelse(df1$quality > 5, 1, 0 )
head(df1) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
2. EDA How is the wine quality distributed? | qplot(df1$quality, geom="histogram", binwidth = 1) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
- The dataset is dominated by values 5 and 6. There are fewer wines with a quality of 4 or 7, and hardly any wines with values of 3 or 8. - There are two options: either split the quality variable into 3 parts by quantiles (top 20, middle 60 and bottom 20) or split based on the mean, i.e. good wines are those with values > 5 and bad wines are those with values less than or equal to 5. Looking at the different histograms to check the shape of the distributions | p1 <- qplot(df1$pH, geom="histogram", binwidth = 0.05)
p2 <- qplot(df1$alcohol, geom="histogram",binwidth = 0.099)
p3 <- qplot(df1$volatile.acidity, geom="histogram",binwidth = 0.05)
p4 <- qplot(df1$citric.acid, geom="histogram",binwidth = 0.05)
grid.arrange(p1,p2,p3,p4, ncol=2, nrow=2) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
- We see that pH looks normally distributed. - Volatile acidity, alcohol, and citric acid are positively skewed but don't seem to follow a particular distribution. | p1 <- qplot(df1$residual.sugar, geom="histogram", binwidth = 0.1)
p2 <- qplot(df1$chlorides, geom="histogram",binwidth = 0.01)
p3 <- qplot(df1$density, geom="histogram",binwidth = 0.001)
p4 <- qplot(df1$free.sulfur.dioxide, geom="histogram",binwidth = 1)
grid.arrange(p1,p2,p3,p4, ncol=2, nrow=2) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
- Density seems to follow a normal distribution. - Residual sugar and chlorides seem to follow a normal distribution initially but flatten out later. - Free sulfur dioxide content appears positively skewed. | p1 <- qplot(df1$pH, geom="density")
p2 <- qplot(df1$alcohol, geom="density")
p3 <- qplot(df1$volatile.acidity, geom="density")
p4 <- qplot(df1$citric.acid, geom="density")
grid.arrange(p1,p2,p3,p4, ncol=2, nrow=2)
p1 <- qplot(df1$residual.sugar, geom="density")
p2 <- qplot(df1$chlorides, geom="density")
p3 <- qplot(df1$density, geom="density")
p4 <- qplot(df1$free.sulfur.dioxide, geom="density")
grid.arrange(p1,p2,p3,p4, ncol=2, nrow=2) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
- The kernel density plots seem to agree with the histograms and our conclusions | p1 <- ggplot(df1, aes(x="pH", y=pH)) + stat_boxplot(geom ='errorbar') + geom_boxplot()
p2 <- ggplot(df1, aes(x="alcohol", y=alcohol)) + stat_boxplot(geom ='errorbar') + geom_boxplot()
p3 <- ggplot(df1, aes(x="volatile.acidity", y=volatile.acidity)) + stat_boxplot(geom ='errorbar') + geom_boxplot()
p4 <- ggplot(df1, aes(x="citric.acid", y=citric.acid)) + stat_boxplot(geom ='errorbar') + geom_boxplot()
grid.arrange(p1,p2,p3,p4, ncol=2, nrow=2) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
- pH and acidity seem to have a lot of outliers. - The pH of an acidic substance is usually below 5, but for wines it concentrates in the range between 2.7 and 4.0. - The alcohol content is between 8.4 and 15, but there seem to be many outliers. The age of the wine also affects its alcohol content, which could explain the outliers, but since we don't have an age variable there is no way to check it. 3. Correlation Matrix Checking the correlation between variables (sourced from: http://www.sthda.com/english/wiki/ggplot2-quick-correlation-matrix-heatmap-r-software-and-data-visualization) | #data(attitude)
df2 <- df1
df2$wine_attribute <- NULL
library(ggplot2)
library(reshape2)
#(cor(df1) ) # correlation matrix
cormat <- cor(df2)
melted_cormat <- melt(cor(df2))
#ggplot(data = melted_cormat, aes(x=Var1, y=Var2, fill=value)) +
# geom_tile()
# Get lower triangle of the correlation matrix
get_lower_tri<-function(cormat){
cormat[upper.tri(cormat)] <- NA
return(cormat)
}
# Get upper triangle of the correlation matrix
get_upper_tri <- function(cormat){
cormat[lower.tri(cormat)]<- NA
return(cormat)
}
upper_tri <- get_upper_tri(cormat)
#upper_tri
# Melt the correlation matrix
library(reshape2)
melted_cormat <- melt(upper_tri, na.rm = TRUE)
# Heatmap
reorder_cormat <- function(cormat){
# Use correlation between variables as distance
dd <- as.dist((1-cormat)/2)
hc <- hclust(dd)
cormat <-cormat[hc$order, hc$order]
}
# Reorder the correlation matrix
cormat <- reorder_cormat(cormat)
upper_tri <- get_upper_tri(cormat)
# Melt the correlation matrix
melted_cormat <- melt(upper_tri, na.rm = TRUE)
# Create a ggheatmap
ggheatmap <- ggplot(melted_cormat, aes(Var2, Var1, fill = value))+
geom_tile(color = "white")+
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1,1), space = "Lab",
name="Pearson\nCorrelation") +
theme_minimal()+ # minimal theme
theme(axis.text.x = element_text(angle = 45, vjust = 1,
size = 12, hjust = 1))+
coord_fixed()
# Print the heatmap
#print(ggheatmap)
ggheatmap +
geom_text(aes(Var2, Var1, label = round(value,2) ), color = "black", size = 3) +
theme(
axis.title.x = element_blank(),
axis.title.y = element_blank(),
panel.grid.major = element_blank(),
panel.border = element_blank(),
panel.background = element_blank(),
axis.ticks = element_blank(),
legend.justification = c(1, 0),
legend.position = c(0.6, 0.7),
legend.direction = "horizontal")+
guides(fill = guide_colorbar(barwidth = 7, barheight = 1,
title.position = "top", title.hjust = 0.5)) | Warning message:
"package 'reshape2' was built under R version 3.3.2"
Attaching package: 'reshape2'
The following object is masked from 'package:tidyr':
smiths
| Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
- The values in red are positively correlated while those in blue are negatively correlated; the intensity of the color indicates the strength of the correlation. - Quality has a negative correlation with volatile acidity and total sulfur dioxide content, and a positive correlation with alcohol content and citric acid. - pH and fixed acidity have a strong negative correlation. - Residual sugar and sulphates have a very slight positive correlation. - Free sulfur dioxide and total sulfur dioxide are strongly positively correlated (as expected), while fixed acidity and volatile acidity are negatively correlated, an interesting fact that could be used for modeling. | p1 <- ggplot(df1, aes(x= volatile.acidity, y= quality)) + geom_point() + geom_smooth(method=lm)
p2 <- ggplot(df1, aes(x= total.sulfur.dioxide, y= quality)) + geom_point() + geom_smooth(method=lm)
p3 <- ggplot(df1, aes(x= alcohol, y= quality)) + geom_point() + geom_smooth(method=lm)
p4 <- ggplot(df1, aes(x= citric.acid, y= quality)) + geom_point() + geom_smooth(method=lm)
#p5 <- ggplot(df1, aes(x= sulphates, y= quality)) + geom_point() + geom_smooth(method=lm)
grid.arrange(p1,p2,p3,p4, ncol=2, nrow=2) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
This confirms our analysis from the correlation matrix. 4. Modeling We'll be using 10-fold cross-validation: we perform the 10-fold CV on the learning dataset and then try to predict the validation dataset. | # Train Test Split
m <- dim(df1)[1] # Number of rows in the wine data
val <- sample(1:m, size = round(m/3), replace = FALSE, prob = rep(1/m, m))
df1.learn <- df1[-val,] # train
df1.valid <- df1[val,] # test
# 10 Fold CV
library(caret)
# define training control
train_control <- trainControl(method="cv", number=10)
#trControl <- train_control | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
The linear model : trying to predict wine quality from the variables | head(df1,1)
model1 <- lm(as.numeric(quality)~ 0 + volatile.acidity + chlorides
+ log(free.sulfur.dioxide) + log(total.sulfur.dioxide) + density + pH + sulphates + alcohol, data = df1)
summary(model1)
df1.valid$prediction <- predict(model1,df1.valid)
df1.valid$prediction_lm <- round(df1.valid$prediction)
x <- confusionMatrix(df1.valid$prediction_lm, df1.valid$quality)
acc_lm <- x$overall[1]
print(c("accuracy of linear model is :", (acc_lm*100) ))
ggplot(df1.valid) + geom_point(aes(pH, quality), color = "red") + geom_point(aes(pH, prediction)) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
From the above graph we see that this is not what we intended, so we move on to classification models. Weighted KNN Using multiple values of K, distance metrics, and kernels | require(kknn)
model2 <- kknn( factor(wine_attribute) ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar
+ chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH
+ sulphates + alcohol, df1.learn, df1.valid)
x <- confusionMatrix(df1.valid$wine_attribute, model2$fit)
y <- (x$table)
y
acc_kknn1 <- (y[1,1]+y[2,2]) / (y[1,1]+y[1,2]+y[2,2]+y[2,1])
print(c("accuracy of KKNN is :", round(acc_kknn1*100,3) ))
model3 <- train.kknn(factor(wine_attribute) ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar
+ chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH
+ sulphates + alcohol, df1.learn,trControl = train_control, kmax = 15, kernel = c("triangular", "epanechnikov", "biweight", "triweight", "cos", "inv", "gaussian", "rank", "optimal"), distance = 1)
summary(model3)
x <- confusionMatrix(predict(model3, df1.valid), df1.valid$wine_attribute)
y <- (x$table)
y
acc_kknn2 <- (y[1,1]+y[2,2]) / (y[1,1]+y[1,2]+y[2,2]+y[2,1])
print(c("accuracy of KKNN is :", round(acc_kknn2*100,3) ))
model4 <- train.kknn(factor(wine_attribute) ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar
+ chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH
+ sulphates + alcohol, df1.learn,trControl = train_control, kmax = 15, kernel = c("triangular", "epanechnikov", "biweight", "triweight", "cos", "inv", "gaussian", "rank", "optimal"), distance = 5)
summary(model4)
x <- confusionMatrix(predict(model4, df1.valid), df1.valid$wine_attribute)
y <- (x$table)
y
acc_kknn3 <- (y[1,1]+y[2,2]) / (y[1,1]+y[1,2]+y[2,2]+y[2,1])
print(c("accuracy of KKNN is :", round(acc_kknn3*100,3) )) |
Call:
train.kknn(formula = factor(wine_attribute) ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar + chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH + sulphates + alcohol, data = df1.learn, kmax = 15, distance = 5, kernel = c("triangular", "epanechnikov", "biweight", "triweight", "cos", "inv", "gaussian", "rank", "optimal"), trControl = train_control)
Type of response variable: nominal
Minimal misclassification: 0.206379
Best kernel: inv
Best k: 13
| Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
Weighted KKNN gave us decent results. Let's see if we can improve on it. Tree Models Random Forest | library(randomForest)
model5 <- randomForest(as.factor(wine_attribute) ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar
+ chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH
+ sulphates + alcohol, df1.learn,trControl = train_control, importance=TRUE, ntree=2000)
df1.valid$prediction <- predict(model5, df1.valid)
x <- confusionMatrix(df1.valid$prediction, df1.valid$wine_attribute)
y <- (x$table)
y
acc_rf <- (y[1,1]+y[2,2]) / (y[1,1]+y[1,2]+y[2,2]+y[2,1])
print(c("accuracy of Random Forest is :", round(acc_rf*100,3) ))
importance(model5)
varImpPlot(model5) # importance of each variable | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
A random forest ensemble of conditional inference trees (cforest) | library(party)
model5x <- cforest(as.factor(wine_attribute) ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar
+ chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH
+ sulphates + alcohol, df1.learn, controls=cforest_unbiased(ntree=2000, mtry=3))
df1.valid$pred_cforest <- predict(model5x, df1.valid, OOB=TRUE, type = "response")
x <- confusionMatrix(df1.valid$pred_cforest, df1.valid$wine_attribute)
y <- (x$table)
y
acc_cf <- (y[1,1]+y[2,2]) / (y[1,1]+y[1,2]+y[2,2]+y[2,1])
print(c("accuracy of Conditional Forest is :", round(acc_cf*100,3) )) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
Decision Trees using Rpart | library(rattle)
library(rpart.plot)
library(RColorBrewer)
library(rpart)
rpart.grid <- expand.grid(.cp=0.2)
model6 <- train(as.factor(wine_attribute) ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar
+ chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH
+ sulphates + alcohol, df1.learn, method="rpart",trControl = train_control,tuneGrid=rpart.grid)
# How one of these trees look like
model6s <- rpart(as.factor(wine_attribute) ~ fixed.acidity + volatile.acidity + citric.acid + residual.sugar
+ chlorides + free.sulfur.dioxide + total.sulfur.dioxide + density + pH
+ sulphates + alcohol, df1.learn, method = "class")
fancyRpartPlot(model6s)
df1.valid$pred_dtree <- predict(model6, df1.valid)
x <- confusionMatrix(df1.valid$pred_dtree, df1.valid$wine_attribute)
y <- (x$table)
y
acc_dt <- (y[1,1]+y[2,2]) / (y[1,1]+y[1,2]+y[2,2]+y[2,1])
print(c("accuracy of Decision Tree classifier is :", round(acc_dt*100,3) )) | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
5. Modeling Results Table | Model_Name <- c("Linear Model", "Simple_KKNN","KKNN_dist1","KKNN_dist2", "RandomForest", "Conditional Forest", "Decision Tree")
Overall_Accuracy <- c(acc_lm*100, acc_kknn1*100, acc_kknn2*100, acc_kknn3*100, acc_rf*100, acc_cf*100, acc_dt*100)
final <- data.frame(Model_Name,Overall_Accuracy)
final$Overall_Accuracy <- round( final$Overall_Accuracy, 3)
final | _____no_output_____ | Apache-2.0 | Red Wine Quality Analysis.ipynb | RagsX137/Red-Wine-Quality-Analysis |
Optimal treatment for varied lengths of time horizons | cs = c(3.6,3.2)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
cln_nms = c("T","time","sigma","resistance","size")
read.table(file="../figures/draft/Fig7X-trjs_optimal.csv",header=FALSE,sep=",",col.names=cln_nms) %>%
as.data.frame -> df_optimal
print("Optimal")
df_optimal %>% tail(1) %>% .$size
sz = 1.5; fc = .5
x0 = .4; len_seg = 1.8
y0 = 1.9
p1 = ggplot() +
geom_hline(yintercept=1,size=.25,color="black",linetype="dashed")
idx = 1
for (T0 in rev(unique(df_optimal %>% filter(T<=240) %>% .$T))) {
df_optimal %>% filter(T==T0) %>% mutate(time=(max(time)-time)/30,resistance=100*resistance) -> df_optimal0
size0 = filter(df_optimal0,time==min(time))$size
df_optimal0$size = size0/df_optimal0$size
# print(df_optimal0 %>% arrange(time))
p1 = p1 +
geom_vline(xintercept=T0/30,size=.25,color="black",linetype="dashed") +
geom_path(data=df_optimal0,aes(x=time,y=size),color=clrs[1],size=sz) +
geom_path(data=df_optimal0,aes(x=time,y=size),color=clrs_plt[idx],size=fc*sz)
idx = idx + 1
}
p1 = p1 +
theme_bw(base_size=12,base_family='Times') +
labs(x="Time horizon (months)",y="Fold change in tumor size") +
scale_color_gradientn(limits=c(0,1),oob=squish,
colours=clrs_plt,
values=seq(0,1,length.out=6)) +
scale_x_continuous(expand=c(0,0),breaks = seq(0,18,1)) +
scale_y_continuous(expand=c(0,0)) +
coord_cartesian(ylim=c(0.7,1.5)) +
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
plot.margin = unit(c(.5,.5,1,1.3),"lines"),
legend.background = element_rect(fill="white"))
p1
ggsave(plot=p1,width=cs[1],height=cs[2],filename="../figures/draft/FigSXa.pdf",useDingbats=FALSE)
max(df_optimal$T/30)
df_optimal %>% group_by(T) %>% filter(time==T) %>% ungroup %>% mutate(T = T/30) %>%
ggplot(aes(x=T,y=1e9*size)) +
geom_path() +
labs(x='Time horizon (months)', y=expression('Final tumour size (initial '*10^9*' cells)')) +
scale_x_continuous(expand=c(0,0),breaks = seq(0,48,12)) +
scale_y_continuous(expand=c(0,0),breaks = c(1e9,1e10,2e10)) +
coord_cartesian(xlim=c(0,max(df_optimal$T/30)+.4),ylim=c(5e8,2.2e10)) +
# scale_y_log10(breaks = c(1e8,1e9,1e10),
# labels = trans_format("log10", math_format(10^.x))) +
theme_bw(base_size=12,base_family='Times') +
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
plot.margin = unit(c(.5,.5,1,1.3),"lines"),
legend.background = element_rect(fill="white")) -> p1
p1
ggsave(plot=p1,width=cs[1],height=cs[2],filename="../figures/draft/FigSXb.pdf",useDingbats=FALSE) | _____no_output_____ | MIT | scripts/.ipynb_checkpoints/E. Figures 6 and 7 [R]-checkpoint.ipynb | aakhmetz/AkhmKim2019Scripts |
Another figure for the solution of the optimal control problem | cs = c(4.2,2.75)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
cln_nms = c("trajectory","time","sigma","resistance","size")
read.table(file="../figures/draft/Fig6-trjs_optimal-final.csv",header=TRUE,sep=",",col.names=cln_nms) %>%
as.data.frame %>% mutate(time=time/30,resistance=100*resistance) -> df
# clrs = brewer.pal(9,"Set1")
sz = 1.5; fc = 0.5
x0 = 1.4; len_seg = 1.8
tmx = 4
p2 = df %>% filter(trajectory!=0) %>%
ggplot(aes(x=tmx-time,y=resistance,group=factor(trajectory))) +
geom_path(data=filter(df,trajectory==0),color="black",size=sz*.25,linetype=5) +
geom_path(data=filter(df,trajectory==1),color="darkgrey",size=sz) +
geom_path(data=filter(df,trajectory==1),aes(color=sigma),lineend="round",size=sz*fc) +
geom_path(data=filter(df,trajectory==2),color=clrs[3],size=sz) +
geom_path(data=filter(df,trajectory==2),aes(color=sigma),lineend="round",size=sz*fc) +
geom_path(data=filter(df,trajectory==3),color=clrs[2],size=sz) +
geom_path(data=filter(df,trajectory==3),aes(color=sigma),lineend="round",size=sz*fc) +
# geom_path(aes(color=sigma),lineend="round",size=sz*fc) +
theme_bw(base_size=12,base_family='Times') +
labs(x="Time until the end of the treatment",y="Intratumoral resistance (%)") +
scale_color_gradientn(limits=c(0,1),oob=squish,
colours=clrs_plt,
values=seq(0,1,length.out=6)) +
scale_x_continuous(expand=c(0,0),breaks=c(0,tmx),labels=c("",0)) +
scale_y_continuous(expand=c(0,0)) +
coord_cartesian(ylim=c(0,100),xlim=c(0,tmx)) +
guides(color=guide_colourbar(title="Treatment\nintensity",title.position="top",title.vjust=2)) +
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
legend.text=element_text(size=8.5),
legend.key.height = unit(.8, 'lines'),
legend.title=element_text(size=10),
legend.direction = "vertical",
axis.title.x = element_text(vjust=0),
legend.key = element_rect(size = 5),
plot.margin = unit(c(.5,.5,1,.5),"lines")
)
p2
ggsave(plot=p2,width=cs[1],height=cs[2],filename="../figures/draft/Fig4.pdf",useDingbats=FALSE) | _____no_output_____ | MIT | scripts/.ipynb_checkpoints/E. Figures 6 and 7 [R]-checkpoint.ipynb | aakhmetz/AkhmKim2019Scripts |
Accessing Items 1. Write a program to retrieve the keys/values of a dictionary. **Use the dictionary** dictionary2 = {0:3, 'x':5, 1:2} | dictionary2 = {0:3, 'x':5, 1:2}
dictionary2 | _____no_output_____ | MIT | Arvind/08 - Accessing Items.ipynb | Arvind-collab/Data-Science |
2. Write a program to get the value for 'Age' from the dictionary. **Use the dictionary** dictionary3 = {'Weight': 67, 'BMI': 25, 'Age': 27, 'Profession': 'CA'} | dictionary3 = {'Weight': 67, 'BMI': 25, 'Age': 27, 'Profession': 'CA'}
dictionary3['Age'] | _____no_output_____ | MIT | Arvind/08 - Accessing Items.ipynb | Arvind-collab/Data-Science |
 Kindle eBook Recommendation System: Data PreparationAuthors: Daniel Burdeno --- Contents- Overview- Business Understanding - Data Understanding - Data Preparation - Imports and Functions - Meta Data - Review Data - CSV Files Overview > This project aims to build a two system approach to recommending Kindle eBook's to both existing reviewers and new users looking to find similar books. For existing reviewers a collaborative approach is taken by comparing similar reviewer profiles based on exisitng ratings. A content-based approach is taken in order to recommend books based on similar review text data and can be used by anyone. Business Understanding > Currently eBooks are outsold by print books at about a 4 to 1 ratio. In 2020 there was 191 million eBooks sold. While Amazon holds over 70% of the market in eBooks via their kindle platform there is a large untapped potential for increasing eBook sales and promoting the use of eReaders compared to print. By utilzing quality recommendation systems Amazon can boost the interest and useablity of eBooks thus improving upon this market. The kindle platform and eBooks in general are incredidly accesibile for anyone with a tablet, smartphone, computer, or eReader. These eBooks can be immediatley purchased from a multitude of platforms and are able to read within minutes of purchase, which is far superior to obtaining a print book. This notion of real time purchase and useablily plays greater into Amazon's one click purchase philsophy.> The kindle store is also full of cheap reads, with some eBooks even being free with certain subsripctions like prime and unlimited. A broad span of genres are available ranging from things like self-help books, cookbooks, and photography books to more traditional literature genres like Science Fiction & Fantasy and Romance novels. A final huge plus for the advocacy of eBooks is the ease in which readers can rate and reviews books they have either just read or already read. This can all be done via the same platform used to access and read the eBook (aka kindle). Ultimately this plays into the collection of more review and rating data wich in turn can attribute to better performing recommendations for each indiviudal user. A quality recommendation system can thus create a positive feedback loop that not only enhances itself but promotoes the increase in eBook sales across the board. Data Understanding > Data for this project was pulled from a compiled dataset of Amazon kindle store reviews and meta data in two seperate JSON files. The datasets can be found [here](https://nijianmo.github.io/amazon/index.html). I utlized the smaller dataset known as 5-core which contained data for products and reviewers with at least 5 entries. Data from the Kindle Store sets were used, both the 5-core review data and the full metadata file. Due to the large size of these datasets I downloaded them locally and saved to an external repository outside of github.> Data Instructions: Naviatged through the linked page requires an entry form of basic information (name, email) in order to begin downloads. Given the size of the two datasets allow several minutes for the downloads to occur. Once saved to your local drive (I placed the data one repository above the linked github repository) the JSON files can be loaded into jupyter notebooks via pandas (pd.read_json) using the compression='gz' and lines=True. Due to the size of the review text dataset be prepared for a large memory usage when loading it in. 
Data Preparation Imports and Functions > For data preparation and cleaning I primarily used built-in pandas methods and functions, utlizing numpy as well. Basic visualiztions were created with matplotlib. Warnings is imported to ignore the copy/slice warning when slicing a dataframe. I created a function that returns the third value in a list which was passed into the larger function used to clean the meta data file. See the function for detailed description of what is occuring. This function was updated as I explored the dataset and outputs. I also set a style for matplotlib for consistency across notebooks and visualations. | import pandas as pd
import numpy as np
import matplotlib as plt
import warnings
# Show plots in notebook
%matplotlib inline
warnings.filterwarnings('ignore')
# Set matplotlib style to match other notebook graphics
plt.style.use('fast')
# Function that takes a list and returns the third value, used in a .apply below to iterate through a dataframe column
def getthirdValue(aList):
return aList[2:3]
# Compiled meta_data cleaning for ease of use, takes in a dataframe and returns the 'cleaned' version
def meta_clean(data):
# Creating a new genre column based on the category column, third value in the list is the one we want
data['genre'] = data['category'].apply(getthirdValue)
# Change into single string and remove html code
data['genre'] = data['genre'].apply(lambda x: ','.join(map(str, x)))
data['genre'] = data['genre'].str.replace('amp;', '')
# Retrieve print length from the details columns dictionary and return as new column
print_length = [d.get('Print Length:') for d in data['details']]
data['print_length'] = print_length
# Returns only the print length minus any text
data['print_length'] = data['print_length'].str.extract('(\d+)', expand=False)
data['print_length']= data['print_length'].astype(float)
# Retrieve word wise feature from the details columns dictionary and return as new column
word_wise = [d.get('Word Wise:') for d in data['details']]
data['word_wise'] = word_wise
# Retrieve lending feature from the details columns dictionary and return as new column
lending = [d.get('Lending:') for d in data['details']]
data['lending'] = lending
# Transform word wise and lending columns into binary values using dictionary and .map
bool_dict = {'Enabled': 1, 'Not Enabled': 0}
data['word_wise'] = data['word_wise'].map(bool_dict)
data['lending'] = data['lending'].map(bool_dict)
# Clean brand column, removing unnecessary text, and rename to author as this is what it represents
data['brand'] = data['brand'].str.replace("Visit Amazon's", '')
data['brand'] = data['brand'].str.replace("Page", '')
data.rename(columns={'brand': 'author'}, inplace=True)
# Remove/replace unnecessary text in the title column, including html code
data['title'] = data['title'].str.replace("amp;", "", regex=True)
data['title'] = data['title'].str.replace("'", "'", regex=True)
data['title'] = data['title'].str.replace(" - Kindle edition", "")
data['title'] = data['title'].str.replace(" eBook", "")
# Dropping unnecessary/incomplete columns
data.drop(columns=['details', 'category', 'tech1', 'description', 'fit', 'tech2',
'feature', 'rank', 'also_view', 'main_cat',
'similar_item', 'date', 'price', 'imageURL',
'imageURLHighRes', 'also_buy'], inplace=True)
return data.head() | _____no_output_____ | MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
Load Data > As stated in the Data Understanding section, we have two separate JSON files to load in: one containing individual user reviews and the other containing book meta data. The meta data will need to be heavily cleaned to extract relevant information for this project. These large initial data files were loaded in from a local folder external to the repository for the project due to their size and necessary compression. | # Meta data load-in, file stored as compressed JSON, each line is a JSON entry hence the lines=True argument
path = 'C:\\Users\\danie\\Documents\\Flatiron\\Projects\\Capstone\\Rawdata\\meta_Kindle_Store.gz'
df_meta = pd.read_json(path, compression='gzip', lines=True)
# Review data load-in, file stored as compressed JSON, each line is a JSON entry hence the lines=True argument
path = 'C:\\Users\\danie\\Documents\\Flatiron\\Projects\\Capstone\\Rawdata\\Kindle_Store_5.gz'
df_rev = pd.read_json(path, compression='gzip', lines=True) | _____no_output_____ | MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
Meta Data | df_meta.info()
df_meta.head() | _____no_output_____ | MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
> Taking a look at the .info() and .head() of the meta data shows a plethora of cleaning that needs to occur. There are a multitude of unusable columns with blank information including tech1, tech2, fit, description, rank, main_cat, price, and image columns. Within the category column (a list) and the details column (a dictionary) I need to pull out relevant information. Further exploration online shows that the brand column is actually the eBook author, and useful information will need to be extracted from it as well.> Each entry in the category column seen below is a list which needs to be dealt with in order to extract the correct information. Categories also contained things that are not eBooks, so I removed any category that does not denote an eBook. It is also clear that the third value of the list describes what can be treated as the genre of the eBook, so I took the third value from this list in order to create a new genre column.> Each entry in the details column seen below is a dictionary. Taking a look at the first row shows me that I can pull out useful information from this dictionary, including print_length and two kindle features. Word_wise designates whether the book has built-in dictionary support, and lending designates whether the eBook can be lent to other users. The product ID ('asin') is already another column in the dataframe, so it will not be extracted. | # Taking a look at what is within the category columns, it contains lists
df_meta['category'].value_counts()
# Using a dual nested lambda function and .apply I can subset the dataframe to only contain categories that denote eBooks
df_meta = df_meta.loc[
lambda df: df.category.apply(
lambda l: 'Kindle eBooks' in l)]
# Pulling out the first row of the details dictionary to explore
details = list(df_meta['details'])
details[0]
# Running my compiled clean function on meta data, see above for descriptions
meta_clean(df_meta)
# Checking the clean I still have several unwanted entries within genre including a blank one
df_meta.genre.value_counts()
# Subsetting to remove the genres with less than 1000 entries
df_meta = df_meta[df_meta['genre'].map(df_meta['genre'].value_counts()) > 1000]
# Remove the blank genre entry
df_meta = df_meta.loc[df_meta['genre'] != ''] | _____no_output_____ | MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
> After running the clean function on my meta data I noticed that there were still blank entries for things like title, so I should check for null values. However, a lot of these were just blank entries not actually denoted as NaN, so I had to parse through the dataframe and replace any blank entry with NaN in order to accurately find and address them. Print length NaNs were filled in using the mean value of each genre. The rest of the entries were dropped, since I needed an accurate title and author in order to make recommendations. | # Converting blank entries to NaN using regex expression and looking at nulls
df_meta = df_meta.replace(r'^\s*$', np.nan, regex=True)
df_meta.isna().sum()
# Dropping nulls and using groupby to sort by genre so I can fill in any print_length nulls based on mean genre value
df_meta['print_length'] = df_meta.groupby(['genre'], sort=False)['print_length'].apply(lambda x: x.fillna(x.mean()))
df_meta.dropna(inplace=True)
# Checking for any duplicate book entries, none were found. Asin value denotates a unique Amazon product identifier
df_meta.asin.value_counts()
# Creating a list of the product numbers in match with review dataset
asin_list = df_meta['asin'].tolist() | _____no_output_____ | MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
Review Data | df_rev.info()
df_rev.head()
# I thought style would contain genre but it did not, entries will be subsetted using the meta data, so ignore column
df_rev['style'].value_counts() | _____no_output_____ | MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
> Taking a look at the review data shows some cleaning that needs to occur as well, including dropping unneeded columns and exploring several others. The overall column denotes the rating that a user gave to the item (critical information). Verified is either True or False; looking it up, it designates whether a reviewer was verified to not have received the book for free or written a review for monetary gain. I considered it important to only include verified reviews in order to not introduce positive bias into the system. Only a small set of the reviews did not contain any text, which is critical to my content-based system, so they will be dropped from the data. > This dataset was supposed to contain only products and reviewers that had 5 or more entries (5-core); however, upon exploration I found this to not be true. I kept the larger dataset for review text and content-based recommendations, but I also subsetted the data to only contain reviewers that had made 5 or more reviews for my collaborative filtering model. This is because collaborative filtering requires reviewer profiles to be fleshed out and will not work well at all with only a few reviews. | # Matching reviewed products with products with meta data
df_rev = df_rev[df_rev['asin'].isin(asin_list)]
df_rev.verified.value_counts()
# Dropping any rows that were not verified
indexNames = df_rev[df_rev['verified'] == False].index
df_rev.drop(indexNames , inplace=True)
# Dropping unused columns
df_rev_use = df_rev.drop(columns=['reviewTime', 'verified',
'style', 'reviewerName',
'unixReviewTime', 'image', 'vote', 'summary'])
df_rev_use.isna().sum()
# Dropping entries without review text
df_rev_use.dropna(inplace=True)
# Dropping any possible duplicate reviews
df_rev_use.drop_duplicates(inplace=True)
# Checking if data was really 5-core
df_rev_use['reviewerID'].value_counts()
# Most reviewed books
df_rev_use['asin'].value_counts()
df_rev_use.info() | <class 'pandas.core.frame.DataFrame'>
Int64Index: 1398682 entries, 2754 to 2222982
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 overall 1398682 non-null int64
1 reviewerID 1398682 non-null object
2 asin 1398682 non-null object
3 reviewText 1398682 non-null object
dtypes: int64(1), object(3)
memory usage: 53.4+ MB
| MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
CSV Files > For ease of use in further notebooks, the cleaned and compiled dataframes were saved to individual csv files within the data folder. These files can be saved locally and were not pushed to github due to size constraints. The meta data files were saved and pushed to github in order for heroku/streamlit to have access to them. | # Save cleaned review dataframe to csv for use in other notebook, this set includes reviewers with less than 5 reviews
df_rev_use.to_csv('Data/df_rev_all.csv')
# Subsetting review data to only include reviewers with 5 or more reviews
df_rev5 = df_rev_use[df_rev_use['reviewerID'].map(df_rev_use['reviewerID'].value_counts()) > 4]
df_rev5.info()
df_rev5['reviewerID'].value_counts()
# Save cleaned subset of review data
df_rev5.to_csv('Data/df_rev5.csv')
# Creating sets of books for each review set in order to match meta data
asin_set = set(df_rev['asin'].tolist())
asin_set5 = set(df_rev5['asin'].tolist())
print(len(asin_set))
print(len(asin_set5))
# Meta data for books from the larger review set
df_meta_all = df_meta.loc[df_meta['asin'].isin(asin_set)]
# Meta data for books from the smaller 5-core review set
df_meta5 = df_meta.loc[df_meta['asin'].isin(asin_set5)]
# Save dataframes as csv for use in other notebooks
df_meta_all.to_csv('Data/meta_all.csv')
df_meta5.to_csv('Data/meta5.csv') | _____no_output_____ | MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
> In order to conduct natural language processing and produce content-based recommendations from review text, I needed to aggregate the review text for each individual book. I used the unique product number, 'asin', to group by and then join the review text for each book into a new dataframe below. This dataframe will be used to produce a document-term matrix for every book (a rough sketch of that step follows the output below). | # Groupby using 'asin' and custom aggregate to join all review text
df_books_rev = df_rev_use.groupby(['asin'], as_index = False).agg({'reviewText': ' '.join})
df_books_rev.to_csv('Data/df_books_rev.csv')
df_books_rev.head()
df_books_rev.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 94211 entries, 0 to 94210
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 asin 94211 non-null object
1 reviewText 94211 non-null object
dtypes: object(2)
memory usage: 1.4+ MB
| MIT | DataPrepFinal.ipynb | danielburdeno/Kindle-eBook-Recommendations |
URL: http://bokeh.pydata.org/en/latest/docs/gallery/unemployment.htmlMost examples work across multiple plotting backends, this example is also available for:* [Matplotlib - US unemployment example](../matplotlib/us_unemployment.ipynb) | import pandas as pd
import holoviews as hv
from holoviews import opts
hv.extension('bokeh') | _____no_output_____ | BSD-3-Clause | examples/gallery/demos/bokeh/us_unemployment.ipynb | ppwadhwa/holoviews |
Defining data | from bokeh.sampledata.unemployment1948 import data
data = pd.melt(data.drop('Annual', 1), id_vars='Year', var_name='Month', value_name='Unemployment')
heatmap = hv.HeatMap(data, label="US Unemployment (1948 - 2013)") | _____no_output_____ | BSD-3-Clause | examples/gallery/demos/bokeh/us_unemployment.ipynb | ppwadhwa/holoviews |
Plot | colors = ["#75968f", "#a5bab7", "#c9d9d3", "#e2e2e2", "#dfccce", "#ddb7b1", "#cc7878", "#933b41", "#550b1d"]
heatmap.opts(
opts.HeatMap(width=900, height=400, xrotation=45, xaxis='top', labelled=[],
tools=['hover'], cmap=colors)) | _____no_output_____ | BSD-3-Clause | examples/gallery/demos/bokeh/us_unemployment.ipynb | ppwadhwa/holoviews |
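If you want to persist the rendered heatmap outside the notebook, one option (not part of the original gallery example; the filename is arbitrary) is HoloViews' save utility:

# Export the heatmap to a standalone HTML file using the Bokeh backend
hv.save(heatmap, 'us_unemployment_heatmap.html', backend='bokeh')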
TensorFlow 2 Complete Project Workflow in Amazon SageMaker Data Preprocessing -> Training -> Automatic Model Tuning -> Deployment 1. [Introduction](Introduction)2. [SageMaker Processing for dataset transformation](SageMakerProcessing)5. [SageMaker hosted training](SageMakerHostedTraining)6. [Automatic Model Tuning](AutomaticModelTuning)7. [SageMaker hosted endpoint](SageMakerHostedEndpoint)8. [Workflow Automation with SageMaker Pipelines](WorkflowAutomation) 1. [Pipeline Parameters](PipelineParameters) 2. [Processing Step](ProcessingStep) 3. [Training and Model Creation Steps](TrainingModelCreation) 4. [Batch Scoring Step](BatchScoringStep) 5. [Creating and executing the pipeline](CreatingExecutingPipeline)9. [ML Lineage Tracking](LineageOfPipelineArtifacts)10. [Extensions](Extensions) Introduction If you are using TensorFlow 2, you can use the Amazon SageMaker prebuilt TensorFlow 2 framework container with training scripts similar to those you would use outside SageMaker. This notebook presents such a workflow, including all key steps such as preprocessing data with SageMaker Processing, and model training and deployment with SageMaker hosted training and inference. Automatic Model Tuning in SageMaker is used to tune the model's hyperparameters. Working through these steps in a notebook is part of the prototyping process; however, a repeatable production workflow typically is run outside notebooks. To demonstrate automating the workflow, we'll use [Amazon SageMaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) for workflow orchestration. Purpose-built for machine learning (ML), SageMaker Pipelines helps you automate different steps of the ML workflow including data processing, model training, and batch prediction (scoring), and apply conditions such as approvals for model quality. It also includes a model registry and model lineage tracker. To enable you to run this notebook within a reasonable time (typically less than an hour), this notebook's use case is a straightforward regression task: predicting house prices based on the well-known Boston Housing dataset. This public dataset contains 13 features regarding housing stock of towns in the Boston area. Features include average number of rooms, accessibility to radial highways, adjacency to a major river, etc. To begin, we'll import some necessary packages and set up directories for training and test data. We'll also set up a SageMaker Session to perform various operations, and specify an Amazon S3 bucket to hold input data and output. The default bucket used here is created by SageMaker if it doesn't already exist, and named in accordance with the AWS account ID and AWS Region. | import boto3
import os
import sagemaker
import tensorflow as tf
sess = sagemaker.session.Session()
bucket = sess.default_bucket()
region = boto3.Session().region_name
data_dir = os.path.join(os.getcwd(), 'data')
os.makedirs(data_dir, exist_ok=True)
train_dir = os.path.join(os.getcwd(), 'data/train')
os.makedirs(train_dir, exist_ok=True)
test_dir = os.path.join(os.getcwd(), 'data/test')
os.makedirs(test_dir, exist_ok=True)
raw_dir = os.path.join(os.getcwd(), 'data/raw')
os.makedirs(raw_dir, exist_ok=True)
batch_dir = os.path.join(os.getcwd(), 'data/batch')
os.makedirs(batch_dir, exist_ok=True) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
SageMaker Processing for dataset transformation Next, we'll import the dataset and transform it with SageMaker Processing, which can be used to process terabytes of data in a SageMaker-managed cluster separate from the instance running your notebook server. In a typical SageMaker workflow, notebooks are only used for prototyping and can be run on relatively inexpensive and less powerful instances, while processing, training and model hosting tasks are run on separate, more powerful SageMaker-managed instances. SageMaker Processing includes off-the-shelf support for Scikit-learn, as well as a Bring Your Own Container option, so it can be used with many different data transformation technologies and tasks. An alternative to SageMaker Processing is [SageMaker Data Wrangler](https://aws.amazon.com/sagemaker/data-wrangler/), a visual data preparation tool integrated with the SageMaker Studio UI. To work with SageMaker Processing, first we'll load the Boston Housing dataset, save the raw feature data and upload it to Amazon S3 so it can be accessed by SageMaker Processing. We'll also save the labels for training and testing. | import numpy as np
from tensorflow.python.keras.datasets import boston_housing
from sklearn.preprocessing import StandardScaler
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
np.save(os.path.join(raw_dir, 'x_train.npy'), x_train)
np.save(os.path.join(raw_dir, 'x_test.npy'), x_test)
np.save(os.path.join(raw_dir, 'y_train.npy'), y_train)
np.save(os.path.join(raw_dir, 'y_test.npy'), y_test)
s3_prefix = 'tf-2-workflow'
rawdata_s3_prefix = '{}/data/raw'.format(s3_prefix)
raw_s3 = sess.upload_data(path='./data/raw/', key_prefix=rawdata_s3_prefix)
print(raw_s3) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
Next, simply supply an ordinary Python data preprocessing script as shown below. For this example, we're using a SageMaker prebuilt Scikit-learn framework container, which includes many common functions for processing data. There are few limitations on what kinds of code and operations you can run, and only a minimal API contract: input and output data must be placed in specified directories. If this is done, SageMaker Processing automatically loads the input data from S3 and uploads transformed data back to S3 when the job is complete. | %%writefile preprocessing.py
import glob
import numpy as np
import os
from sklearn.preprocessing import StandardScaler
if __name__=='__main__':
input_files = glob.glob('{}/*.npy'.format('/opt/ml/processing/input'))
print('\nINPUT FILE LIST: \n{}\n'.format(input_files))
scaler = StandardScaler()
for file in input_files:
raw = np.load(file)
# only transform feature columns
if 'y_' not in file:
transformed = scaler.fit_transform(raw)
if 'train' in file:
if 'y_' in file:
output_path = os.path.join('/opt/ml/processing/train', 'y_train.npy')
np.save(output_path, raw)
print('SAVED LABEL TRAINING DATA FILE\n')
else:
output_path = os.path.join('/opt/ml/processing/train', 'x_train.npy')
np.save(output_path, transformed)
print('SAVED TRANSFORMED TRAINING DATA FILE\n')
else:
if 'y_' in file:
output_path = os.path.join('/opt/ml/processing/test', 'y_test.npy')
np.save(output_path, raw)
print('SAVED LABEL TEST DATA FILE\n')
else:
output_path = os.path.join('/opt/ml/processing/test', 'x_test.npy')
np.save(output_path, transformed)
print('SAVED TRANSFORMED TEST DATA FILE\n') | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
Before starting the SageMaker Processing job, we instantiate a `SKLearnProcessor` object. This object allows you to specify the instance type to use in the job, as well as how many instances. Although the Boston Housing dataset is quite small, we'll use two instances to showcase how easy it is to spin up a cluster for SageMaker Processing. | from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
sklearn_processor1 = SKLearnProcessor(framework_version='0.23-1',
role=get_execution_role(),
instance_type='ml.m5.xlarge',
instance_count=2) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
We're now ready to run the Processing job. To enable distributing the data files equally among the instances, we specify the `ShardedByS3Key` distribution type in the `ProcessingInput` object. This ensures that if we have `n` instances, each instance will receive `1/n` files from the specified S3 bucket. It may take around 3 minutes for the following code cell to run, mainly to set up the cluster. At the end of the job, the cluster automatically will be torn down by SageMaker. | from sagemaker.processing import ProcessingInput, ProcessingOutput
from time import gmtime, strftime
processing_job_name = "tf-2-workflow-{}".format(strftime("%d-%H-%M-%S", gmtime()))
output_destination = 's3://{}/{}/data'.format(bucket, s3_prefix)
sklearn_processor1.run(code='preprocessing.py',
job_name=processing_job_name,
inputs=[ProcessingInput(
source=raw_s3,
destination='/opt/ml/processing/input',
s3_data_distribution_type='ShardedByS3Key')],
outputs=[ProcessingOutput(output_name='train',
destination='{}/train'.format(output_destination),
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test',
destination='{}/test'.format(output_destination),
source='/opt/ml/processing/test')])
preprocessing_job_description = sklearn_processor1.jobs[-1].describe() | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
In the log output of the SageMaker Processing job above, you should be able to see logs in two different colors for the two different instances, and that each instance received different files. Without the `ShardedByS3Key` distribution type, each instance would have received a copy of **all** files. By spreading the data equally among `n` instances, you should receive a speedup by approximately a factor of `n` for most stateless data transformations. After saving the job results locally, we'll move on to training and inference code. | x_train_in_s3 = '{}/train/x_train.npy'.format(output_destination)
y_train_in_s3 = '{}/train/y_train.npy'.format(output_destination)
x_test_in_s3 = '{}/test/x_test.npy'.format(output_destination)
y_test_in_s3 = '{}/test/y_test.npy'.format(output_destination)
!aws s3 cp {x_train_in_s3} ./data/train/x_train.npy
!aws s3 cp {y_train_in_s3} ./data/train/y_train.npy
!aws s3 cp {x_test_in_s3} ./data/test/x_test.npy
!aws s3 cp {y_test_in_s3} ./data/test/y_test.npy | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
SageMaker hosted training Now that we've prepared a dataset, we can move on to SageMaker's model training functionality. With SageMaker hosted training the actual training itself occurs not on the notebook instance, but on a separate cluster of machines managed by SageMaker. Before starting hosted training, the data must be in S3, or an EFS or FSx for Lustre file system. We'll upload to S3 now, and confirm the upload was successful. | s3_prefix = 'tf-2-workflow'
traindata_s3_prefix = '{}/data/train'.format(s3_prefix)
testdata_s3_prefix = '{}/data/test'.format(s3_prefix)
train_s3 = sess.upload_data(path='./data/train/', key_prefix=traindata_s3_prefix)
test_s3 = sess.upload_data(path='./data/test/', key_prefix=testdata_s3_prefix)
inputs = {'train':train_s3, 'test': test_s3}
print(inputs) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
We're now ready to set up an Estimator object for hosted training. We simply call `fit` to start the actual hosted training. | from sagemaker.tensorflow import TensorFlow
train_instance_type = 'ml.c5.xlarge'
hyperparameters = {'epochs': 30, 'batch_size': 128, 'learning_rate': 0.01}
hosted_estimator = TensorFlow(
source_dir='tf-2-workflow-smpipelines',
entry_point='train.py',
instance_type=train_instance_type,
instance_count=1,
hyperparameters=hyperparameters,
role=sagemaker.get_execution_role(),
base_job_name='tf-2-workflow',
framework_version='2.3.1',
py_version='py37') | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
After starting the hosted training job with the `fit` method call below, you should observe the validation loss converge with each epoch. Can we do better? We'll look into a way to do so in the **Automatic Model Tuning** section below. In the meantime, the hosted training job should take about 3 minutes to complete. | hosted_estimator.fit(inputs) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop
The training job produces a model saved in S3 that we can retrieve. This is an example of the modularity of SageMaker: having trained the model in SageMaker, you can now take the model out of SageMaker and run it anywhere else. Alternatively, you can deploy the model into a production-ready environment using SageMaker's hosted endpoints functionality, as shown in the **SageMaker hosted endpoint** section below.Retrieving the model from S3 is very easy: the hosted training estimator you created above stores a reference to the model's location in S3. You simply copy the model from S3 using the estimator's `model_data` property and unzip it to inspect the contents. | !aws s3 cp {hosted_estimator.model_data} ./model/model.tar.gz | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
The unzipped archive should include the assets required by TensorFlow Serving to load the model and serve it, including a .pb file: | !tar -xvzf ./model/model.tar.gz -C ./model | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
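As a quick illustration of running the extracted model outside SageMaker, the SavedModel can be loaded with plain TensorFlow. This is a sketch only: the numbered subdirectory name (assumed here to be ./model/1) depends on how the training script exported the model, so adjust the path to match the extracted contents.

import tensorflow as tf

# Load the extracted Keras SavedModel locally and run a prediction (directory name is an assumption)
local_model = tf.keras.models.load_model('./model/1')
print(local_model.predict(x_test[:1]))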
Automatic Model Tuning So far we have simply run one Hosted Training job without any real attempt to tune hyperparameters to produce a better model. Selecting the right hyperparameter values to train your model can be difficult, and typically is very time consuming if done manually. The right combination of hyperparameters is dependent on your data and algorithm; some algorithms have many different hyperparameters that can be tweaked; some are very sensitive to the hyperparameter values selected; and most have a non-linear relationship between model fit and hyperparameter values. SageMaker Automatic Model Tuning helps automate the hyperparameter tuning process: it runs multiple training jobs with different hyperparameter combinations to find the set with the best model performance.We begin by specifying the hyperparameters we wish to tune, and the range of values over which to tune each one. We also must specify an objective metric to be optimized: in this use case, we'd like to minimize the validation loss. | from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {
'learning_rate': ContinuousParameter(0.001, 0.2, scaling_type="Logarithmic"),
'epochs': IntegerParameter(10, 50),
'batch_size': IntegerParameter(64, 256),
}
metric_definitions = [{'Name': 'loss',
'Regex': ' loss: ([0-9\\.]+)'},
{'Name': 'val_loss',
'Regex': ' val_loss: ([0-9\\.]+)'}]
objective_metric_name = 'val_loss'
objective_type = 'Minimize' | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
Next we specify a HyperparameterTuner object that takes the above definitions as parameters. Each tuning job must be given a budget: a maximum number of training jobs. A tuning job will complete after that many training jobs have been executed. We also can specify how much parallelism to employ, in this case five jobs, meaning that the tuning job will complete after three series of five jobs in parallel have completed. For the default Bayesian Optimization tuning strategy used here, the tuning search is informed by the results of previous groups of training jobs, so we don't run all of the jobs in parallel, but rather divide the jobs into groups of parallel jobs. There is a trade-off: using more parallel jobs will finish tuning sooner, but likely will sacrifice tuning search accuracy. Now we can launch a hyperparameter tuning job by calling the `fit` method of the HyperparameterTuner object. The tuning job may take around 10 minutes to finish. While you're waiting, the status of the tuning job, including metadata and results for invidual training jobs within the tuning job, can be checked in the SageMaker console in the **Hyperparameter tuning jobs** panel. | tuner = HyperparameterTuner(hosted_estimator,
objective_metric_name,
hyperparameter_ranges,
metric_definitions,
max_jobs=15,
max_parallel_jobs=5,
objective_type=objective_type)
tuning_job_name = "tf-2-workflow-{}".format(strftime("%d-%H-%M-%S", gmtime()))
tuner.fit(inputs, job_name=tuning_job_name)
tuner.wait() | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
After the tuning job is finished, we can use the `HyperparameterTuningJobAnalytics` object from the SageMaker Python SDK to list the top 5 tuning jobs with the best performance. Although the results vary from tuning job to tuning job, the best validation loss from the tuning job (under the FinalObjectiveValue column) likely will be substantially lower than the validation loss from the hosted training job above, where we did not perform any tuning other than manually increasing the number of epochs once. | tuner_metrics = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name)
tuner_metrics.dataframe().sort_values(['FinalObjectiveValue'], ascending=True).head(5) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
The total training time and training jobs status can be checked with the following lines of code. Because automatic early stopping is by default off, all the training jobs should be completed normally. For an example of a more in-depth analysis of a tuning job, see the SageMaker official sample [HPO_Analyze_TuningJob_Results.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/analyze_results/HPO_Analyze_TuningJob_Results.ipynb) notebook. | total_time = tuner_metrics.dataframe()['TrainingElapsedTimeSeconds'].sum() / 3600
print("The total training time is {:.2f} hours".format(total_time))
tuner_metrics.dataframe()['TrainingJobStatus'].value_counts() | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
SageMaker hosted endpoint Assuming the best model from the tuning job is better than the model produced by the individual hosted training job above, we could now easily deploy that model to production. A convenient option is to use a SageMaker hosted endpoint, which serves real time predictions from the trained model (For asynchronous, offline predictions on large datasets, you can use either SageMaker Processing or SageMaker Batch Transform.). The endpoint will retrieve the TensorFlow SavedModel created during training and deploy it within a SageMaker TensorFlow Serving container. This all can be accomplished with one line of code. More specifically, by calling the `deploy` method of the HyperparameterTuner object we instantiated above, we can directly deploy the best model from the tuning job to a SageMaker hosted endpoint. | tuning_predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge') | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
We can compare the predictions generated by this endpoint with the actual target values: | results = tuning_predictor.predict(x_test[:10])['predictions']
flat_list = [float('%.1f'%(item)) for sublist in results for item in sublist]
print('predictions: \t{}'.format(np.array(flat_list)))
print('target values: \t{}'.format(y_test[:10].round(decimals=1))) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
To avoid billing charges from stray resources, you can delete the prediction endpoint to release its associated instance(s). | sess.delete_endpoint(tuning_predictor.endpoint_name) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
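The note above mentions SageMaker Batch Transform as an offline alternative to a real-time endpoint. A minimal, hypothetical sketch (not part of the original workflow) using the hosted estimator might look like the following, assuming the test features have been written as CSV to an S3 location of your choosing:

# Hypothetical batch scoring sketch -- batch_input_s3 must point to CSV feature data you have uploaded
batch_input_s3 = 's3://{}/{}/data/batch/x_test.csv'.format(bucket, s3_prefix)
transformer = hosted_estimator.transformer(instance_count=1, instance_type='ml.m5.xlarge')
transformer.transform(batch_input_s3, content_type='text/csv', split_type='Line')
transformer.wait()
print(transformer.output_path)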
Workflow Automation with SageMaker Pipelines In the previous parts of this notebook, we prototyped various steps of a TensorFlow project within the notebook itself, with some steps being run on external SageMaker resources (hosted training, model tuning, hosted endpoints). Notebooks are great for prototyping, but generally are not used in production-ready machine learning pipelines. A very simple pipeline in SageMaker includes processing the dataset to get it ready for training, performing the actual training, and then using the model to perform some form of inference such as batch predition (scoring). We'll use SageMaker Pipelines to automate these steps, keeping the pipeline simple for now: it easily can be extended into a far more complex pipeline. Pipeline parameters Before we begin to create the pipeline itself, we should think about how to parameterize it. For example, we may use different instance types for different purposes, such as CPU-based types for data processing and GPU-based or more powerful types for model training. These are all "knobs" of the pipeline that we can parameterize. Parameterizing enables custom pipeline executions and schedules without having to modify the pipeline definition. | from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
# raw input data
input_data = ParameterString(name="InputData", default_value=raw_s3)
# processing step parameters
processing_instance_type = ParameterString(name="ProcessingInstanceType", default_value="ml.m5.xlarge")
processing_instance_count = ParameterInteger(name="ProcessingInstanceCount", default_value=2)
# training step parameters
training_instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.c5.2xlarge")
training_instance_count = ParameterInteger(name="TrainingInstanceCount", default_value=1)
# batch inference step parameters
batch_instance_type = ParameterString(name="BatchInstanceType", default_value="ml.c5.xlarge")
batch_instance_count = ParameterInteger(name="BatchInstanceCount", default_value=1) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
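As a sketch of how these defaults can be overridden without modifying the pipeline definition, a specific execution can pass different values at start time. This uses the `pipeline` object created at the end of this section, and the parameter values shown are only illustrative:

```python
# Hypothetical illustration: override selected parameter defaults for a single pipeline execution.
# Requires the `pipeline` object defined at the end of this section.
execution = pipeline.start(
    parameters={
        "ProcessingInstanceCount": 4,             # scale out preprocessing for a larger dataset
        "TrainingInstanceType": "ml.c5.4xlarge",  # example value only; any supported instance type works
    }
)
```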
Processing Step

The first step in the pipeline will preprocess the data to prepare it for training. We create a `SKLearnProcessor` object similar to the one above, but now parameterized so we can separately track and change the job configuration as needed, for example increasing the instance size and count to accommodate a growing dataset. | from sagemaker.sklearn.processing import SKLearnProcessor
role = sagemaker.get_execution_role()
framework_version = "0.23-1"
sklearn_processor = SKLearnProcessor(
framework_version=framework_version,
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name="tf-2-workflow-process",
sagemaker_session=sess,
role=role,
)
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import ProcessingStep
step_process = ProcessingStep(
name="TF2Process",
processor=sklearn_processor,
inputs=[
ProcessingInput(source=input_data, destination="/opt/ml/processing/input", s3_data_distribution_type='ShardedByS3Key'),
],
outputs=[
ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
],
code="./preprocessing.py",
) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
Training and Model Creation Steps

The following code sets up a pipeline step for a training job. We start by specifying which SageMaker prebuilt TensorFlow 2 training container to use for the job. | from sagemaker.tensorflow import TensorFlow
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
from sagemaker.workflow.step_collections import RegisterModel
tensorflow_version = '2.3.1'
python_version = 'py37'
image_uri_train = sagemaker.image_uris.retrieve(
framework="tensorflow",
region=region,
version=tensorflow_version,
py_version=python_version,
instance_type=training_instance_type,
image_scope="training"
) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
Next, we specify an `Estimator` object, and define a `TrainingStep` to insert the training job in the pipeline with inputs from the previous SageMaker Processing step. | import time
model_path = f"s3://{bucket}/TF2WorkflowTrain"
training_parameters = {'epochs': 44, 'batch_size': 128, 'learning_rate': 0.0125, 'for_pipeline': 'true'}
estimator = TensorFlow(
image_uri=image_uri_train,
source_dir='tf-2-workflow-smpipelines',
entry_point='train.py',
instance_type=training_instance_type,
instance_count=training_instance_count,
role=role,
base_job_name="tf-2-workflow-train",
output_path=model_path,
hyperparameters=training_parameters
)
step_train = TrainingStep(
name="TF2WorkflowTrain",
estimator=estimator,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"train"
].S3Output.S3Uri
),
"test": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri
)
},
) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
As another step, we create a SageMaker `Model` object to wrap the model artifact and associate it with a separate SageMaker prebuilt TensorFlow Serving inference container, so the model can be deployed later if needed. | from sagemaker.model import Model
from sagemaker.inputs import CreateModelInput
from sagemaker.workflow.steps import CreateModelStep
image_uri_inference = sagemaker.image_uris.retrieve(
framework="tensorflow",
region=region,
version=tensorflow_version,
py_version=python_version,
instance_type=batch_instance_type,
image_scope="inference"
)
model = Model(
image_uri=image_uri_inference,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
sagemaker_session=sess,
role=role,
)
inputs_model = CreateModelInput(
instance_type=batch_instance_type
)
step_create_model = CreateModelStep(
name="TF2WorkflowCreateModel",
model=model,
inputs=inputs_model,
) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
Batch Scoring Step

The final step in this pipeline is offline batch scoring (inference/prediction). The inputs to this step will be the model we trained earlier and the test data. A simple Python script is all we need to do the actual batch inference. | %%writefile batch-score.py
import os
import subprocess
import sys
import numpy as np
import pathlib
import tarfile
def install(package):
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
if __name__ == "__main__":
install('tensorflow==2.3.1')
model_path = f"/opt/ml/processing/model/model.tar.gz"
with tarfile.open(model_path, 'r:gz') as tar:
tar.extractall('./model')
import tensorflow as tf
model = tf.keras.models.load_model('./model/1')
test_path = "/opt/ml/processing/test/"
x_test = np.load(os.path.join(test_path, 'x_test.npy'))
y_test = np.load(os.path.join(test_path, 'y_test.npy'))
scores = model.evaluate(x_test, y_test, verbose=2)
print("\nTest MSE :", scores)
output_dir = "/opt/ml/processing/batch"
pathlib.Path(output_dir).mkdir(parents=True, exist_ok=True)
evaluation_path = f"{output_dir}/score-report.txt"
with open(evaluation_path, 'w') as writer:
writer.write(f"Test MSE : {scores}") | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
SageMaker offers several features we could use for batch scoring, including SageMaker Processing and SageMaker Batch Transform; we'll use SageMaker Processing here. | batch_scorer = SKLearnProcessor(
framework_version=framework_version,
instance_type=batch_instance_type,
instance_count=batch_instance_count,
base_job_name="tf-2-workflow-batch",
sagemaker_session=sess,
role=role )
step_batch = ProcessingStep(
name="TF2WorkflowBatchScoring",
processor=batch_scorer,
inputs=[
ProcessingInput(
source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
destination="/opt/ml/processing/model"
),
ProcessingInput(
source=step_process.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri,
destination="/opt/ml/processing/test"
)
],
outputs=[
ProcessingOutput(output_name="batch", source="/opt/ml/processing/batch"),
],
code="./batch-score.py" ) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
Creating and executing the pipeline

With all of the steps now defined, we can create the pipeline itself as a `Pipeline` object comprising a series of those steps. Parallel and conditional steps are also possible. | from sagemaker.workflow.pipeline import Pipeline
pipeline_name = f"TF2Workflow"
pipeline = Pipeline(
name=pipeline_name,
parameters=[input_data,
processing_instance_type,
processing_instance_count,
training_instance_type,
training_instance_count,
batch_instance_type,
batch_instance_count],
steps=[step_process,
step_train,
step_create_model,
step_batch
],
sagemaker_session=sess
) | _____no_output_____ | Apache-2.0 | notebooks/tf-2-workflow-smpipelines.ipynb | yegortokmakov/amazon-sagemaker-workshop |
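To register the pipeline definition and run it, a minimal sketch of the remaining steps (assuming the execution role used above also has permission to create and run pipelines) is to upsert the definition and then start an execution:

```python
# Minimal sketch: create or update the pipeline definition, then run it and inspect the execution.
pipeline.upsert(role_arn=role)

execution = pipeline.start()
execution.describe()     # execution metadata, including its current status
execution.wait()         # block until the pipeline run finishes
execution.list_steps()   # per-step status, useful for debugging the run
```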