markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Dataset saving | # Create a list of all input images
if not len(ip_files) or not len(upsample_files):
for lidar_file in processed_lidar_files:
if 'front' in lidar_file:
out_ip = get_image_files(lidar_file, 'ip')
out_upsample = get_image_files(lidar_file, 'upsample')
ip_files.append(list(out_ip)[:-1])
upsample_files.append(list(out_upsample)[:-1])
save_list_to_pfile(ip_files, root_path + '../dataset/ip_inputs.pkl')
save_list_to_pfile(upsample_files, root_path + '../dataset/upsample_inputs.pkl') | _____no_output_____ | MIT | data_processing/lidar_data_processing.ipynb | abhitoronto/KITTI_ROAD_SEGMENTATION |
Ground Truth Conditioning | def create_binary_gt(lidar_files, color=(255,0,255)):
gt_files = []
for lidar_file in lidar_files:
if 'front' in lidar_file:
assert Path(lidar_file).is_file(), f'{lidar_file} is not a file'
# Get Label file
label_file = extract_semantic_file_name_from_any_file_name(lidar_file, root_path)
gt_file = extract_sensor_file_name(lidar_file, root_path, 'image-gt', 'png')
# Skip if file exists
if Path(gt_file).is_file():
gt_files.append(gt_file)
continue
# Mask for color
label_img = cv2.imread(str(label_file), 1)
B = label_img[:,:,0] == color[0]
G = label_img[:,:,1] == color[1]
R = label_img[:,:,2] == color[2]
road_area = B & G & R
gt_img = road_area.astype(dtype=np.uint8) * 255
# Create GT folder and save file
if not Path(gt_file).is_file():
gt_cam_dir = get_prev_directory(gt_file)
gt_dir = get_prev_directory(gt_cam_dir)
create_unique_dir(gt_dir)
create_unique_dir(gt_cam_dir)
cv2.imwrite(gt_file, gt_img)
# Append file to list
gt_files.append(gt_file)
return gt_files
gt_files = create_binary_gt(processed_lidar_files)
save_list_to_pfile(gt_files, root_path + '../dataset/outputs.pkl')
len(gt_files)
means_img = np.array([0.0, 0.0, 0.0])
means_lidar = np.array([0.0, 0.0, 0.0])
meanss_img = np.array([0.0, 0.0, 0.0])
meanss_lidar = np.array([0.0, 0.0, 0.0])
idx = 0
for ip in ip_files:
if Path(ip[0]).is_file():
idx = idx+1
img = cv2.imread(str(ip[0]), cv2.IMREAD_COLOR) # read the camera image as a 3-channel (BGR) array
img_x = cv2.imread(str(ip[1]), cv2.IMREAD_GRAYSCALE) # read each lidar map as a single channel
img_y = cv2.imread(str(ip[2]), cv2.IMREAD_GRAYSCALE)
img_z = cv2.imread(str(ip[3]), cv2.IMREAD_GRAYSCALE)
img_lidar = cv2.merge((img_x, img_y, img_z)) # stack x/y/z maps into one 3-channel image
means_img += np.array([img[:,:,0].mean(), img[:,:,1].mean(), img[:,:,2].mean()]) # per-channel means
means_lidar += np.array([img_lidar[:,:,0].mean(), img_lidar[:,:,1].mean(), img_lidar[:,:,2].mean()])
img_s = img.astype(np.uint32)**2
img_lidar_s = img_lidar.astype(np.uint32)**2
meanss_img += np.array([img_s[:,:,0].mean(), img_s[:,:,1].mean(), img_s[:,:,2].mean()]) # per-channel means of squares
meanss_lidar += np.array([img_lidar_s[:,:,0].mean(), img_lidar_s[:,:,1].mean(), img_lidar_s[:,:,2].mean()])
print(f'Done with img{idx}')
std_img = np.sqrt(meanss_img/idx - (means_img/idx)**2)
std_lidar = np.sqrt(meanss_lidar/idx - (means_lidar/idx)**2)
mean_img = means_img/idx
mean_lidar = means_lidar/idx
mean_img/255, std_img/255
mean_lidar/255, std_lidar/255 | _____no_output_____ | MIT | data_processing/lidar_data_processing.ipynb | abhitoronto/KITTI_ROAD_SEGMENTATION |
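For reference, the loop above accumulates per-channel means and per-channel means of the squared pixel values (`meanss_img`, `meanss_lidar`) so that the standard deviation can be recovered at the end with the usual identity (a standard formula, stated here for clarity rather than taken from the repository):

$$\sigma = \sqrt{\mathbb{E}[X^2] - \left(\mathbb{E}[X]\right)^2}$$

which is exactly what the `std_img` and `std_lidar` lines compute after dividing the accumulators by the number of images `idx`.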
Beyond SIR modeling [](https://colab.research.google.com/github/collectif-codata/pyepidemics/blob/master/docs/tutorials/beyond-sir.ipynb) Note: In this tutorial we will see how we can build differential-equation models, going from simple SIR modeling to adding more states and modeling public policies such as a lockdown. Developer import | %matplotlib inline
%load_ext autoreload
%autoreload 2
# Developer import
import sys
sys.path.append("../../") | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
On Google Colab: Uncomment the following line to install the library locally | # !pip install pyepidemics | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Verify the library is correctly installed | import pyepidemics
from pyepidemics.models import SIR,SEIR,SEIDR,SEIHDR | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Introduction Tip: This tutorial is largely inspired by this great article, [Infectious Disease Modelling: Beyond the Basic SIR Model](https://towardsdatascience.com/infectious-disease-modelling-beyond-the-basic-sir-model-216369c584c4) by Henri Froese, which also inspired a large part of this library's code. Simple models by complexity SIR model Differential-equation models represent transitions between population states. SIR is one of the simplest models, used for many epidemics, in which you assume three population states:
- ``S`` - Susceptible state: all people that can still be infected
- ``I`` - Infected state: contaminated people that will recover
- ``R`` - Removed state: people that are removed from the model, i.e. that cannot be infected again, either because they recovered and are immune or, unfortunately, because they are deceased

Between each state you consider three pieces of information:
- The **population** considered
- The temporal **rate** (i.e. 1/duration) representing the number of persons transitioning per day
- The **probability** to go to the next state

You can also notice the **epidemiological parameters** such as $\beta$ or $\gamma$. | N = 1000
beta = 1
gamma = 1/4
# Define model
sir = SIR(N,beta,gamma)
# Solve the equations
states = sir.solve(init_state = 1)
states.show(plotly = False) | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
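For context, the SIR dynamics integrated above correspond to the standard system of ordinary differential equations (a textbook formulation shown for reference; the exact equations coded inside pyepidemics are not displayed in this notebook):

$$\frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad \frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad \frac{dR}{dt} = \gamma I$$

where $\beta$ is the transmission rate, $\gamma$ the recovery rate (here $1/4$, i.e. a 4-day infectious period) and $N = S + I + R$ the total population.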
You can visualize the transitions between compartments with the command ``.network.show()`` (not super useful for SIR models, but handy for checking more complex models) | sir.network.show() | [INFO] Displaying only the largest graph component, graphs may be repeated for each category
| MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
SEIR model  | # Population
N = 1e6
beta = 1
delta = 1/3
gamma = 1/4
# Define the model
seir = SEIR(N,beta,delta,gamma)
# Solve the equations
states = seir.solve(init_state = 1)
states.show(plotly = False)
seir.network.show() | [INFO] Displaying only the largest graph component, graphs may be repeated for each category
| MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
SEIDR model  | # Population
N = 1e6
gamma = 1/4
beta = 3/4
delta = 1/3
alpha = 0.2 # probability to die
rho = 1/9 # 9 ndays before death
# Define the model
seidr = SEIDR(N,beta,delta,gamma,rho,alpha)
# Solve the equations
states = seidr.solve(init_state = 1)
states.show(plotly = False)
seidr.network.show() | [INFO] Displaying only the largest graph component, graphs may be repeated for each category
| MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
SEIHDR model | # Population
N = 1e6
beta = 1/4 * 5 # R0 = 2.5
delta = 1/5
gamma = 1/4
theta = 1/5 # ndays before complication
kappa = 1/10 # ndays before symptoms disappear
phi = 0.5 # probability of complications
alpha = 0.2 # probability to die
rho = 1/9 # 9 ndays before death
# Define the model
seihdr = SEIHDR(N,beta,delta,gamma,rho,alpha,theta,phi,kappa)
# Solve the equations
states = seihdr.solve(init_state = 1,n_days = 100)
states.show(plotly = False)
seihdr.network.show() | [INFO] Displaying only the largest graph component, graphs may be repeated for each category
| MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Towards COVID19 modeling To model COVID19 epidemics, we can use a more complex compartmental model to account for different levels of symptoms and patients going to the ICU. You can read more about it in this [tutorial](https://collectif-codata.github.io/pyepidemics/tutorials/covid/) Modeling policies Simulating parameter changes over time To model any policy with macro-epidemiological models, we can play with the parameters or the equations. One simple way to model the implementation of a public policy is to make one parameter vary over time once the policy is implemented. For example, to model a lockdown (or any equivalent policy such as social distancing, masks, ...) we can make the parameter ``beta`` vary. Piecewise evolution One option is to use a piecewise function, which can be as simple as the one shown here | date_lockdown = 53
def beta(t):
if t < date_lockdown:
return 3.3/4
else:
return 1/4
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0,100)
y = np.vectorize(beta)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
For convenience we can use the helper function defined in pyepidemics | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53),
]
fn = make_dynamic_fn(policies,sigmoid = False)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
The result is the same, but we can use this function for more complex policies | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53),
(2/4,80),
]
fn = make_dynamic_fn(policies,sigmoid = False)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Gradual transitions with sigmoid Behaviors don't change in a single day; to model this phenomenon we may prefer gradual transitions from one value to the next using sigmoid functions. We can use the previous function for that: | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53),
(2/4,80),
]
fn = make_dynamic_fn(policies,sigmoid = True)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
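To make the sigmoid transition concrete, here is a minimal sketch of one way a smooth blend between two parameter values around a switch date could be implemented. The function name `sigmoid_transition` and the width scaling are illustrative assumptions; the actual internals of `make_dynamic_fn` may differ.

import numpy as np

def sigmoid_transition(t, value_before, value_after, t_switch, width=5):
    # Logistic blend: close to value_before well before t_switch,
    # close to value_after well after it, changing over roughly `width` days
    weight = 1.0 / (1.0 + np.exp(-(t - t_switch) / (width / 4)))
    return value_before + (value_after - value_before) * weight

# Roughly reproduces the single-transition curve plotted above
beta_values = [sigmoid_transition(t, 3.3 / 4, 1 / 4, t_switch=53) for t in range(100)]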
We can even specify the transition duration, as follows | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53),
(2/4,80),
]
fn = make_dynamic_fn(policies,sigmoid = True,transition = 8)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Or even for each transition | from pyepidemics.policies.utils import make_dynamic_fn
policies = [
3.3/4,
(1/4,53,15),
(2/4,80,5),
]
fn = make_dynamic_fn(policies,sigmoid = True)
# Visualize policies
x = np.linspace(0,100)
y = np.vectorize(fn)(x)
plt.figure(figsize = (15,4))
plt.plot(x,y); | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Lockdown Instead of passing a constant beta to the previous SEIHDR model, we can pass any function of time | lockdown_date = 53
policies = [
3.3/4,
(1/4,lockdown_date),
]
fn = make_dynamic_fn(policies,sigmoid = True)
beta = lambda y,t : fn(t)
# Population
N = 1e6
delta = 1/5
gamma = 1/4
theta = 1/5 # ndays before complication
kappa = 1/10 # ndays before symptoms disappear
phi = 0.5 # probability of complications
alpha = 0.2 # probability to die
rho = 1/9 # 9 ndays before death
# Define the model
seihdr = SEIHDR(N,beta,delta,gamma,rho,alpha,theta,phi,kappa)
# Solve the equations
states = seihdr.solve(init_state = 1,n_days = 100)
# Visualize the epidemic curves
states.show(plotly = False,show = False)
plt.axvline(lockdown_date,c = "black")
plt.show()
for Rlockdown in [0.1,0.5,1,2,3.3]:
lockdown_date = 53
policies = [
3.3/4,
(Rlockdown/4,lockdown_date),
]
fn = make_dynamic_fn(policies,sigmoid = True)
beta = lambda y,t : fn(t)
# Define the model
seihdr = SEIHDR(N,beta,delta,gamma,rho,alpha,theta,phi,kappa)
states = seihdr.solve(init_state = 1,n_days = 100)
# Visualize the epidemic curves
states.show(plotly = False,show = False)
plt.axvline(lockdown_date,c = "black")
plt.title(f"Lockdown with R={Rlockdown}")
plt.show() | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
Lockdown exit Now that you've seen how to change a parameter over time, it's easy to simulate a lockdown exit by adding a post-lockdown value of beta to the policies. | for R_post_lockdown in [0.1,0.5,1,2,3.3]:
lockdown_date = 53
duration_lockdown = 60
policies = [
3.3/4,
(0.6/4,lockdown_date),
(R_post_lockdown/4,lockdown_date+duration_lockdown),
]
fn = make_dynamic_fn(policies,sigmoid = True)
beta = lambda y,t : fn(t)
# Define the model
seihdr = SEIHDR(N,beta,delta,gamma,rho,alpha,theta,phi,kappa)
states = seihdr.solve(init_state = 1,n_days = 200)
# Visualize the epidemic curves
states.show(plotly = False,show = False)
plt.axvline(lockdown_date,c = "black")
plt.axvline(lockdown_date+duration_lockdown,c = "black")
plt.title(f"Lockdown of {duration_lockdown} days with R_post_lockdown={R_post_lockdown}")
plt.show()
for duration_lockdown in [20,40,60,90]:
lockdown_date = 53
R_post_lockdown = 2
policies = [
3.3/4,
(0.6/4,lockdown_date),
(R_post_lockdown/4,lockdown_date+duration_lockdown),
]
fn = make_dynamic_fn(policies,sigmoid = True)
beta = lambda y,t : fn(t)
# Define the model
seihdr = SEIHDR(N,beta,delta,gamma,rho,alpha,theta,phi,kappa)
states = seihdr.solve(init_state = 1,n_days = 200)
# Visualize the epidemic curves
states.show(plotly = False,show = False)
plt.axvline(lockdown_date,c = "black")
plt.axvline(lockdown_date+duration_lockdown,c = "black")
plt.title(f"Lockdown of {duration_lockdown} days with R_post_lockdown={R_post_lockdown}")
plt.show() | _____no_output_____ | MIT | docs/tutorials/beyond-sir.ipynb | collectif-codata/pyepidemics |
AMUSE: Community codes | import numpy
numpy.random.seed(11)
from amuse.lab import *
from amuse.support.console import set_printing_strategy
set_printing_strategy(
"custom",
preferred_units=[units.MSun, units.parsec, units.Myr, units.kms],
precision=6, prefix="", separator=" [", suffix="]",
)
converter = nbody_system.nbody_to_si(1 | units.parsec, 1000 | units.MSun) | _____no_output_____ | MIT | Amuse community codes.ipynb | rieder/exeter-amuse-tutorial |
AMUSE contains many community codes, which can be found in amuse.community. These are often codes that have been in use as standalone codes for a long time (e.g. Gadget2), but some are unique to AMUSE (e.g. ph4, a 4th-order parallel Hermite N-body integrator with GPU support). Each community code must be instantiated to start it, after which parameters can be set and particles added. The code can then be instructed to evolve the particles to a specific time. Once it reaches this time, the code can be called again, or it can be stopped, removing it from memory and stopping the running process(es). | test_sphere = new_plummer_model(1000, converter)
test_sphere.mass = new_salpeter_mass_distribution(1000, mass_min=0.3 | units.MSun)
def new_gravity(particles):
gravity = ph4(converter, number_of_workers=1)
gravity.parameters.epsilon_squared = (0.01 | units.parsec)**2
gravity.particles.add_particles(particles)
gravity_to_model = gravity.particles.new_channel_to(particles)
return gravity, gravity_to_model
gravity, gravity_to_model = new_gravity(test_sphere)
print(test_sphere.center_of_mass())
print(gravity.particles.center_of_mass())
gravity.evolve_model(0.1 | units.Myr)
print(gravity.particles.center_of_mass())
print(test_sphere.center_of_mass())
gravity.stop() | _____no_output_____ | MIT | Amuse community codes.ipynb | rieder/exeter-amuse-tutorial |
Note that the original particles (`test_sphere`) were not modified, while those maintained by the code were (for performance reasons). Also, small numerical errors can arise at this point, the magnitude of which depends on the chosen converter units. To synchronise the particle sets, AMUSE uses "channels". These can copy the required data when needed, e.g. when synchronising changes in particle properties to other codes. | gravity, gravity_to_model = new_gravity(test_sphere)
print(gravity.particles.center_of_mass())
gravity.evolve_model(0.1 | units.Myr)
gravity_to_model.copy()
print(gravity.particles.center_of_mass())
print(test_sphere.center_of_mass())
gravity.stop() | _____no_output_____ | MIT | Amuse community codes.ipynb | rieder/exeter-amuse-tutorial |
Combining codes: gravity and stellar evolution In a simulation of a star cluster, we may want to combine several codes to address different parts of the problem: an N-body code for gravity, and a stellar evolution code. In the simplest case, these interact only via the stellar mass, which is changed over time by the stellar evolution code and then updated in the gravity code. | def new_evolution(particles):
evolution = SSE()
evolution.parameters.metallicity = 0.01
evolution.particles.add_particles(particles)
evolution_to_model = evolution.particles.new_channel_to(particles)
return evolution, evolution_to_model
evolution, evolution_to_model = new_evolution(test_sphere)
gravity, gravity_to_model = new_gravity(test_sphere)
model_to_gravity = test_sphere.new_channel_to(gravity.particles)
time = gravity.model_time
end_time = 1 | units.Myr
while time < end_time:
timestep = evolution.particles.time_step.min()
gravity.evolve_model(time+timestep/2)
evolution.evolve_model(time+timestep)
evolution_to_model.copy()
model_to_gravity.copy()
gravity.evolve_model(time+timestep)
time += timestep
print("Now at time %s." % gravity.model_time, end=" ")
print("The most massive star is now %s" % test_sphere.mass.max())
evolution.stop()
gravity.stop() | _____no_output_____ | MIT | Amuse community codes.ipynb | rieder/exeter-amuse-tutorial |
Data Science Academy - Python Fundamentals - Chapter 4 Download: http://github.com/dsacademybr | # Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version()) | Python version used in this Jupyter Notebook: 3.7.6
| MIT | Data Science Academy/Cap04/Notebooks/DSA-Python-Cap04-10-Enumerate.ipynb | srgbastos/Artificial-Intelligence |
Enumerate | # Creating a list
seq = ['a','b','c']
enumerate(seq)
list(enumerate(seq))
# Printing the values of a list with the enumerate() function and their respective indices
for indice, valor in enumerate(seq):
print (indice, valor)
for indice, valor in enumerate(seq):
if indice >= 2:
break
else:
print (valor)
lista = ['Marketing', 'Tecnologia', 'Business']
for i, item in enumerate(lista):
print(i, item)
for i, item in enumerate('Isso é uma string'):
print(i, item)
for i, item in enumerate(range(10)):
print(i, item) | 0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
| MIT | Data Science Academy/Cap04/Notebooks/DSA-Python-Cap04-10-Enumerate.ipynb | srgbastos/Artificial-Intelligence |
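Not shown in the cells above: `enumerate` also accepts an optional `start` argument, which is handy when you want the index to begin at a value other than 0.

seq = ['a', 'b', 'c']
# Start counting from 1 instead of 0
for indice, valor in enumerate(seq, start=1):
    print(indice, valor)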
Engineer features and convert time series data to images Imports & Settings To install `talib` with Python 3.7 follow [these](https://medium.com/@joelzhang/install-ta-lib-in-python-3-7-51219acacafb) instructions. | import warnings
warnings.filterwarnings('ignore')
from talib import (RSI, BBANDS, MACD,
NATR, WILLR, WMA,
EMA, SMA, CCI, CMO,
MACD, PPO, ROC,
ADOSC, ADX, MOM)
import seaborn as sns
import matplotlib.pyplot as plt
from statsmodels.regression.rolling import RollingOLS
import statsmodels.api as sm
import pandas_datareader.data as web
import pandas as pd
import numpy as np
from pathlib import Path
%matplotlib inline
DATA_STORE = '../data/assets.h5'
MONTH = 21
YEAR = 12 * MONTH
START = '2000-01-01'
END = '2017-12-31'
sns.set_style('whitegrid')
idx = pd.IndexSlice
T = [1, 5, 10, 21, 42, 63]
results_path = Path('results', 'cnn_for_trading')
if not results_path.exists():
results_path.mkdir(parents=True) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Loading Quandl Wiki Stock Prices & Meta Data | adj_ohlcv = ['adj_open', 'adj_close', 'adj_low', 'adj_high', 'adj_volume']
with pd.HDFStore(DATA_STORE) as store:
prices = (store['quandl/wiki/prices']
.loc[idx[START:END, :], adj_ohlcv]
.rename(columns=lambda x: x.replace('adj_', ''))
.swaplevel()
.sort_index()
.dropna())
metadata = (store['us_equities/stocks'].loc[:, ['marketcap', 'sector']])
ohlcv = prices.columns.tolist()
prices.volume /= 1e3
prices.index.names = ['symbol', 'date']
metadata.index.name = 'symbol' | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Rolling universe: pick 500 most-traded stocks | dollar_vol = prices.close.mul(prices.volume).unstack('symbol').sort_index()
years = sorted(np.unique([d.year for d in prices.index.get_level_values('date').unique()]))
train_window = 5 # years
universe_size = 500
universe = []
for i, year in enumerate(years[5:], 5):
start = str(years[i-5])
end = str(years[i])
most_traded = dollar_vol.loc[start:end, :].dropna(thresh=1000, axis=1).median().nlargest(universe_size).index
universe.append(prices.loc[idx[most_traded, start:end], :])
universe = pd.concat(universe)
universe = universe.loc[~universe.index.duplicated()]
universe.info(null_counts=True)
universe.groupby('symbol').size().describe()
universe.to_hdf('data.h5', 'universe') | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Generate Technical Indicators Factors | T = list(range(6, 21)) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Relative Strength Index | for t in T:
universe[f'{t:02}_RSI'] = universe.groupby(level='symbol').close.apply(RSI, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
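For reference, the relative strength index over an $n$-period lookback is commonly defined as (Wilder's formulation, stated here for context rather than taken from this repository):

$$RSI_n = 100 - \frac{100}{1 + RS_n}, \qquad RS_n = \frac{\text{average gain over the last } n \text{ periods}}{\text{average loss over the last } n \text{ periods}}$$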
Williams %R | for t in T:
universe[f'{t:02}_WILLR'] = (universe.groupby(level='symbol', group_keys=False)
.apply(lambda x: WILLR(x.high, x.low, x.close, timeperiod=t))) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
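For context, Williams %R compares the close to the recent $n$-period trading range and is bounded between $-100$ and $0$ (standard definition, added for reference):

$$\%R_n = \frac{\text{HighestHigh}_n - \text{Close}}{\text{HighestHigh}_n - \text{LowestLow}_n} \times (-100)$$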
Compute Bollinger Bands | def compute_bb(close, timeperiod):
high, mid, low = BBANDS(close, timeperiod=timeperiod)
return pd.DataFrame({f'{timeperiod:02}_BBH': high, f'{timeperiod:02}_BBL': low}, index=close.index)
for t in T:
bbh, bbl = f'{t:02}_BBH', f'{t:02}_BBL'
universe = (universe.join(
universe.groupby(level='symbol').close.apply(compute_bb,
timeperiod=t)))
universe[bbh] = universe[bbh].sub(universe.close).div(universe[bbh]).apply(np.log1p)
universe[bbl] = universe.close.sub(universe[bbl]).div(universe.close).apply(np.log1p) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
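For reference, with its default settings TA-Lib's `BBANDS` builds the bands from an $n$-period simple moving average plus or minus two rolling standard deviations; the cell above then stores log-scaled relative distances of the close to the upper and lower bands rather than the raw band levels:

$$\text{mid}_n = SMA_n(\text{close}), \qquad \text{upper}_n = \text{mid}_n + 2\sigma_n, \qquad \text{lower}_n = \text{mid}_n - 2\sigma_n$$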
Normalized Average True Range | for t in T:
universe[f'{t:02}_NATR'] = universe.groupby(level='symbol',
group_keys=False).apply(lambda x:
NATR(x.high, x.low, x.close, timeperiod=t)) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
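The normalized average true range simply rescales the average true range by the closing price so that values are comparable across stocks trading at different price levels (standard definition, for reference):

$$NATR_n = 100 \times \frac{ATR_n}{\text{Close}}$$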
Percentage Price Oscillator | for t in T:
universe[f'{t:02}_PPO'] = universe.groupby(level='symbol').close.apply(PPO, fastperiod=t, matype=1) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
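With `matype=1` the moving averages are exponential, and the percentage price oscillator expresses the spread between the fast and slow averages relative to the slow one (the slow period is left at TA-Lib's default here, since only `fastperiod` is passed):

$$PPO = 100 \times \frac{EMA_{\text{fast}} - EMA_{\text{slow}}}{EMA_{\text{slow}}}$$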
Moving Average Convergence/Divergence | def compute_macd(close, signalperiod):
macd = MACD(close, signalperiod=signalperiod)[0]
return (macd - np.mean(macd))/np.std(macd)
for t in T:
universe[f'{t:02}_MACD'] = (universe
.groupby('symbol', group_keys=False)
.close
.apply(compute_macd, signalperiod=t)) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Momentum | for t in T:
universe[f'{t:02}_MOM'] = universe.groupby(level='symbol').close.apply(MOM, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
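Momentum is simply the difference between the current price and the price $n$ periods earlier (standard definition, shown for reference):

$$MOM_n = P_t - P_{t-n}$$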
Weighted Moving Average | for t in T:
universe[f'{t:02}_WMA'] = universe.groupby(level='symbol').close.apply(WMA, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Exponential Moving Average | for t in T:
universe[f'{t:02}_EMA'] = universe.groupby(level='symbol').close.apply(EMA, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Commodity Channel Index | for t in T:
universe[f'{t:02}_CCI'] = (universe.groupby(level='symbol', group_keys=False)
.apply(lambda x: CCI(x.high, x.low, x.close, timeperiod=t))) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
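For reference, the commodity channel index measures how far the typical price $TP = (H + L + C) / 3$ deviates from its own $n$-period moving average, scaled by the mean absolute deviation (standard definition):

$$CCI_n = \frac{TP_t - SMA_n(TP)}{0.015 \times \text{MeanDeviation}_n(TP)}$$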
Chande Momentum Oscillator | for t in T:
universe[f'{t:02}_CMO'] = universe.groupby(level='symbol').close.apply(CMO, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
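The Chande momentum oscillator relates the sum of gains to the sum of losses over the lookback window and is bounded between $-100$ and $100$ (standard definition, for reference):

$$CMO_n = 100 \times \frac{S_{\text{up}} - S_{\text{down}}}{S_{\text{up}} + S_{\text{down}}}$$

where $S_{\text{up}}$ and $S_{\text{down}}$ are the sums of up-move and down-move price changes over the last $n$ periods.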
Rate of Change Rate of change is a technical indicator that illustrates the speed of price change over a period of time. | for t in T:
universe[f'{t:02}_ROC'] = universe.groupby(level='symbol').close.apply(ROC, timeperiod=t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
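Concretely, the $n$-period rate of change expresses the price change as a percentage (standard definition, for reference):

$$ROC_n = 100 \times \left(\frac{P_t}{P_{t-n}} - 1\right)$$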
Chaikin A/D Oscillator | for t in T:
universe[f'{t:02}_ADOSC'] = (universe.groupby(level='symbol', group_keys=False)
.apply(lambda x: ADOSC(x.high, x.low, x.close, x.volume, fastperiod=t-3, slowperiod=4+t))) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Average Directional Movement Index | for t in T:
universe[f'{t:02}_ADX'] = universe.groupby(level='symbol',
group_keys=False).apply(lambda x:
ADX(x.high, x.low, x.close, timeperiod=t))
universe.drop(ohlcv, axis=1).to_hdf('data.h5', 'features') | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Compute Historical Returns Historical Returns | by_sym = universe.groupby(level='symbol').close
for t in [1,5]:
universe[f'r{t:02}'] = by_sym.pct_change(t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Remove outliers | universe[[f'r{t:02}' for t in [1, 5]]].describe()
outliers = universe[universe.r01>1].index.get_level_values('symbol').unique()
len(outliers)
universe = universe.drop(outliers, level='symbol') | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Historical return quantiles | for t in [1, 5]:
universe[f'r{t:02}dec'] = (universe[f'r{t:02}'].groupby(level='date')
.apply(lambda x: pd.qcut(x, q=10, labels=False, duplicates='drop'))) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Rolling Factor Betas | factor_data = (web.DataReader('F-F_Research_Data_5_Factors_2x3_daily', 'famafrench',
start=START)[0].rename(columns={'Mkt-RF': 'Market'}))
factor_data.index.names = ['date']
factor_data.info()
windows = list(range(15, 90, 5))
len(windows)
t = 1
ret = f'r{t:02}'
factors = ['Market', 'SMB', 'HML', 'RMW', 'CMA']
windows = list(range(15, 90, 5))
for window in windows:
print(window)
betas = []
for symbol, data in universe.groupby(level='symbol'):
model_data = data[[ret]].merge(factor_data, on='date').dropna()
model_data[ret] -= model_data.RF
rolling_ols = RollingOLS(endog=model_data[ret],
exog=sm.add_constant(model_data[factors]), window=window)
factor_model = rolling_ols.fit(params_only=True).params.drop('const', axis=1)
result = factor_model.assign(symbol=symbol).set_index('symbol', append=True)
betas.append(result)
betas = pd.concat(betas).rename(columns=lambda x: f'{window:02}_{x}')
universe = universe.join(betas) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
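In equation form, the rolling regression above fits, for each stock $i$ and each window length, the five-factor model on daily excess returns (this restates what the code does; the RF column is subtracted from the return before fitting):

$$r_{i,t} - rf_t = \alpha_i + \beta_i^{Mkt}\,\text{Market}_t + \beta_i^{SMB}\,SMB_t + \beta_i^{HML}\,HML_t + \beta_i^{RMW}\,RMW_t + \beta_i^{CMA}\,CMA_t + \varepsilon_{i,t}$$

Only the estimated factor betas are kept as features; the intercept (`const`) is dropped.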
Compute Forward Returns | for t in [1, 5]:
universe[f'r{t:02}_fwd'] = universe.groupby(level='symbol')[f'r{t:02}'].shift(-t)
universe[f'r{t:02}dec_fwd'] = universe.groupby(level='symbol')[f'r{t:02}dec'].shift(-t) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Store Model Data | universe = universe.drop(ohlcv, axis=1)
universe.info(null_counts=True)
drop_cols = ['r01', 'r01dec', 'r05', 'r05dec']
outcomes = universe.filter(like='_fwd').columns
universe = universe.sort_index()
with pd.HDFStore('data.h5') as store:
store.put('features', universe.drop(drop_cols, axis=1).drop(outcomes, axis=1).loc[idx[:, '2001':], :])
store.put('targets', universe.loc[idx[:, '2001':], outcomes]) | _____no_output_____ | MIT | 18_convolutional_neural_nets/05_engineer_cnn_features.ipynb | driscolljt/Machine-Learning-for-Algorithmic-Trading-Second-Edition_Original |
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker _Deep Learning Nanodegree Program | Deployment_ --- Now that we have a basic understanding of how SageMaker works, we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model, which will predict the sentiment of the entered review. Instructions: Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell. > **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. General Outline: Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.

For this project, you will be following the steps in the general outline with some modifications. First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward. In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app. | # Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0 | Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.72.0)
Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.11.4)
Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.16)
Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1)
Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5)
Requirement already satisfied: smdebug-rulesconfig==0.1.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.4)
Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.4.0)
Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.1)
Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.18.1)
Requirement already satisfied: botocore<1.21.0,>=1.20.16 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.16)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.16->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.16->boto3>=1.14.12->sagemaker==1.72.0) (1.25.10)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (2.2.0)
Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3)
Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.6)
Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (1.14.0)
Requirement already satisfied: setuptools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (45.2.0.post20200210)
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 1: Downloading the data As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/) > Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011. | %mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data | mkdir: cannot create directory ‘../data’: File exists
--2021-03-07 19:37:15-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 23.8MB/s in 4.6s
2021-03-07 19:37:20 (17.5 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 2: Preparing and Processing the data Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set. | import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg']))) | IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records. | from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return a unified training data, test data, training labels, test labets
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) | IMDb reviews (combined): train = 25000, test = 25000
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly. | print(train_X[100])
print(train_y[100]) | Think of this pilot as "Hawaii Five-O Lite". It's set in Hawaii, it's an action/adventure crime drama, lots of scenes feature boats and palm trees and polyester fabrics and garish shirts...it even stars the character actor "Zulu" in a supporting role. Oh, there are some minor differences - Roy Thinnes is supposed to be some front-line undercover agent, and the supporting cast is much smaller (and less interesting), but basically the atmosphere is still the same. Problem is, "Hawaii Five-O" (another QM product) already existed at the time and had run for years. It filled the market demand for Hawaii-based crime dramas quite adequately. Code Name: Diamond Head may have been intended as the hier to H50 as the older series eventually dwindled away...but it comes across as a superfluous, 2nd rate copy. It doesn't suck, but it's completely derivative and doesn't do anything as well as the original.<br /><br />There is some decent acting talent involved here. Thinnes is an old pro, and he gives the role his best shot, and he isn't bad. But Thinnes is only as good as his material and his director. Ian McShane is in here as an evil spy master named "Tree", and McShane tends to be the most interesting actor in any scene he appears in. But he's phoning his part in here. Frances Ngyuen is reasonably exotic looking, but her astounding skinniness, opaque features, thick accent and wooden delivery aren't the stuff of which dreams are made. Relying on her to supply the 'romantic interest' for Thinnes was probably the series' biggest mistake. At least for for a series aimed at white audiences brought up with Marsha Brady and Peggy Lee as our love goddesses. Give her another 30 lbs and a year with a dialog/voice coach, and she might cut it. Zulu is, well, his usual self - enjoyable in bit parts, but he isn't a person who can carry a feature by himself. <br /><br />In addition, the plot and dialog are strictly by-the-numbers, with nothing to distinguish them from any other Quinn Martin production. And by this point, the American TV audience had seen a whoooole lot of QM productions....I think "CN: DH" was one too many, and it sank without a trace. It wasn't the really the actors' fault, and I hope they walked away from this with a decent paycheck and one more entry on their C.V.s. <br /><br />MST3000 revived this for their treatment in their sixth season, and they had a lot of good natured fun with it. Worth seeking out in that version if you enjoy the MST approach to movie japery and lampoon, but I can't imagine anyone caring about this pilot for any other reason.
0
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
The first step in processing the reviews is to remove any HTML tags that appear. In addition, we wish to tokenize our input so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis. | import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set. | # TODO: Apply review_to_words to a review (train_X[100] or any other review)
print('Original review:')
print(train_X[100])
print('Tokenized review:')
print(review_to_words(train_X[100])) | Original review:
Think of this pilot as "Hawaii Five-O Lite". It's set in Hawaii, it's an action/adventure crime drama, lots of scenes feature boats and palm trees and polyester fabrics and garish shirts...it even stars the character actor "Zulu" in a supporting role. Oh, there are some minor differences - Roy Thinnes is supposed to be some front-line undercover agent, and the supporting cast is much smaller (and less interesting), but basically the atmosphere is still the same. Problem is, "Hawaii Five-O" (another QM product) already existed at the time and had run for years. It filled the market demand for Hawaii-based crime dramas quite adequately. Code Name: Diamond Head may have been intended as the hier to H50 as the older series eventually dwindled away...but it comes across as a superfluous, 2nd rate copy. It doesn't suck, but it's completely derivative and doesn't do anything as well as the original.<br /><br />There is some decent acting talent involved here. Thinnes is an old pro, and he gives the role his best shot, and he isn't bad. But Thinnes is only as good as his material and his director. Ian McShane is in here as an evil spy master named "Tree", and McShane tends to be the most interesting actor in any scene he appears in. But he's phoning his part in here. Frances Ngyuen is reasonably exotic looking, but her astounding skinniness, opaque features, thick accent and wooden delivery aren't the stuff of which dreams are made. Relying on her to supply the 'romantic interest' for Thinnes was probably the series' biggest mistake. At least for for a series aimed at white audiences brought up with Marsha Brady and Peggy Lee as our love goddesses. Give her another 30 lbs and a year with a dialog/voice coach, and she might cut it. Zulu is, well, his usual self - enjoyable in bit parts, but he isn't a person who can carry a feature by himself. <br /><br />In addition, the plot and dialog are strictly by-the-numbers, with nothing to distinguish them from any other Quinn Martin production. And by this point, the American TV audience had seen a whoooole lot of QM productions....I think "CN: DH" was one too many, and it sank without a trace. It wasn't the really the actors' fault, and I hope they walked away from this with a decent paycheck and one more entry on their C.V.s. <br /><br />MST3000 revived this for their treatment in their sixth season, and they had a lot of good natured fun with it. Worth seeking out in that version if you enjoy the MST approach to movie japery and lampoon, but I can't imagine anyone caring about this pilot for any other reason.
Tokenized review:
['think', 'pilot', 'hawaii', 'five', 'lite', 'set', 'hawaii', 'action', 'adventur', 'crime', 'drama', 'lot', 'scene', 'featur', 'boat', 'palm', 'tree', 'polyest', 'fabric', 'garish', 'shirt', 'even', 'star', 'charact', 'actor', 'zulu', 'support', 'role', 'oh', 'minor', 'differ', 'roy', 'thinn', 'suppos', 'front', 'line', 'undercov', 'agent', 'support', 'cast', 'much', 'smaller', 'less', 'interest', 'basic', 'atmospher', 'still', 'problem', 'hawaii', 'five', 'anoth', 'qm', 'product', 'alreadi', 'exist', 'time', 'run', 'year', 'fill', 'market', 'demand', 'hawaii', 'base', 'crime', 'drama', 'quit', 'adequ', 'code', 'name', 'diamond', 'head', 'may', 'intend', 'hier', 'h50', 'older', 'seri', 'eventu', 'dwindl', 'away', 'come', 'across', 'superflu', '2nd', 'rate', 'copi', 'suck', 'complet', 'deriv', 'anyth', 'well', 'origin', 'decent', 'act', 'talent', 'involv', 'thinn', 'old', 'pro', 'give', 'role', 'best', 'shot', 'bad', 'thinn', 'good', 'materi', 'director', 'ian', 'mcshane', 'evil', 'spi', 'master', 'name', 'tree', 'mcshane', 'tend', 'interest', 'actor', 'scene', 'appear', 'phone', 'part', 'franc', 'ngyuen', 'reason', 'exot', 'look', 'astound', 'skinni', 'opaqu', 'featur', 'thick', 'accent', 'wooden', 'deliveri', 'stuff', 'dream', 'made', 'reli', 'suppli', 'romant', 'interest', 'thinn', 'probabl', 'seri', 'biggest', 'mistak', 'least', 'seri', 'aim', 'white', 'audienc', 'brought', 'marsha', 'bradi', 'peggi', 'lee', 'love', 'goddess', 'give', 'anoth', '30', 'lb', 'year', 'dialog', 'voic', 'coach', 'might', 'cut', 'zulu', 'well', 'usual', 'self', 'enjoy', 'bit', 'part', 'person', 'carri', 'featur', 'addit', 'plot', 'dialog', 'strictli', 'number', 'noth', 'distinguish', 'quinn', 'martin', 'product', 'point', 'american', 'tv', 'audienc', 'seen', 'whooool', 'lot', 'qm', 'product', 'think', 'cn', 'dh', 'one', 'mani', 'sank', 'without', 'trace', 'realli', 'actor', 'fault', 'hope', 'walk', 'away', 'decent', 'paycheck', 'one', 'entri', 'c', 'v', 'mst3000', 'reviv', 'treatment', 'sixth', 'season', 'lot', 'good', 'natur', 'fun', 'worth', 'seek', 'version', 'enjoy', 'mst', 'approach', 'movi', 'japeri', 'lampoon', 'imagin', 'anyon', 'care', 'pilot', 'reason']
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
**Question:** Above we mentioned that the `review_to_words` method removes HTML formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input? **Answer:** The function also removes stopwords: articles, connectives, common verbs like "to be", possessives and other grammatical words that are not relevant for detecting sentiment in the sentence. Additionally, it converts the text to lower case, strips non-alphanumeric characters and splits the review into a list of word tokens. The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition, it caches the results. This is because performing this processing step can take a long time. This way, if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time. | import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) | Read preprocessed data from cache file: preprocessed_data.pkl
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Transform the data In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`. Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews. (TODO) Create a word dictionary To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model. > **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'. | import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
# Solution:
for sentence in data:
for word in sentence:
word_count[word]=word_count.get(word,0)+1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words = None
# Solution:
sorted_words = sorted(word_count, key=word_count.get, reverse=True)
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set? **Answer:** The most common tokenized words appearing in the training set are 'movi', 'film', 'one', 'like' and 'time'. The first two are quite obvious: _movies_ and _films_ are the topic of the reviews. The other three are frequent in English: _one_ can be used to avoid repeating the movie name, _like_ can appear in both positive and negative reviews, and _time_ is simply a common word. | # TODO: Use this space to determine the five most frequently appearing words in the training set.
list(word_dict)[0:5] | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Save `word_dict` Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use. | data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Transform the reviews Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`. | def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
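It can also help to see `convert_and_pad` act on a single made-up sentence: known words map through `word_dict`, unknown words become `1`, and the tail is padded with `0` up to the fixed length (the token 'qwerty' below is assumed not to be in the vocabulary):

```python
# Illustrative call on a hand-made token list.
toy_sentence = ['movi', 'great', 'qwerty']
toy_encoded, toy_len = convert_and_pad(word_dict, toy_sentence, pad=10)

print(toy_encoded)   # e.g. [2, 83, 1, 0, 0, 0, 0, 0, 0, 0] -- unknown word -> 1, padding -> 0
print(toy_len)       # 3, the unpadded length of the sentence
```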
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set? | # Use this cell to examine one of the processed reviews to make sure everything is working as intended.
n_sample=15
print(train_X[n_sample])
print(len(train_X[n_sample])) | [ 641 4 174 2 56 47 8 175 2663 168 2 19 5 1
632 341 154 4 1 1 349 977 82 1108 134 60 3756 1
189 111 1408 17 320 13 672 2529 501 1 551 1 1 85
318 52 1632 1 1438 1 3416 85 3441 258 718 296 1 130
31 82 7 25 892 496 212 214 91 51 56 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
500
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem? **Answer:** It is important to apply the same functions, built from the training data, to both sets so that there is no misalignment in the encoding; a given word must map to the same integer in training and in testing. Step 3: Upload the data to S3. As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on. Save the processed training dataset locally. It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review. | import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
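Since the training script will later have to parse this file without a header, a quick read-back makes the `label, length, review[500]` layout explicit; this check is optional and only reads a few rows:

```python
# Sanity-check the saved CSV: each row should be 1 label + 1 length + 500 word ids = 502 columns.
check_df = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=5)
print(check_df.shape)                 # expected: (5, 502)
print(check_df.iloc[0, :2].values)    # the first row's label and review length
```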
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model. | import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory. Step 4: Build and Train the PyTorch ModelIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects - Model Artifacts, - Training Code, and - Inference Code, each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below. | !pygmentize train/model.py | [34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36mnn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mclass[39;49;00m [04m[32mLSTMClassifier[39;49;00m(nn.Module):
[33m"""[39;49;00m
[33m This is the simple RNN model we will be using to perform Sentiment Analysis.[39;49;00m
[33m """[39;49;00m
[34mdef[39;49;00m [32m__init__[39;49;00m([36mself[39;49;00m, embedding_dim, hidden_dim, vocab_size):
[33m"""[39;49;00m
[33m Initialize the model by settingg up the various layers.[39;49;00m
[33m """[39;49;00m
[36msuper[39;49;00m(LSTMClassifier, [36mself[39;49;00m).[32m__init__[39;49;00m()
[36mself[39;49;00m.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=[34m0[39;49;00m)
[36mself[39;49;00m.lstm = nn.LSTM(embedding_dim, hidden_dim)
[36mself[39;49;00m.dense = nn.Linear(in_features=hidden_dim, out_features=[34m1[39;49;00m)
[36mself[39;49;00m.sig = nn.Sigmoid()
[36mself[39;49;00m.word_dict = [34mNone[39;49;00m
[34mdef[39;49;00m [32mforward[39;49;00m([36mself[39;49;00m, x):
[33m"""[39;49;00m
[33m Perform a forward pass of our model on some input.[39;49;00m
[33m """[39;49;00m
x = x.t()
lengths = x[[34m0[39;49;00m,:]
reviews = x[[34m1[39;49;00m:,:]
embeds = [36mself[39;49;00m.embedding(reviews)
lstm_out, _ = [36mself[39;49;00m.lstm(embeds)
out = [36mself[39;49;00m.dense(lstm_out)
out = out[lengths - [34m1[39;49;00m, [36mrange[39;49;00m([36mlen[39;49;00m(lengths))]
[34mreturn[39;49;00m [36mself[39;49;00m.sig(out.squeeze())
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving. | import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
(TODO) Writing the training methodNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later. | def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
# Reference https://towardsdatascience.com/lstm-text-classification-using-pytorch-2c6c657f8fc0
# Solution:
optimizer.zero_grad()
output = model(batch_X)
loss=loss_fn(output, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader))) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose. | import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device) | Epoch: 1, BCELoss: 0.6889122724533081
Epoch: 2, BCELoss: 0.6780008792877197
Epoch: 3, BCELoss: 0.6685242891311646
Epoch: 4, BCELoss: 0.6583548784255981
Epoch: 5, BCELoss: 0.6465497970581054
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
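The loop above only reports the training BCE loss. If you also want a rough accuracy figure inside the notebook, an evaluation pass with gradients disabled can be written in the same style; this is a sketch that reuses the small sample loader, since no separate validation loader is built here:

```python
def evaluate(model, data_loader, loss_fn, device):
    """Average BCE loss and accuracy over `data_loader`, without updating any weights."""
    model.eval()
    total_loss, correct, count = 0.0, 0, 0
    with torch.no_grad():
        for batch_X, batch_y in data_loader:
            batch_X, batch_y = batch_X.to(device), batch_y.to(device)
            output = model(batch_X)
            total_loss += loss_fn(output, batch_y).item()
            correct += (output.round() == batch_y).sum().item()
            count += batch_y.size(0)
    return total_loss / len(data_loader), correct / count

eval_loss, eval_acc = evaluate(model, train_sample_dl, loss_fn, device)
print("BCELoss: {:.4f}, accuracy: {:.3f}".format(eval_loss, eval_acc))
```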
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run. (TODO) Training the modelWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file. | from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
py_version="py3",
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data}) | 'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
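For reference, the values in the `hyperparameters` dict above reach the container as command-line arguments (along with `SM_*` environment variables that point at the model and data directories). The provided `train/train.py` already handles this; the sketch below only illustrates the parsing pattern, and the argument names are the ones used in this notebook:

```python
# Illustrative sketch of how a SageMaker training script picks up hyperparameters.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--hidden_dim', type=int, default=100)
parser.add_argument('--embedding_dim', type=int, default=32)
parser.add_argument('--vocab_size', type=int, default=5000)
parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', '.'))
parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING', '.'))

args, _ = parser.parse_known_args()    # parse_known_args keeps this runnable outside the container too
print(args.epochs, args.hidden_dim)    # inside the training job these would be 10 and 200
```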
Step 5: Testing the model. As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testing. Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this. There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made. **NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`). Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is. **NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for. In other words, **if you are no longer using a deployed endpoint, shut it down!** **TODO:** Deploy the trained model. | # TODO: Deploy the trained model
# Solution:
# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = estimator.deploy(instance_type='ml.m4.xlarge',
initial_instance_count=1)
| Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 7 - Use the model for testingOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is. | test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
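Accuracy alone can hide asymmetric behaviour between the two classes; the IMDb test set is balanced, so it is usually sufficient here, but a fuller per-class report is a single extra line with scikit-learn:

```python
# Optional: per-class precision, recall and F1 for the deployed model's predictions.
from sklearn.metrics import classification_report
print(classification_report(test_y, predictions, target_names=['negative', 'positive']))
```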
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:** The PyTorch model achieved quite good results compared with the XGBoost model. The advantage of the PyTorch model is that the architecture can be tailored specifically to the application, whereas the advantage of XGBoost is that it is ready to use out of the box. Since the PyTorch model is tailored to the application (an LSTM can exploit word order, which a bag-of-words model cannot), it should perform better for sentiment analysis. (TODO) More testing. We now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model. | test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.' | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
The question we now need to answer is, how do we send this review to our model? Recall that in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews: we removed any html tags and stemmed the input, and we encoded the review as a sequence of integers using `word_dict`. In order to process the review we will need to repeat these two steps. **TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`. | # TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data=[]
test_data, test_data_len = convert_and_pad_data(word_dict, [review_to_words(test_review)])
test_data_full = pd.concat([pd.DataFrame(test_data_len), pd.DataFrame(test_data)], axis=1)
print(test_data_full)
len(test_data_full) | 0 0 1 2 3 4 5 6 7 8 ... 490 491 492 493 \
0 20 1 1376 49 53 3 4 878 173 392 ... 0 0 0 0
494 495 496 497 498 499
0 0 0 0 0 0 0
[1 rows x 501 columns]
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review. | predict(test_data_full.values) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it. | estimator.delete_endpoint() | estimator.delete_endpoint() will be deprecated in SageMaker Python SDK v2. Please use the delete_endpoint() function on your predictor instead.
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Step 6 (again) - Deploy the model for the web appNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use. - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model. - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code. - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint. - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize. (TODO) Writing inference codeBefore writing our custom inference code, we will begin by taking a look at the code which has been provided. | !pygmentize serve/predict.py | [34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mimport[39;49;00m [04m[36mjson[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mpickle[39;49;00m
[34mimport[39;49;00m [04m[36msys[39;49;00m
[34mimport[39;49;00m [04m[36msagemaker_containers[39;49;00m
[34mimport[39;49;00m [04m[36mpandas[39;49;00m [34mas[39;49;00m [04m[36mpd[39;49;00m
[34mimport[39;49;00m [04m[36mnumpy[39;49;00m [34mas[39;49;00m [04m[36mnp[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36mnn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36moptim[39;49;00m [34mas[39;49;00m [04m[36moptim[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36mutils[39;49;00m[04m[36m.[39;49;00m[04m[36mdata[39;49;00m
[34mfrom[39;49;00m [04m[36mmodel[39;49;00m [34mimport[39;49;00m LSTMClassifier
[34mfrom[39;49;00m [04m[36mutils[39;49;00m [34mimport[39;49;00m review_to_words, convert_and_pad
[34mdef[39;49;00m [32mmodel_fn[39;49;00m(model_dir):
[33m"""Load the PyTorch model from the `model_dir` directory."""[39;49;00m
[36mprint[39;49;00m([33m"[39;49;00m[33mLoading model.[39;49;00m[33m"[39;49;00m)
[37m# First, load the parameters used to create the model.[39;49;00m
model_info = {}
model_info_path = os.path.join(model_dir, [33m'[39;49;00m[33mmodel_info.pth[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(model_info_path, [33m'[39;49;00m[33mrb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model_info = torch.load(f)
[36mprint[39;49;00m([33m"[39;49;00m[33mmodel_info: [39;49;00m[33m{}[39;49;00m[33m"[39;49;00m.format(model_info))
[37m# Determine the device and construct the model.[39;49;00m
device = torch.device([33m"[39;49;00m[33mcuda[39;49;00m[33m"[39;49;00m [34mif[39;49;00m torch.cuda.is_available() [34melse[39;49;00m [33m"[39;49;00m[33mcpu[39;49;00m[33m"[39;49;00m)
model = LSTMClassifier(model_info[[33m'[39;49;00m[33membedding_dim[39;49;00m[33m'[39;49;00m], model_info[[33m'[39;49;00m[33mhidden_dim[39;49;00m[33m'[39;49;00m], model_info[[33m'[39;49;00m[33mvocab_size[39;49;00m[33m'[39;49;00m])
[37m# Load the store model parameters.[39;49;00m
model_path = os.path.join(model_dir, [33m'[39;49;00m[33mmodel.pth[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(model_path, [33m'[39;49;00m[33mrb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model.load_state_dict(torch.load(f))
[37m# Load the saved word_dict.[39;49;00m
word_dict_path = os.path.join(model_dir, [33m'[39;49;00m[33mword_dict.pkl[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(word_dict_path, [33m'[39;49;00m[33mrb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model.word_dict = pickle.load(f)
model.to(device).eval()
[36mprint[39;49;00m([33m"[39;49;00m[33mDone loading model.[39;49;00m[33m"[39;49;00m)
[34mreturn[39;49;00m model
[34mdef[39;49;00m [32minput_fn[39;49;00m(serialized_input_data, content_type):
[36mprint[39;49;00m([33m'[39;49;00m[33mDeserializing the input data.[39;49;00m[33m'[39;49;00m)
[34mif[39;49;00m content_type == [33m'[39;49;00m[33mtext/plain[39;49;00m[33m'[39;49;00m:
data = serialized_input_data.decode([33m'[39;49;00m[33mutf-8[39;49;00m[33m'[39;49;00m)
[34mreturn[39;49;00m data
[34mraise[39;49;00m [36mException[39;49;00m([33m'[39;49;00m[33mRequested unsupported ContentType in content_type: [39;49;00m[33m'[39;49;00m + content_type)
[34mdef[39;49;00m [32moutput_fn[39;49;00m(prediction_output, accept):
[36mprint[39;49;00m([33m'[39;49;00m[33mSerializing the generated output:[39;49;00m[33m'[39;49;00m)
[34mreturn[39;49;00m [36mstr[39;49;00m(prediction_output)
[34mdef[39;49;00m [32mpredict_fn[39;49;00m(input_data, model):
[36mprint[39;49;00m([33m'[39;49;00m[33mInferring sentiment of input data.[39;49;00m[33m'[39;49;00m)
device = torch.device([33m"[39;49;00m[33mcuda[39;49;00m[33m"[39;49;00m [34mif[39;49;00m torch.cuda.is_available() [34melse[39;49;00m [33m"[39;49;00m[33mcpu[39;49;00m[33m"[39;49;00m)
[34mif[39;49;00m model.word_dict [35mis[39;49;00m [34mNone[39;49;00m:
[34mraise[39;49;00m [36mException[39;49;00m([33m'[39;49;00m[33mModel has not been loaded properly, no word_dict.[39;49;00m[33m'[39;49;00m)
[37m# TODO: Process input_data so that it is ready to be sent to our model.[39;49;00m
[37m# You should produce two variables:[39;49;00m
[37m# data_X - A sequence of length 500 which represents the converted review[39;49;00m
[37m# data_len - The length of the review[39;49;00m
data_X = [34mNone[39;49;00m
data_len = [34mNone[39;49;00m
[37m# SOLUTION:[39;49;00m
data_X, data_len = convert_and_pad(model.word_dict, review_to_words(input_data))
[37m# Using data_X and data_len we construct an appropriate input tensor. Remember[39;49;00m
[37m# that our model expects input data of the form 'len, review[500]'.[39;49;00m
data_pack = np.hstack((data_len, data_X))
data_pack = data_pack.reshape([34m1[39;49;00m, -[34m1[39;49;00m)
data = torch.from_numpy(data_pack)
data = data.to(device)
[37m# Make sure to put the model into evaluation mode[39;49;00m
model.eval()
[37m# TODO: Compute the result of applying the model to the input data. The variable `result` should[39;49;00m
[37m# be a numpy array which contains a single integer which is either 1 or 0[39;49;00m
result = [34mNone[39;49;00m
[37m# Solution:[39;49;00m
result = [36mround[39;49;00m(model(data).item())
[34mreturn[39;49;00m result
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple, so your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory. **TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file. Deploying the model. Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container. **NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data. | from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') | Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
| MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Testing the modelNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long and so testing the entire data set would be prohibitive. | import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
results.append(float(predictor.predict(review_input)))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
As an additional test, we can try sending the `test_review` that we looked at earlier. | predictor.predict(test_review) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for the web app> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and recieve data from a SageMaker endpoint.Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function. Setting up a Lambda functionThe first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result. Part A: Create an IAM Role for the Lambda functionSince we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**. Part B: Create a Lambda functionNow it is time to actually create the Lambda function.Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. 
Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**. On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.

```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3

def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**',  # The name of the endpoint we created
                                       ContentType = 'text/plain',               # The data format that is expected
                                       Body = event['body'])                     # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```

Once you have copy and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below. | predictor.endpoint | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API GatewayNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**. Step 4: Deploying our web appNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.**TODO:** Make sure that you include the edited `index.html` file in your project submission. 
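Besides the provided `index.html`, you can exercise the public API directly from Python. The sketch below assumes `api_url` holds the Invoke URL you copied from API Gateway; the value shown is only a placeholder:

```python
# Hypothetical check of the public API (replace api_url with your own Invoke URL).
import requests

api_url = 'https://<your-api-id>.execute-api.<your-region>.amazonaws.com/prod'   # placeholder
response = requests.post(api_url, data=test_review.encode('utf-8'),
                         headers={'Content-Type': 'text/plain'})
print(response.text)   # the Lambda function returns the raw sentiment score as plain text
```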
Now that your web app is working, try playing around with it and see how well it works. **Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review? **Answer:** Review entered: "The special effects are magnificent. The ships look so real and credible. Would see it a thousand times." Result: Your review was POSITIVE! Delete the endpoint. Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running, so if you forget and leave it on you could end up with an unexpectedly large bill. | predictor.delete_endpoint() | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | simonmijares/Sagemaker |
Submission Instructions | # Now click the 'Submit Assignment' button above. | _____no_output_____ | MIT | Informatics/Deep Learning/TensorFlow - deeplearning.ai/2. CNN/utf-8''Exercise_4_Multi_class_classifier_Question-FINAL.ipynb | MarcosSalib/Cocktail_MOOC |
When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners. | %%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000); | _____no_output_____ | MIT | Informatics/Deep Learning/TensorFlow - deeplearning.ai/2. CNN/utf-8''Exercise_4_Multi_class_classifier_Question-FINAL.ipynb | MarcosSalib/Cocktail_MOOC |
`Microstripline` object in the `structure` module. Analytical modeling of a microstrip line in scikit-microwave-design. In this file, we show how the `scikit-microwave-design` library can be used to implement and analyze basic microstrip line structures. Defining a microstrip line in `skmd`: there are two ways in which we can define a microstrip line (msl). 1. We define the msl width and then compute its characteristic impedance from the analytical formulation. 2. We define the characteristic impedance of the msl and then compute the physical dimension that gives the desired characteristic impedance. In both methods, the effective dielectric constant is a function of the msl width, in addition to the substrate thickness and the substrate dielectric constant. And since the effective dielectric constant is one of the determining factors of the propagation constant, the propagation constant also depends indirectly on the characteristic impedance. In `skmd` you can define a microstrip line by either method. | import numpy as np
import skmd as md
import matplotlib.pyplot as plt
### Define frequency
pts_freq = 1000
freq = np.linspace(1e9,3e9,pts_freq)
omega = 2*np.pi*freq
#### define substrate
epsilon_r = 10.8 # dielectric constant or the effective dielectric constant
h_subs = 1.27*md.MILLI # meters.
| _____no_output_____ | BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
1. Defining msl with characteristic impedance. | msl1 = md.structure.Microstripline(er=epsilon_r,h=h_subs,Z0=93,text_tag='Line-abc')
| ============
Defining Line-abc
Line-abc defined with Z0
==============
| BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
The `Microstripline` object is defined in the `structure` module of the `skmd` library. With the above command, we have defined an _msl_ by giving the characteristic impedance $Z_0$ together with a text identifier 'Line-abc'. The library computes the required line width to achieve the desired characteristic impedance for the given substrate thickness and dielectric constant. Therefore this code can also be used to obtain the design parameters for a desired specification. The computed width of the microstrip line is stored in the attribute `w` of the `Microstripline` object and can be displayed with `print(msl1.w)`. The units are meters. You can also print all the specifications with `msl1.print_specs()`. | msl1.print_specs() | --------- Line-abc Specifications---------
-----Substrate-----
Epsilon_r 10.8
substrate thickness 0.00127
-------------------
line width W= 0.00019296747453793648
Characteristics impedance= 93
Length of the line = 1
Effective dielectric constant er_eff = 6.555924417931664
Frequency defined ?: False
-------------------
| BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
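As noted above, the synthesized width sits in the `w` attribute (in meters); a one-line check, reusing the `md.MILLI` constant from the imports:

```python
# Width required for a 93-ohm line on this substrate, converted to millimetres.
print('w = {:.4f} mm'.format(msl1.w / md.MILLI))   # ~0.1930 mm, matching the printed specifications
```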
2. Defining the msl by width. We can also define the msl by giving the width at the time of definition. The characteristic impedance will be computed by the code in this case. | msl2 = md.structure.Microstripline(er=epsilon_r,h=h_subs,w = 1.1*md.MILLI,text_tag='Line-xyz')
msl2.print_specs() | ============
Defining Line-xyz
Line-xyz defined with width.
==============
--------- Line-xyz Specifications---------
-----Substrate-----
Epsilon_r 10.8
substrate thickness 0.00127
-------------------
line width W= 0.0011
Characteristics impedance= 50.466917262179905
Length of the line = 1
Effective dielectric constant er_eff = 7.12610312997174
Frequency defined ?: False
-------------------
| BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
At least one of width or characteristic impedance must be defined, otherwise an error is raised. If both the characteristic impedance and the width are given, then the width is used in the definition and the characteristic impedance is computed. Defining the frequency range and network parameters for the microstrip line: we can also give the frequency values at which we want to perform the analysis. When frequency values are given, the corresponding two-port microwave `network` object also gets defined for the microstrip transmission line. If the length of the transmission line is not defined, a default length of 1 meter is assumed. The frequency can be defined at the time of the `Microstripline` definition, or can be added later using the object method `fun_add_frequency(omega)`. However, it is recommended to define it during the initial object definition itself. | msl3 = md.structure.Microstripline(er=epsilon_r,h=h_subs,w = 1.1*md.MILLI,omega = omega,text_tag='Line-with-frequency')
msl3.print_specs()
# msl.
msl2.print_specs()
msl2.fun_add_frequency(omega)
| --------- Line-xyz Specifications---------
-----Substrate-----
Epsilon_r 10.8
substrate thickness 0.00127
-------------------
line width W= 0.0011
Characteristics impedance= 50.466917262179905
Length of the line = 1
Effective dielectric constant er_eff = 7.12610312997174
Frequency defined ?: False
-------------------
Frequency added (override old values). Network defined.
| BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
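Once a frequency vector is attached, the `NW` attribute holds the two-port network of the line, so its S-parameters can be inspected directly. The short sketch below assumes the same `S11`/`S21` attributes and `md.omega2f` helper that the filter example further down relies on:

```python
# Magnitude of the S-parameters of the line section defined with a frequency vector.
plt.plot(md.omega2f(omega)/md.GIGA, np.abs(msl3.NW.S11), label='$|S_{11}|$')
plt.plot(md.omega2f(omega)/md.GIGA, np.abs(msl3.NW.S21), label='$|S_{21}|$')
plt.xlabel('Frequency (GHz)')
plt.legend()
plt.grid(1)
```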
Microstrip-line filters. Designing microstrip-line filters and computing their response analytically becomes very simple with the `scikit-microwave-design` library. Since a microwave network object is created for each microstrip-line section, implementing and testing filters is a matter of a few lines of code. In addition, the plotting features available in the `plot` module of the `skmd` library make visualization of the filter response very easy. Open quarter-wave stub filter. Let us design a T-shaped open-stub filter, which acts as a notch filter at the quarter-wavelength frequency. If the resonant frequency is $f_0$, then the length of the open stub corresponding to the resonant frequency is given by $l_{stub} = \frac{\lambda_0}{4}$, where $\lambda_0 = \frac{c}{f_0\sqrt{\epsilon_{eff}}}$. Here, note that $\epsilon_{eff}$ is the effective dielectric constant of the substrate for a given width. If the characteristic impedance, and hence the corresponding width, changes, then the effective dielectric constant, and therefore the effective electrical length of the stub, will also change. Using this library it is very easy to take care of these issues. For example, the library can compute for us the required stub length for a desired combination of characteristic impedance and resonant frequency. The following code implements a simple quarter-wave stub filter; for different values of characteristic impedance and the corresponding stub widths, the required stub length can be recomputed to keep the resonant frequency fixed. | f0 = 1.5*md.GIGA
omega0 = md.f2omega(f0)
msl_Tx1 = md.structure.Microstripline(er=epsilon_r,h=h_subs,w=1.1*md.MILLI,l=5*md.MILLI,text_tag='Left-line',omega=omega)
msl_Tx2 = md.structure.Microstripline(er=epsilon_r,h=h_subs,w=1.1*md.MILLI,l=5*md.MILLI,text_tag='Right-line',omega=omega)
msl_Tx1.print_specs()
w_stub = 1*md.MILLI
lambda_g_stub = md.structure.Microstripline(er=epsilon_r,h=h_subs,w=w_stub,text_tag='stub-resonant-length',omega=omega0).lambda_g
w_stub = 1*md.MILLI
L_stub = lambda_g_stub/4
msl_stub = md.structure.Microstripline(er=epsilon_r,h=h_subs,w=w_stub,l=L_stub,text_tag='stub',omega=omega)
def define_NW_for_stub(msl_stub,ZL_stub):
Y_stub = msl_stub.NW.input_admittance(1/ZL_stub)
NW_stub = md.network.from_shunt_Y(Y_stub)
return NW_stub
ZL_stub = md.OPEN
NW_stub = md.network.from_shunt_Y(msl_stub.NW.input_admittance(1/ZL_stub))
NW_filter = msl_Tx1.NW*NW_stub*msl_Tx2.NW
## Plot commands
fig1 = plt.figure('LPF')
ax1_f1 = fig1.add_subplot(111)
ax1_f1.plot(omega/(2*np.pi*md.GIGA),np.abs(NW_filter.S11),linewidth='3',label='$|S_{11}|$')
ax1_f1.plot(omega/(2*np.pi*md.GIGA),np.abs(NW_filter.S21),linewidth='3',label='$|S_{21}|$')
ax1_f1.grid(1)
ax1_f1.legend()
fig2 = plt.figure('LPF-mag-phase')
ax1_f2 = fig2.add_subplot(311)
ax1_cmap_f2 = fig2.add_axes([0.92, 0.1, 0.02, 0.7])
ax2_f2 = fig2.add_subplot(313)
md.plot.plot_colored_line(md.omega2f(omega)/md.GIGA,np.abs(NW_filter.S11),np.angle(NW_filter.S11)*180/np.pi,ax=ax1_f2,color_axis = ax1_cmap_f2)
md.plot.plot_colored_line(md.omega2f(omega)/md.GIGA,np.abs(NW_filter.S21),np.angle(NW_filter.S21)*180/np.pi,ax=ax2_f2)
ax1_f2.grid(1)
ax2_f2.grid(1)
fig3 = plt.figure('LPF-mag-phase-dB')
ax1_f3 = fig3.add_subplot(311)
ax1_cmap_f3 = fig3.add_axes([0.92, 0.5, 0.02, 0.3])
ax2_f3 = fig3.add_subplot(313)
ax2_cmap_f3 = fig3.add_axes([0.92, 0.1, 0.02, 0.3])
md.plot.plot_colored_line(md.omega2f(omega)/md.GIGA,md.dB_mag(NW_filter.S21),np.angle(NW_filter.S21)*180/np.pi,ax=ax1_f3,color_axis = ax1_cmap_f3)
md.plot.plot_colored_line(md.omega2f(omega)/md.GIGA,np.rad2deg(np.angle(NW_filter.S21)),md.dB_mag(NW_filter.S21),ax=ax2_f3,color_axis = ax2_cmap_f3)
ax1_f3.grid(1)
ax2_f3.grid(1)
fig4 = plt.figure('Smith-chart')
ax1_f4 = md.plot.plot_smith_chart(md.omega2f(omega)/md.GIGA,NW_filter.S21,fig4,use_colormap='inferno',linewidth=10)
# ax1_f4 = md.plot.plot_smith_chart(md.omega2f(omega)/md.GIGA,NW_filter.S11,fig4,use_colormap='inferno',linewidth=10)
# snap_cursor_2 = md.plot.SnaptoCursor_polar(ax1_f4,md.omega2f(omega), NW_filter.S21)
# fig4.canvas.mpl_connect('motion_notify_event', snap_cursor_2.mouse_move) | _____no_output_____ | BSD-3-Clause | tests/Demo_Microstripline.ipynb | sarang-IITKgp/scikit-microwave-design |
`Sampler` | import sys
sys.path.append('../..')
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import pandas as pd | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Intro Welcome! In this section you'll learn about `Sampler`-class. Instances of `Sampler` can be used for flexible sampling of multivariate distributions.To begin with, `Sampler` gives rise to several building-blocks classes such as- `NumpySampler`, or `NS`- `ScipySampler` - `SS`What's more, `Sampler` incorporates a set of operations on `Sampler`-instances, among which are- "`|`" for building a mixture of two samplers: `s = s1 | s2`- "`&`" for setting a mixture-weight of a sampler: `s = 0.6 & s1 | 0.4 & s2`- " `truncate`" for truncating the support of underlying sampler's distribution: `s.truncate(high=[1.0, 1.5])`- ..all arithmetic operations: `s = s1 + s2` or `s = s1 + 0.5`These operations can be used for combining building-blocks samplers into complex multivariate-samplers, just like that: | from batchflow import NumpySampler as NS
# truncated normal and uniform
ns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4
ns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)
ns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, 0)
ns4 = ((NS('n', dim=2).truncate(2.5, expr=lambda m: np.sum(np.square(m), axis=1)) * 4)
.apply(lambda m: m.astype(np.int)) / 4 + (0, 3))
# a mixture of all four
ns = 0.4 & ns1 | 0.2 & ns2 | 0.39 & ns3 | 0.01 & ns4
# take a look at the heatmap of our sampler:
h = np.histogramdd(ns.sample(int(1e6)), bins=100, normed=True)
plt.imshow(h[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Building `Samplers` 1. Numpy, Scipy, TensorFlow - `Samplers` To build a `NumpySampler`(`NS`) you need to specify a name of distribution from `numpy.random` (or its [alias](https://github.com/analysiscenter/batchflow/blob/master/batchflow/sampler.pyL15)) and the number of independent dimensions: | from batchflow import NumpySampler as NS
ns = NS('n', dim=2) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
take a look at a sample generated by our sampler: | smp = ns.sample(size=200)
plt.scatter(*np.transpose(smp)) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
The same goes for `ScipySampler` based on `scipy.stats`-distributions, or `SS` ("mvn" stands for multivariate-normal): | from batchflow import ScipySampler as SS
ss = SS('mvn', mean=[0, 0], cov=[[2, 1], [1, 2]]) # note also that you can pass the same params as in
smp = ss.sample(2000) # scipy.sample.multivariate_normal, such as `mean` and `cov`
plt.scatter(*np.transpose(smp)) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
2. `HistoSampler` as an estimate of a distribution generating a cloud of points `HistoSampler`, or `HS` can be used for building samplers, with underlying distributions given by a histogram. You can either pass a `np.histogram`-output into the initialization of `HS` | from batchflow import HistoSampler as HS
histo = np.histogramdd(ss.sample(1000000))
hs = HS(histo)
plt.scatter(*np.transpose(hs.sample(150))) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
...or you can specify empty bins and estimate its weights using a method `HS.update` and a cloud of points: | hs = HS(edges=2 * [np.linspace(-4, 4)])
hs.update(ss.sample(1000000))
plt.imshow(hs.bins, interpolation='bilinear') | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
3. Algebra of `Samplers`; operations on `Samplers` `Sampler`-instances support artithmetic operations (`+`, `*`, `-`,...). Arithmetics works on either* (`Sampler`, `Sampler`) - pair* (`Sampler`, `array-like`) - pair | # blur using "+"
u = NS('u', dim=2)
noise = NS('n', dim=2)
blurred = u + noise * 0.2 # decrease the magnitude of the noise
both = blurred | u + (2, 2)
plt.imshow(np.histogramdd(both.sample(1000000), bins=100)[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
You may also want to truncate a sampler's distribution so that sampling points belong to a specific region. The common use-case is to sample normal points inside a box...or, inside a ring: | n = NS('n', dim=2).truncate(3, 0.3, expr=lambda m: np.sum(m**2, axis=1))
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Not infrequently you need to obtain "normal" sample in integers. For this you can use `Sampler.apply` method: | n = (4 * NS('n', dim=2)).apply(lambda m: m.astype(np.int)).truncate([6, 6], [-6, -6])
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Note that `Sampler.apply`-method allows you to add an arbitrary transformation to a sampler. For instance, [Box-Muller](https://en.wikipedia.org/wiki/Box–Muller_transform) transform: | bm = lambda vec2: np.sqrt(-2 * np.log(vec2[:, 0:1])) * np.concatenate([np.cos(2 * np.pi * vec2[:, 1:2]),
np.sin(2 * np.pi * vec2[:, 1:2])], axis=1)
n = NS('u', dim=2).apply(bm)
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0]) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |
Another useful thing is coordinate stacking ("&" stands for multiplication of distribution functions): | n, u = NS('n'), SS('u') # initialize one-dimensional notrmal and uniform samplers
s = n & u # stack them together
s.sample(3) | _____no_output_____ | Apache-2.0 | examples/tutorials/07_sampler.ipynb | abrikoseg/batchflow |